1. Weber S, Wyszynski M, Godefroid M, Plattfaut R, Niehaves B. How do medical professionals make sense (or not) of AI? A social-media-based computational grounded theory study and an online survey. Comput Struct Biotechnol J 2024; 24:146-159. [PMID: 38434249] [PMCID: PMC10904922] [DOI: 10.1016/j.csbj.2024.02.009]
Abstract
To investigate the opinions and attitudes of medical professionals towards adopting AI-enabled healthcare technologies in their daily business, we used a mixed-methods approach. Study 1 employed a qualitative computational grounded theory approach, analyzing 181 Reddit threads in several subreddits of r/medicine. Using an unsupervised machine learning clustering method, we identified three key themes: (1) consequences of AI, (2) the physician-AI relationship, and (3) a proposed way forward. In particular, posts related to the first two themes indicated that medical professionals' fear of being replaced by AI, and their skepticism toward AI, played a major role in the arguments. Moreover, the results suggest that this fear is driven by little or moderate knowledge about AI. Posts related to the third theme focused on factual discussions about how AI and medicine have to be designed to become broadly adopted in health care. Study 2 quantitatively examined the relationship between the fear of AI, knowledge about AI, and medical professionals' intention to use AI-enabled technologies in more detail. Results based on a sample of 223 medical professionals who participated in the online survey revealed that the intention to use AI technologies increases with increasing knowledge about AI and that this effect is moderated by the fear of being replaced by AI.
Affiliation(s)
- Sebastian Weber
- University of Bremen, Digital Public, Bibliothekstr. 1, 28359 Bremen, Germany
- Marc Wyszynski
- University of Bremen, Digital Public, Bibliothekstr. 1, 28359 Bremen, Germany
- Marie Godefroid
- University of Siegen, Information Systems, Kohlbettstr. 15, 57072 Siegen, Germany
- Ralf Plattfaut
- University of Duisburg-Essen, Information Systems and Transformation Management, Universitätsstr. 9, 45141 Essen, Germany
- Bjoern Niehaves
- University of Bremen, Digital Public, Bibliothekstr. 1, 28359 Bremen, Germany
2. Ingvar Å, Oloruntoba A, Sashindranath M, Miller R, Soyer HP, Guitera P, Caccetta T, Shumack S, Abbott L, Arnold C, Lawn C, Button-Sloan A, Janda M, Mar V. Minimum labelling requirements for dermatology artificial intelligence-based Software as Medical Device (SaMD): A consensus statement. Australas J Dermatol 2024; 65:e21-e29. [PMID: 38419186] [DOI: 10.1111/ajd.14222]
Abstract
BACKGROUND/OBJECTIVES Artificial intelligence (AI) holds remarkable potential to improve care delivery in dermatology. End users (health professionals and the general public) of AI-based Software as Medical Devices (SaMD) require relevant labelling information to ensure that these devices can be used appropriately. Currently, there are no clear minimum labelling requirements for dermatology AI-based SaMDs. METHODS Common labelling recommendations for AI-based SaMD identified in a recent literature review were evaluated by an Australian expert panel in digital health and dermatology via a modified Delphi consensus process. A nine-point Likert scale was used to indicate the importance of 10 items, and voting was conducted to determine the specific characteristics to include for some items. Consensus was achieved when more than 75% of the experts agreed that inclusion of information was necessary. RESULTS There was robust consensus supporting inclusion of all proposed items as minimum labelling requirements: indication for use, intended user, training and test data sets, algorithm design, image processing techniques, clinical validation, performance metrics, limitations, updates and adverse events. Nearly all suggested characteristics of the labelling items received endorsement, except for some characteristics related to performance metrics. Moreover, there was consensus that uniform labelling criteria should apply across all AI categories and risk classes set out by the Therapeutic Goods Administration. CONCLUSIONS This study provides critical evidence for setting labelling standards by the Therapeutic Goods Administration to safeguard patients, health professionals, consumers, industry, and regulatory bodies from AI-based dermatology SaMDs that do not currently provide adequate information about how they were developed and tested.
Affiliation(s)
- Åsa Ingvar
- Victorian Melanoma Service, Alfred Health, Melbourne, Victoria, Australia
- School of Public Health and Preventive Medicine, Monash University, Melbourne, Victoria, Australia
- Department of Dermatology, Skåne University Hospital, Lund, Sweden
- Department of Clinical Sciences, Lund University, Lund, Sweden
- Maithili Sashindranath
- School of Public Health and Preventive Medicine, Monash University, Melbourne, Victoria, Australia
- Robert Miller
- Australasian College of Dermatologists, Sydney, Australia
- H Peter Soyer
- Australasian College of Dermatologists, Sydney, Australia
- Dermatology Research Centre, Frazer Institute, The University of Queensland, Brisbane, Queensland, Australia
- Pascale Guitera
- Australasian College of Dermatologists, Sydney, Australia
- Faculty of Medicine and Health, The University of Sydney, Sydney, New South Wales, Australia
- Sydney Melanoma Diagnostic Centre, Royal Prince Alfred Hospital, Camperdown, New South Wales, Australia
- Melanoma Institute Australia, The University of Sydney, Sydney, New South Wales, Australia
- Tony Caccetta
- Australasian College of Dermatologists, Sydney, Australia
- Perth Dermatology Clinic, Perth, Western Australia, Australia
- Stephen Shumack
- Australasian College of Dermatologists, Sydney, Australia
- Royal North Shore Hospital of Sydney, Sydney, New South Wales, Australia
- Lisa Abbott
- Australasian College of Dermatologists, Sydney, Australia
- Faculty of Medicine and Health, The University of Sydney, Sydney, New South Wales, Australia
- The Skin Hospital, Sydney, New South Wales, Australia
- Chris Arnold
- BioGrid Australia Ltd, Melbourne, Australia
- Hodgson Associates, Melbourne, Australia
- Australasian Society of Cosmetic Dermatologists, Melbourne, Australia
- Craig Lawn
- Melanoma Institute Australia, The University of Sydney, Sydney, New South Wales, Australia
- Centre of Excellence in Melanoma Imaging, Brisbane, Queensland, Australia
- Monika Janda
- Australasian College of Dermatologists, Sydney, Australia
- Dermatology Research Centre, Frazer Institute, The University of Queensland, Brisbane, Queensland, Australia
- Centre for Health Services Research, The University of Queensland, Brisbane, Queensland, Australia
- Victoria Mar
- Victorian Melanoma Service, Alfred Health, Melbourne, Victoria, Australia
- School of Public Health and Preventive Medicine, Monash University, Melbourne, Victoria, Australia
- Australasian College of Dermatologists, Sydney, Australia
3. Wenderott K, Krups J, Luetkens JA, Weigl M. Radiologists' perspectives on the workflow integration of an artificial intelligence-based computer-aided detection system: A qualitative study. Appl Ergon 2024; 117:104243. [PMID: 38306741] [DOI: 10.1016/j.apergo.2024.104243]
Abstract
In healthcare, artificial intelligence (AI) is expected to improve work processes, yet most research focuses on the technical features of AI rather than its real-world clinical implementation. To evaluate the implementation process of an AI-based computer-aided detection system (AI-CAD) for prostate MRI readings, we interviewed German radiologists in a pre-post design. We embedded our findings in the Model of Workflow Integration and the Technology Acceptance Model to analyze workflow effects, facilitators, and barriers. The most prominent barriers were: (i) a time delay in the work process, (ii) additional work steps to be taken, and (iii) an unstable performance of the AI-CAD. Most frequently named facilitators were (i) good self-organization, and (ii) good usability of the software. Our results underline the importance of a holistic approach to AI implementation considering the sociotechnical work system and provide valuable insights into key factors of the successful adoption of AI technologies in work systems.
Affiliation(s)
- Katharina Wenderott
- Institute for Patient Safety, University Hospital Bonn, Venusberg-Campus 1, 53127, Bonn, Germany
- Jim Krups
- Institute for Patient Safety, University Hospital Bonn, Venusberg-Campus 1, 53127, Bonn, Germany
- Julian A Luetkens
- Department of Diagnostic and Interventional Radiology, University Hospital Bonn, Germany; Quantitative Imaging Lab Bonn (QILaB), University Hospital Bonn, Germany
- Matthias Weigl
- Institute for Patient Safety, University Hospital Bonn, Venusberg-Campus 1, 53127, Bonn, Germany
4. Wei ML, Tada M, So A, Torres R. Artificial intelligence and skin cancer. Front Med (Lausanne) 2024; 11:1331895. [PMID: 38566925] [PMCID: PMC10985205] [DOI: 10.3389/fmed.2024.1331895]
Abstract
Artificial intelligence is poised to rapidly reshape many fields, including that of skin cancer screening and diagnosis, both as a disruptive and assistive technology. Together with the collection and availability of large medical data sets, artificial intelligence will become a powerful tool that can be leveraged by physicians in their diagnoses and treatment plans for patients. This comprehensive review focuses on current progress toward AI applications for patients, primary care providers, dermatologists, and dermatopathologists, explores the diverse applications of image and molecular processing for skin cancer, and highlights AI's potential for patient self-screening and improving diagnostic accuracy for non-dermatologists. We additionally delve into the challenges and barriers to clinical implementation, paths forward for implementation and areas of active research.
Affiliation(s)
- Maria L. Wei
- Department of Dermatology, University of California, San Francisco, San Francisco, CA, United States
- Dermatology Service, San Francisco VA Health Care System, San Francisco, CA, United States
- Mikio Tada
- Institute for Neurodegenerative Diseases, University of California, San Francisco, San Francisco, CA, United States
- Alexandra So
- School of Medicine, University of California, San Francisco, San Francisco, CA, United States
- Rodrigo Torres
- Dermatology Service, San Francisco VA Health Care System, San Francisco, CA, United States
5. Hashimoto DA, Sambasastry SK, Singh V, Kurada S, Altieri M, Yoshida T, Madani A, Jogan M. A foundation for evaluating the surgical artificial intelligence literature. Eur J Surg Oncol 2024:108014. [PMID: 38360498] [DOI: 10.1016/j.ejso.2024.108014]
Abstract
With increasing growth in applications of artificial intelligence (AI) in surgery, it has become essential for surgeons to gain a foundation of knowledge to critically appraise the scientific literature, commercial claims regarding products, and regulatory and legal frameworks that govern the development and use of AI. This guide offers surgeons a framework with which to evaluate manuscripts that incorporate the use of AI. It provides a glossary of common terms, an overview of prerequisite knowledge to maximize understanding of methodology, and recommendations on how to carefully consider each element of a manuscript to assess the quality of the data on which an algorithm was trained, the appropriateness of the methodological approach, the potential for reproducibility of the experiment, and the applicability to surgical practice, including considerations on generalizability and scalability.
Affiliation(s)
- Daniel A Hashimoto
- Penn Computer Assisted Surgery and Outcomes Laboratory, Department of Surgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA, USA; Global Surgical AI Collaborative, Toronto, ON, Canada
- Sai Koushik Sambasastry
- Penn Computer Assisted Surgery and Outcomes Laboratory, Department of Surgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA, USA
- Vivek Singh
- Penn Computer Assisted Surgery and Outcomes Laboratory, Department of Surgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Sruthi Kurada
- Penn Computer Assisted Surgery and Outcomes Laboratory, Department of Surgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA, USA
- Maria Altieri
- Penn Computer Assisted Surgery and Outcomes Laboratory, Department of Surgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Global Surgical AI Collaborative, Toronto, ON, Canada
- Takuto Yoshida
- Surgical AI Research Academy, Department of Surgery, University Health Network, Toronto, ON, Canada
- Amin Madani
- Global Surgical AI Collaborative, Toronto, ON, Canada; Surgical AI Research Academy, Department of Surgery, University Health Network, Toronto, ON, Canada
- Matjaz Jogan
- Penn Computer Assisted Surgery and Outcomes Laboratory, Department of Surgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
6. Sibbald M, Zwaan L, Yilmaz Y, Lal S. Incorporating artificial intelligence in medical diagnosis: A case for an invisible and (un)disruptive approach. J Eval Clin Pract 2024; 30:3-8. [PMID: 35761764] [DOI: 10.1111/jep.13730]
Abstract
As big data becomes more publicly accessible, artificial intelligence (AI) is increasingly available and applicable to problems around clinical decision-making. Yet the adoption of AI technology in healthcare lags well behind other industries. The gap between what technology could do and what it is actually being used for is rapidly widening. While many solutions have been proposed to address this gap, clinician resistance to the adoption of AI remains high. To aid with change, we propose facilitating clinician decisions through technology by seamlessly weaving what we call 'invisible AI' into existing clinician workflows, rather than sequencing new steps into clinical processes. We explore evidence from the change management and human factors literature to conceptualize a new approach to AI implementation in health organizations. We discuss challenges and provide recommendations for organizations to employ this strategy.
Affiliation(s)
- Matt Sibbald
- Department of Medicine, McMaster Education Research Innovation and Theory (MERIT) Program, Faculty of Health Sciences, McMaster University, Hamilton, ON, Canada
- Laura Zwaan
- Erasmus Medical Center, Institute of Medical Education Research Rotterdam (iMERR), Rotterdam, The Netherlands
- Yusuf Yilmaz
- McMaster Education Research Innovation and Theory (MERIT) Program, Faculty of Health Sciences, McMaster University, Hamilton, ON, Canada
- Continuing Professional Development Office, Faculty of Health Sciences, McMaster University, Hamilton, ON, Canada
- Department of Medical Education, Faculty of Medicine, Ege University, Izmir, Turkey
- Sarrah Lal
- Department of Medicine, Division of Innovation and Education, McMaster University, Hamilton, ON, Canada
7. Evans H, Snead D. Why do errors arise in artificial intelligence diagnostic tools in histopathology and how can we minimize them? Histopathology 2024; 84:279-287. [PMID: 37921030] [DOI: 10.1111/his.15071]
Abstract
Artificial intelligence (AI)-based diagnostic tools can offer numerous benefits to the field of histopathology, including improved diagnostic accuracy, efficiency and productivity. As a result, such tools are likely to have an increasing role in routine practice. However, all AI tools are prone to errors, and these AI-associated errors have been identified as a major risk in the introduction of AI into healthcare. The errors made by AI tools are different, in terms of both cause and nature, to the errors made by human pathologists. As highlighted by the National Institute for Health and Care Excellence, it is imperative that practising pathologists understand the potential limitations of AI tools, including the errors made. Pathologists are in a unique position to be gatekeepers of AI tool use, maximizing patient benefit while minimizing harm. Furthermore, their pathological knowledge is essential to understanding when, and why, errors have occurred and so to developing safer future algorithms. This paper summarises the literature on errors made by AI diagnostic tools in histopathology. These include erroneous errors, data concerns (data bias, hidden stratification, data imbalances, distributional shift, and lack of generalisability), reinforcement of outdated practices, unsafe failure mode, automation bias, and insensitivity to impact. Methods to reduce errors in both tool design and clinical use are discussed, and the practical roles for pathologists in error minimisation are highlighted. This aims to inform and empower pathologists to move safely through this seismic change in practice and help ensure that novel AI tools are adopted safely.
Affiliation(s)
- Harriet Evans
- Histopathology Department, University Hospitals Coventry and Warwickshire NHS Trust, Coventry, UK
- Warwick Medical School, University of Warwick, Coventry, UK
- David Snead
- Histopathology Department, University Hospitals Coventry and Warwickshire NHS Trust, Coventry, UK
- Warwick Medical School, University of Warwick, Coventry, UK
8. Vijayakumar S, Lee VV, Leong QY, Hong SJ, Blasiak A, Ho D. Physicians' Perspectives on AI in Clinical Decision Support Systems: Interview Study of the CURATE.AI Personalized Dose Optimization Platform. JMIR Hum Factors 2023; 10:e48476. [PMID: 37902825] [PMCID: PMC10644191] [DOI: 10.2196/48476]
Abstract
BACKGROUND Physicians play a key role in integrating new clinical technology into care practices through user feedback and growth propositions to developers of the technology. As physicians are stakeholders involved throughout the technology iteration process, understanding their roles as users can provide nuanced insights into the workings of the technologies being explored. Understanding physicians' perceptions can therefore be critical to clinical validation, implementation, and downstream adoption. Given the increasing prevalence of clinical decision support systems (CDSSs), there remains a need to gain an in-depth understanding of physicians' perceptions of and expectations toward their downstream implementation. This paper explores physicians' perceptions of integrating CURATE.AI, a novel artificial intelligence (AI)-based, clinical-stage personalized dosing CDSS, into clinical practice. OBJECTIVE This study aims to understand physicians' perspectives on integrating CURATE.AI into clinical work and to gather insights on considerations for the implementation of AI-based CDSS tools. METHODS A total of 12 participants completed semistructured interviews examining their knowledge, experience, attitudes, risks, and future course of the personalized combination therapy dosing platform, CURATE.AI. Interviews were audio recorded, transcribed verbatim, and coded manually. The data were thematically analyzed. RESULTS Overall, 3 broad themes and 9 subthemes were identified through thematic analysis. The themes covered considerations that physicians perceived as significant across various stages of new technology development, including trial, clinical implementation, and mass adoption. CONCLUSIONS The study laid out the various ways physicians interpreted an AI-based personalized dosing CDSS, CURATE.AI, for their clinical practice. The research pointed out that physicians' expectations during the different stages of technology exploration can be nuanced and layered, with implementation expectations that are relevant for technology developers and researchers.
Affiliation(s)
- Smrithi Vijayakumar
- The N.1 Institute for Health, National University of Singapore, Singapore, Singapore
- V Vien Lee
- The N.1 Institute for Health, National University of Singapore, Singapore, Singapore
- Qiao Ying Leong
- The N.1 Institute for Health, National University of Singapore, Singapore, Singapore
- Soo Jung Hong
- Department of Communications and New Media, National University of Singapore, Singapore, Singapore
- Agata Blasiak
- The N.1 Institute for Health, National University of Singapore, Singapore, Singapore
- Department of Biomedical Engineering, National University of Singapore, Singapore, Singapore
- The Institute for Digital Medicine (WisDM), Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Department of Pharmacology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Dean Ho
- The N.1 Institute for Health, National University of Singapore, Singapore, Singapore
- Department of Biomedical Engineering, National University of Singapore, Singapore, Singapore
- The Institute for Digital Medicine (WisDM), Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Department of Pharmacology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
9. Cabral BP, Braga LAM, Syed-Abdul S, Mota FB. Future of Artificial Intelligence Applications in Cancer Care: A Global Cross-Sectional Survey of Researchers. Curr Oncol 2023; 30:3432-3446. [PMID: 36975473] [PMCID: PMC10047823] [DOI: 10.3390/curroncol30030260]
Abstract
Cancer significantly contributes to global mortality, with 9.3 million annual deaths. To alleviate this burden, the utilization of artificial intelligence (AI) applications has been proposed in various domains of oncology. However, the potential applications of AI and the barriers to its widespread adoption remain unclear. This study aimed to address this gap by conducting a cross-sectional, global, web-based survey of over 1000 AI and cancer researchers. The results indicated that most respondents believed AI would positively impact cancer grading and classification, follow-up services, and diagnostic accuracy. Despite these benefits, several limitations were identified, including difficulties incorporating AI into clinical practice and the lack of standardization in cancer health data. These limitations pose significant challenges, particularly regarding testing, validation, certification, and auditing AI algorithms and systems. The results of this study provide valuable insights for informed decision-making for stakeholders involved in AI and cancer research and development, including individual researchers and research funding agencies.
Affiliation(s)
- Luiza Amara Maciel Braga
- Laboratory of Cellular Communication, Oswaldo Cruz Institute, Oswaldo Cruz Foundation, Rio de Janeiro 21040-360, Brazil
- Shabbir Syed-Abdul
- Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei 110, Taiwan
- School of Gerontology and Long-Term Care, College of Nursing, Taipei Medical University, Taipei 110, Taiwan
- Correspondence: (S.S.-A.); (F.B.M.)
- Fabio Batista Mota
- Laboratory of Cellular Communication, Oswaldo Cruz Institute, Oswaldo Cruz Foundation, Rio de Janeiro 21040-360, Brazil
- Correspondence: (S.S.-A.); (F.B.M.)
10. Frisinger A, Papachristou P. The voice of healthcare: introducing digital decision support systems into clinical practice - a qualitative study. BMC Prim Care 2023; 24:67. [PMID: 36907875] [PMCID: PMC10008705] [DOI: 10.1186/s12875-023-02024-6]
Abstract
BACKGROUND There is a need to accelerate digital transformation in healthcare to meet increasing needs and demands. The accuracy of medical digital diagnosis tools is improving. The introduction of new technology in healthcare can, however, be challenging, and it is unclear how it should be done to achieve the desired results. The aim of this study was to explore perceptions and experiences of introducing new Information Technology (IT) in a primary healthcare organisation, exemplified with a Clinical Decision Support System (CDSS) for malignant melanoma. METHODS A qualitative interview-based study was performed in Region Stockholm, Sweden, with fifteen medical doctors representing three different organisational levels - primary care physician, primary healthcare centre manager, and regional manager/chief medical officer. In addition, one software provider was included. Interview data were analysed according to content analysis. RESULTS One central theme, "Introduction of digital CDSS in primary healthcare requires a multidimensional perspective and handling", emerged from the analysis, along with seven main categories and thirty-three subcategories. Digital transformation proved key for current healthcare providers to stay relevant and competitive. However, healthcare represents a closed community: highly capable but short of time and fostered to be sceptical of the new, which is why change needs to bring true value and be championed by people with a medical background who can motivate the powerful frontline. CONCLUSIONS This qualitative study revealed structured information about what goes wrong and right and what needs to be considered when driving digital change in primary care organisations. The task is complex, and listening to the voice of healthcare is valuable for understanding the conditions that must be fulfilled when adopting new technology into a healthcare organisation. By considering the findings of this study, upcoming digital transformations can improve their success rate. The information may also be used to develop a holistic approach or framework model, adapted to primary healthcare, that can support and accelerate the needed digitalisation of healthcare.
Affiliation(s)
- Ann Frisinger
- Study Programme in Medicine, Karolinska Institutet, Stockholm, Sweden
- Panagiotis Papachristou
- Division of Family Medicine and Primary Care, Department of Neurobiology, Care Sciences and Society, Karolinska Institutet, SE-141 83, Stockholm, Sweden
11. Nittas V, Daniore P, Landers C, Gille F, Amann J, Hubbs S, Puhan MA, Vayena E, Blasimme A. Beyond high hopes: A scoping review of the 2019-2021 scientific discourse on machine learning in medical imaging. PLOS Digit Health 2023; 2:e0000189. [PMID: 36812620] [PMCID: PMC9931290] [DOI: 10.1371/journal.pdig.0000189]
Abstract
Machine learning has become a key driver of the digital health revolution. That comes with a fair share of high hopes and hype. We conducted a scoping review on machine learning in medical imaging, providing a comprehensive outlook on the field's potential, limitations, and future directions. The most reported strengths and promises included improved (a) analytic power, (b) efficiency, (c) decision making, and (d) equity. The most reported challenges included (a) structural barriers and imaging heterogeneity, (b) scarcity of well-annotated, representative, and interconnected imaging datasets, (c) validity and performance limitations, including bias and equity issues, and (d) the still-missing clinical integration. The boundaries between strengths and challenges, with cross-cutting ethical and regulatory implications, remain blurred. The literature emphasizes explainability and trustworthiness, with a largely missing discussion of the specific technical and regulatory challenges surrounding these concepts. Future trends are expected to shift towards multi-source models, combining imaging with an array of other data, in a more open-access and explainable manner.
Affiliation(s)
- Vasileios Nittas
- Health Ethics and Policy Lab, Department of Health Sciences and Technology, Swiss Federal Institute of Technology (ETH Zurich), Zurich, Switzerland
- Epidemiology, Biostatistics and Prevention Institute, Faculty of Medicine, Faculty of Science, University of Zurich, Zurich, Switzerland
- Paola Daniore
- Institute for Implementation Science in Health Care, Faculty of Medicine, University of Zurich, Switzerland
- Digital Society Initiative, University of Zurich, Switzerland
- Constantin Landers
- Health Ethics and Policy Lab, Department of Health Sciences and Technology, Swiss Federal Institute of Technology (ETH Zurich), Zurich, Switzerland
- Felix Gille
- Institute for Implementation Science in Health Care, Faculty of Medicine, University of Zurich, Switzerland
- Digital Society Initiative, University of Zurich, Switzerland
- Julia Amann
- Health Ethics and Policy Lab, Department of Health Sciences and Technology, Swiss Federal Institute of Technology (ETH Zurich), Zurich, Switzerland
- Shannon Hubbs
- Health Ethics and Policy Lab, Department of Health Sciences and Technology, Swiss Federal Institute of Technology (ETH Zurich), Zurich, Switzerland
- Milo Alan Puhan
- Epidemiology, Biostatistics and Prevention Institute, Faculty of Medicine, Faculty of Science, University of Zurich, Zurich, Switzerland
- Effy Vayena
- Health Ethics and Policy Lab, Department of Health Sciences and Technology, Swiss Federal Institute of Technology (ETH Zurich), Zurich, Switzerland
- Alessandro Blasimme
- Health Ethics and Policy Lab, Department of Health Sciences and Technology, Swiss Federal Institute of Technology (ETH Zurich), Zurich, Switzerland
12
Zhang X, Xie Z, Xiang Y, Baig I, Kozman M, Stender C, Giancardo L, Tao C. Issues in Melanoma Detection: Semisupervised Deep Learning Algorithm Development via a Combination of Human and Artificial Intelligence. JMIR DERMATOLOGY 2022; 5:e39113. [PMID: 37632881 PMCID: PMC10334941 DOI: 10.2196/39113] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2022] [Revised: 09/01/2022] [Accepted: 10/12/2022] [Indexed: 12/14/2022] Open
Abstract
BACKGROUND Automatic skin lesion recognition has been shown to be effective in increasing access to reliable dermatology evaluation; however, most existing algorithms rely solely on images. Many diagnostic rules, including the 3-point checklist, encode human knowledge and reflect the diagnostic process of human experts, yet are not considered by artificial intelligence algorithms. OBJECTIVE In this paper, we aimed to develop a semisupervised model that can not only integrate the dermoscopic features and scoring rule from the 3-point checklist but also automate the feature-annotation process. METHODS We first trained the semisupervised model on a small annotated data set with disease and dermoscopic feature labels, seeking to improve classification accuracy by integrating the 3-point checklist through a ranking loss function. We then used a large unlabeled data set, carrying only disease labels, to let the trained algorithm automatically classify skin lesions and their features. RESULTS After adding the 3-point checklist to our model, its performance for melanoma classification improved from a mean of 0.8867 (SD 0.0191) to 0.8943 (SD 0.0115) under 5-fold cross-validation. The trained semisupervised model can automatically detect the 3 dermoscopic features from the 3-point checklist, with best performances of 0.80 (area under the curve [AUC] 0.8380), 0.89 (AUC 0.9036), and 0.76 (AUC 0.8444), in some cases outperforming human annotators. CONCLUSIONS Our proposed semisupervised learning framework can help with the automatic diagnosis of skin disease through its ability to detect dermoscopic features and automate the label-annotation process. The framework can also combine semantic knowledge with a computer algorithm to arrive at a more accurate and more interpretable diagnostic result, which can be applied to broader use cases.
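The abstract mentions integrating the 3-point checklist via a ranking loss function but gives no implementation details. As a minimal, hypothetical sketch (the function name, weighting, and toy scale are assumptions; the actual model is a deep network, not this hand computation), a classification loss can be combined with a pairwise ranking hinge that rewards ordering predicted melanoma probabilities consistently with checklist scores:

```python
import math

def combined_loss(melanoma_probs, labels, checklist_scores,
                  margin=0.0, rank_weight=0.5):
    """Binary cross-entropy on disease labels plus a pairwise ranking
    hinge that encourages predicted melanoma probabilities to be
    ordered consistently with the 3-point checklist score (0-3).
    Illustrative only; not the paper's implementation."""
    # Cross-entropy over the labeled examples.
    ce = -sum(math.log(max(p if y == 1 else 1.0 - p, 1e-12))
              for p, y in zip(melanoma_probs, labels)) / len(labels)

    # Pairwise hinge: penalize pairs where the lesion with the higher
    # checklist score received the lower predicted probability.
    hinge, pairs = 0.0, 0
    for p_i, c_i in zip(melanoma_probs, checklist_scores):
        for p_j, c_j in zip(melanoma_probs, checklist_scores):
            if c_i > c_j:
                hinge += max(0.0, margin + p_j - p_i)
                pairs += 1
    return ce + rank_weight * (hinge / pairs if pairs else 0.0)
```

Predictions that agree with both the labels and the checklist ordering yield a lower loss than predictions that invert the checklist ordering, which is the behavior such a combined objective is meant to induce.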
Affiliation(s)
- Xinyuan Zhang
- School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, United States
- Ziqian Xie
- School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, United States
- Yang Xiang
- School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, United States
- Imran Baig
- McGovern Medical School, The University of Texas Health Science Center at Houston, Houston, TX, United States
- Mena Kozman
- McGovern Medical School, The University of Texas Health Science Center at Houston, Houston, TX, United States
- Carly Stender
- McGovern Medical School, The University of Texas Health Science Center at Houston, Houston, TX, United States
- Luca Giancardo
- School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, United States
- Cui Tao
- School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, United States
13
Oloruntoba AI, Vestergaard T, Nguyen TD, Yu Z, Sashindranath M, Betz-Stablein B, Soyer HP, Ge Z, Mar V. Assessing the Generalizability of Deep Learning Models Trained on Standardized and Nonstandardized Images and Their Performance Against Teledermatologists: Retrospective Comparative Study. JMIR DERMATOLOGY 2022. [DOI: 10.2196/35150] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
Abstract
Background
Convolutional neural networks (CNNs) are a type of artificial intelligence that shows promise as a diagnostic aid for skin cancer. However, the majority are trained using retrospective image data sets with varying image capture standardization.
Objective
The aim of our study was to use CNN models with the same architecture—trained on image sets acquired with either the same image capture device and technique (standardized) or with varied devices and capture techniques (nonstandardized)—and test variability in performance when classifying skin cancer images in different populations.
Methods
In all, 3 CNNs with the same architecture were trained. CNN nonstandardized (CNN-NS) was trained on 25,331 images taken from the International Skin Imaging Collaboration (ISIC) using different image capture devices. CNN standardized (CNN-S) was trained on 177,475 MoleMap images taken with the same capture device, and CNN standardized number 2 (CNN-S2) was trained on a subset of 25,331 standardized MoleMap images (matched for number and classes of training images to CNN-NS). These 3 models were then tested on 3 external test sets: 569 Danish images, the publicly available ISIC 2020 data set consisting of 33,126 images, and The University of Queensland (UQ) data set of 422 images. Primary outcome measures were sensitivity, specificity, and area under the receiver operating characteristic curve (AUROC). Teledermatology assessments available for the Danish data set were used to determine model performance compared to teledermatologists.
Results
When tested on the 569 Danish images, CNN-S achieved an AUROC of 0.861 (95% CI 0.830-0.889) and CNN-S2 achieved an AUROC of 0.831 (95% CI 0.798-0.861; standardized models), with both outperforming CNN-NS (nonstandardized model; P=.001 and P=.009, respectively), which achieved an AUROC of 0.759 (95% CI 0.722-0.794). When tested on 2 additional data sets (ISIC 2020 and UQ), CNN-S (P<.001 and P<.001, respectively) and CNN-S2 (P=.08 and P=.35, respectively) still outperformed CNN-NS. When the CNNs were matched to the mean sensitivity and specificity of the teledermatologists on the Danish data set, the models’ resultant sensitivities and specificities were surpassed by the teledermatologists. However, when compared to CNN-S, the differences were not statistically significant (sensitivity: P=.10; specificity: P=.053). Performance across all CNN models as well as teledermatologists was influenced by image quality.
Conclusions
CNNs trained on standardized images had improved performance and, therefore, greater generalizability in skin cancer classification when applied to unseen data sets. This finding is an important consideration for future algorithm development, regulation, and approval.
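The study's primary outcome measure, the area under the receiver operating characteristic curve (AUROC), can be computed directly from model scores as the Mann-Whitney probability that a randomly chosen malignant lesion scores above a randomly chosen benign one (ties counting half). This short self-contained sketch is for illustration only and is not code from the paper:

```python
def auroc(scores, labels):
    """AUROC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive scores higher than a randomly chosen
    negative, with ties counted as 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative")
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A perfectly separating classifier yields 1.0 and a random one about 0.5, which is the scale on which values such as 0.861 (CNN-S) versus 0.759 (CNN-NS) are compared.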
14
Vasey B, Nagendran M, Campbell B, Clifton DA, Collins GS, Denaxas S, Denniston AK, Faes L, Geerts B, Ibrahim M, Liu X, Mateen BA, Mathur P, McCradden MD, Morgan L, Ordish J, Rogers C, Saria S, Ting DSW, Watkinson P, Weber W, Wheatstone P, McCulloch P. Reporting guideline for the early stage clinical evaluation of decision support systems driven by artificial intelligence: DECIDE-AI. BMJ 2022; 377:e070904. [PMID: 35584845 PMCID: PMC9116198 DOI: 10.1136/bmj-2022-070904] [Citation(s) in RCA: 52] [Impact Index Per Article: 26.0] [Reference Citation Analysis] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 04/26/2022] [Indexed: 01/04/2023]
Affiliation(s)
- Baptiste Vasey
- Nuffield Department of Surgical Sciences, University of Oxford, Oxford, UK
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK
- Critical Care Research Group, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK
- Myura Nagendran
- UKRI Centre for Doctoral Training in AI for Healthcare, Imperial College London, London, UK
- Bruce Campbell
- University of Exeter Medical School, Exeter, UK
- Royal Devon and Exeter Hospital, Exeter, UK
- David A Clifton
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK
- Gary S Collins
- Centre for Statistics in Medicine, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford, Oxford, UK
- Spiros Denaxas
- Institute of Health Informatics, University College London, London, UK
- British Heart Foundation Data Science Centre, London, UK
- Health Data Research UK, London, UK
- UCL Hospitals Biomedical Research Centre, London, UK
- Alastair K Denniston
- University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
- Academic Unit of Ophthalmology, Institute of Inflammation and Ageing, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Livia Faes
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Mudathir Ibrahim
- Nuffield Department of Surgical Sciences, University of Oxford, Oxford, UK
- Department of Surgery, Maimonides Medical Center, New York, NY, USA
- Xiaoxuan Liu
- University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
- Academic Unit of Ophthalmology, Institute of Inflammation and Ageing, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK
- Bilal A Mateen
- Institute of Health Informatics, University College London, London, UK
- Wellcome Trust, London, UK
- Alan Turing Institute, London, UK
- Piyush Mathur
- Department of General Anesthesiology, Anesthesiology Institute, Cleveland Clinic, Cleveland, OH, USA
- Melissa D McCradden
- Hospital for Sick Children, Toronto, ON, Canada
- Dalla Lana School of Public Health, University of Toronto, Toronto, ON, Canada
- Johan Ordish
- The Medicines and Healthcare products Regulatory Agency, London, UK
- Suchi Saria
- Departments of Computer Science, Statistics, and Health Policy, and Division of Informatics, Johns Hopkins University, Baltimore, MD, USA
- Bayesian Health, New York, NY, USA
- Daniel S W Ting
- Singapore National Eye Center, Singapore Eye Research Institute, Singapore
- Duke-NUS Medical School, National University of Singapore, Singapore
- Peter Watkinson
- Critical Care Research Group, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK
- NIHR Biomedical Research Centre Oxford, Oxford University Hospitals NHS Trust, Oxford, UK
- Peter McCulloch
- Nuffield Department of Surgical Sciences, University of Oxford, Oxford, UK
15
Vasey B, Nagendran M, Campbell B, Clifton DA, Collins GS, Denaxas S, Denniston AK, Faes L, Geerts B, Ibrahim M, Liu X, Mateen BA, Mathur P, McCradden MD, Morgan L, Ordish J, Rogers C, Saria S, Ting DSW, Watkinson P, Weber W, Wheatstone P, McCulloch P. Reporting guideline for the early-stage clinical evaluation of decision support systems driven by artificial intelligence: DECIDE-AI. Nat Med 2022; 28:924-933. [PMID: 35585198 DOI: 10.1038/s41591-022-01772-9] [Citation(s) in RCA: 115] [Impact Index Per Article: 57.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2021] [Accepted: 03/03/2022] [Indexed: 12/31/2022]
Abstract
A growing number of artificial intelligence (AI)-based clinical decision support systems are showing promising performance in preclinical, in silico evaluation, but few have yet demonstrated real benefit to patient care. Early-stage clinical evaluation is important to assess an AI system's actual clinical performance at small scale, ensure its safety, evaluate the human factors surrounding its use and pave the way to further large-scale trials. However, the reporting of these early studies remains inadequate. The present statement provides a multi-stakeholder, consensus-based reporting guideline for the Developmental and Exploratory Clinical Investigations of DEcision support systems driven by Artificial Intelligence (DECIDE-AI). We conducted a two-round, modified Delphi process to collect and analyze expert opinion on the reporting of early clinical evaluation of AI systems. Experts were recruited from 20 pre-defined stakeholder categories. The final composition and wording of the guideline was determined at a virtual consensus meeting. The checklist and the Explanation & Elaboration (E&E) sections were refined based on feedback from a qualitative evaluation process. In total, 123 experts participated in the first round of Delphi, 138 in the second round, 16 in the consensus meeting and 16 in the qualitative evaluation. The DECIDE-AI reporting guideline comprises 17 AI-specific reporting items (made of 28 subitems) and ten generic reporting items, with an E&E paragraph provided for each. Through consultation and consensus with a range of stakeholders, we developed a guideline comprising key items that should be reported in early-stage clinical studies of AI-based decision support systems in healthcare. By providing an actionable checklist of minimal reporting items, the DECIDE-AI guideline will facilitate the appraisal of these studies and replicability of their findings.
Affiliation(s)
- Baptiste Vasey
- Nuffield Department of Surgical Sciences, University of Oxford, Oxford, UK
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK
- Critical Care Research Group, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK
- Myura Nagendran
- UKRI Centre for Doctoral Training in AI for Healthcare, Imperial College London, London, UK
- Bruce Campbell
- University of Exeter Medical School, Exeter, UK
- Royal Devon and Exeter Hospital, Exeter, UK
- David A Clifton
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK
- Gary S Collins
- Centre for Statistics in Medicine, Nuffield Department of Orthopaedics, Rheumatology & Musculoskeletal Sciences, University of Oxford, Oxford, UK
- Spiros Denaxas
- Institute of Health Informatics, University College London, London, UK
- British Heart Foundation Data Science Centre, London, UK
- Health Data Research UK, London, UK
- UCL Hospitals Biomedical Research Centre, London, UK
- Alastair K Denniston
- University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
- Academic Unit of Ophthalmology, Institute of Inflammation and Ageing, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Livia Faes
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Bart Geerts
- Healthplus.ai-R&D BV, Amsterdam, The Netherlands
- Mudathir Ibrahim
- Nuffield Department of Surgical Sciences, University of Oxford, Oxford, UK
- Department of Surgery, Maimonides Medical Center, Brooklyn, NY, USA
- Xiaoxuan Liu
- University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
- Academic Unit of Ophthalmology, Institute of Inflammation and Ageing, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK
- Bilal A Mateen
- Institute of Health Informatics, University College London, London, UK
- The Wellcome Trust, London, UK
- The Alan Turing Institute, London, UK
- Piyush Mathur
- Department of General Anesthesiology, Anesthesiology Institute, Cleveland Clinic, Cleveland, OH, USA
- Melissa D McCradden
- The Hospital for Sick Children, Toronto, ON, Canada
- Dalla Lana School of Public Health, University of Toronto, Toronto, ON, Canada
- Johan Ordish
- Medicines and Healthcare products Regulatory Agency, London, UK
- Suchi Saria
- Departments of Computer Science, Statistics, and Health Policy, and Division of Informatics, Johns Hopkins University, Baltimore, MD, USA
- Bayesian Health, New York, NY, USA
- Daniel S W Ting
- Singapore National Eye Center, Singapore Eye Research Institute, Singapore, Singapore
- Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
- Peter Watkinson
- Critical Care Research Group, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK
- NIHR Biomedical Research Centre Oxford, Oxford University Hospitals NHS Trust, Oxford, UK
- Peter McCulloch
- Nuffield Department of Surgical Sciences, University of Oxford, Oxford, UK
16
17
Buck C, Doctor E, Hennrich J, Jöhnk J, Eymann T. General Practitioners' Attitudes Toward Artificial Intelligence-Enabled Systems: Interview Study. J Med Internet Res 2022; 24:e28916. [PMID: 35084342 PMCID: PMC8832268 DOI: 10.2196/28916] [Citation(s) in RCA: 25] [Impact Index Per Article: 12.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2021] [Revised: 06/24/2021] [Accepted: 11/21/2021] [Indexed: 01/14/2023] Open
Abstract
Background General practitioners (GPs) care for a large number of patients with various diseases in very short timeframes under high uncertainty. Thus, systems enabled by artificial intelligence (AI) are promising and time-saving solutions that may increase the quality of care. Objective This study aims to understand GPs’ attitudes toward AI-enabled systems in medical diagnosis. Methods We interviewed 18 GPs from Germany between March 2020 and May 2020 to identify determinants of GPs’ attitudes toward AI-based systems in diagnosis. By analyzing the interview transcripts, we identified 307 open codes, which we then further structured to derive relevant attitude determinants. Results We merged the open codes into 21 concepts and finally into five categories: concerns, expectations, environmental influences, individual characteristics, and minimum requirements of AI-enabled systems. Concerns included all doubts and fears of the participants regarding AI-enabled systems. Expectations reflected GPs’ thoughts and beliefs about expected benefits and limitations of AI-enabled systems in terms of GP care. Environmental influences included influences resulting from an evolving working environment, key stakeholders’ perspectives and opinions, the available information technology hardware and software resources, and the media environment. Individual characteristics were determinants that describe a physician as a person, including character traits, demographic characteristics, and knowledge. In addition, the interviews also revealed the minimum requirements of AI-enabled systems, which were preconditions that must be met for GPs to contemplate using AI-enabled systems. Moreover, we identified relationships among these categories, which we conflate in our proposed model. Conclusions This study provides a thorough understanding of the perspective of future users of AI-enabled systems in primary care and lays the foundation for successful market penetration. 
We contribute to the research stream of analyzing and designing AI-enabled systems and the literature on attitudes toward technology and practice by fostering the understanding of GPs and their attitudes toward such systems. Our findings provide relevant information to technology developers, policymakers, and stakeholder institutions of GP care.
Affiliation(s)
- Christoph Buck
- Department of Business & Information Systems Engineering, University of Bayreuth, Bayreuth, Germany
- Centre for Future Enterprise, Queensland University of Technology, Brisbane, Australia
- Eileen Doctor
- Project Group Business & Information Systems Engineering, Fraunhofer Institute for Applied Information Technology, Bayreuth, Germany
- Jasmin Hennrich
- Project Group Business & Information Systems Engineering, Fraunhofer Institute for Applied Information Technology, Bayreuth, Germany
- Jan Jöhnk
- Finance & Information Management Research Center, Bayreuth, Germany
- Torsten Eymann
- Department of Business & Information Systems Engineering, University of Bayreuth, Bayreuth, Germany
- Finance & Information Management Research Center, Bayreuth, Germany
18
Stiff KM, Franklin MJ, Zhou Y, Madabhushi A, Knackstedt TJ. Artificial Intelligence and Melanoma: A Comprehensive Review of Clinical, Dermoscopic, and Histologic Applications. Pigment Cell Melanoma Res 2022; 35:203-211. [PMID: 35038383 DOI: 10.1111/pcmr.13027] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2021] [Revised: 11/24/2021] [Accepted: 01/09/2022] [Indexed: 11/30/2022]
Abstract
Melanoma detection, prognosis, and treatment represent challenging and complex areas of cutaneous oncology with considerable impact on patient outcomes and healthcare economics. Artificial intelligence (AI) applications in these tasks are rapidly developing. Neural networks with increasing levels of sophistication are being implemented in clinical image, dermoscopic image, and histopathologic specimen classification of pigmented lesions. These efforts hold promise of earlier and highly accurate melanoma detection, as well as reliable prognostication and prediction of therapeutic response. Herein, we provide a brief introduction to AI, discuss contemporary investigational applications of AI in melanoma, and summarize challenges encountered with AI.
Affiliation(s)
- Yufei Zhou
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland
- Anant Madabhushi
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland
- Thomas J Knackstedt
- Department of Dermatology, MetroHealth System, Cleveland
- School of Medicine, Case Western Reserve University, Cleveland
19
Aggarwal P, Papay FA. Artificial intelligence image recognition of melanoma and basal cell carcinoma in racially diverse populations. J DERMATOL TREAT 2021; 33:2257-2262. [PMID: 34154489 DOI: 10.1080/09546634.2021.1944970] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
Abstract
BACKGROUND Artificial intelligence (AI) image recognition models have been relatively successful in diagnosing cutaneous manifestations in individuals with light skin tones. However, when these models are tested on the same cutaneous manifestations in individuals with darker or brown skin tones, performance drops because of a paucity of such images available for model training. OBJECTIVE The objective of this study was to improve the performance of AI models in recognizing cutaneous diseases in individuals with darker skin tones. METHODS We performed unsupervised computational darkening of skin color, preserving the dermatological disease/lesion characteristics, in images of light-skinned individuals with basal cell carcinoma (BCC) and melanoma. RESULTS Training an AI model on these artificially "darkened" images, as compared to training on the original "light-skinned" images, resulted in higher sensitivity, specificity, positive predictive value, negative predictive value, F1 score, and area under the receiver-operating characteristic curve in differentiating between BCC and melanoma in individuals with brown skin tones. CONCLUSION Using unsupervised image-to-image translation in medical AI image recognition models has the potential to significantly improve their accuracy in diagnosing diseases in individuals with racially diverse skin tones.
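The abstract does not specify which unsupervised image-to-image translation method was used. As a deliberately naive, hypothetical stand-in (not the paper's method), the core idea of darkening skin tone while preserving the relative ordering of lesion versus surrounding-skin intensities can be illustrated with a simple gamma-style transform on RGB values:

```python
def darken_skin(pixels, gamma=1.8):
    """Naive gamma-style darkening of RGB pixels in [0, 255].
    gamma > 1 darkens every channel; absolute contrast is compressed,
    but the intensity ordering between lesion and surrounding skin is
    preserved, since the transform is monotonic per channel.
    Illustrative toy only; the study used unsupervised image-to-image
    translation, which learns the mapping rather than fixing it."""
    out = []
    for r, g, b in pixels:
        out.append(tuple(round(255 * (c / 255) ** gamma) for c in (r, g, b)))
    return out
```

A learned translation model adapts this mapping per image and per region, whereas the fixed transform above only conveys the monotonicity property that keeps lesion characteristics recoverable after darkening.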
Affiliation(s)
- Pushkar Aggarwal
- University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Francis A Papay
- Dermatology and Plastic Surgery Institute, Cleveland Clinic, Cleveland, OH, USA
20
Asan O, Choudhury A. Research Trends in Artificial Intelligence Applications in Human Factors Health Care: Mapping Review. JMIR Hum Factors 2021; 8:e28236. [PMID: 34142968 PMCID: PMC8277302 DOI: 10.2196/28236] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2021] [Revised: 04/14/2021] [Accepted: 05/03/2021] [Indexed: 01/17/2023] Open
Abstract
BACKGROUND Despite advancements in artificial intelligence (AI) to develop prediction and classification models, little research has been devoted to real-world translations with a user-centered design approach. AI development studies in the health care context have often ignored two critical factors, ecological validity and human cognition, creating challenges at the interface with clinicians and the clinical environment. OBJECTIVE The aim of this literature review was to investigate the contributions made by major human factors communities in health care AI applications. This review also discusses emerging research gaps and provides future research directions to facilitate a safer and user-centered integration of AI into the clinical workflow. METHODS We performed an extensive mapping review to capture all relevant articles published within the last 10 years in the major human factors journals and conference proceedings listed in the "Human Factors and Ergonomics" category of the Scopus Master List. In each published volume, we searched for studies reporting qualitative or quantitative findings in the context of AI in health care. Studies are discussed based on key principles such as evaluating workload, usability, trust in technology, perception, and user-centered design. RESULTS Forty-eight articles were included in the final review. Most of the studies emphasized user perception, the usability of AI-based devices or technologies, cognitive workload, and users' trust in AI. The review revealed a nascent but growing body of literature focusing on augmenting health care AI; however, little effort has been made to ensure ecological validity with user-centered design approaches. Moreover, few studies (n=5 against clinical/baseline standards, n=5 against clinicians) compared their AI models against a standard measure.
CONCLUSIONS Human factors researchers should actively be part of efforts in AI design and implementation, as well as dynamic assessments of AI systems' effects on interaction, workflow, and patient outcomes. An AI system is part of a greater sociotechnical system. Investigators with human factors and ergonomics expertise are essential when defining the dynamic interaction of AI within each element, process, and result of the work system.
Affiliation(s)
- Onur Asan
- School of Systems and Enterprises, Stevens Institute of Technology, Hoboken, NJ, United States
- Avishek Choudhury
- School of Systems and Enterprises, Stevens Institute of Technology, Hoboken, NJ, United States
21
Knop M, Weber S, Mueller M, Niehaves B. Human Factors and Technological Characteristics Influencing the Interaction with AI-enabled Clinical Decision Support Systems: A Literature Review (Preprint). JMIR Hum Factors 2021; 9:e28639. [PMID: 35323118 PMCID: PMC8990344 DOI: 10.2196/28639] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2021] [Revised: 06/02/2021] [Accepted: 02/07/2022] [Indexed: 01/22/2023] Open
Abstract
Background The digitization and automation of diagnostics and treatments promise to alter the quality of health care and improve patient outcomes, even as the undersupply of medical personnel, the high workload on medical professionals, and medical case complexity increase. Clinical decision support systems (CDSSs) have been proven to help medical professionals in their everyday work through their ability to process vast amounts of patient information. However, comprehensive adoption is partially disrupted by specific technological and personal characteristics. With the rise of artificial intelligence (AI), CDSSs have become an adaptive technology with human-like capabilities, able to learn and change their characteristics over time. However, research has not reflected on the characteristics and factors essential for effective collaboration between human actors and AI-enabled CDSSs. Objective Our study aims to summarize the factors influencing effective collaboration between medical professionals and AI-enabled CDSSs. These factors are essential for medical professionals, management, and technology designers to reflect on the adoption, implementation, and development of an AI-enabled CDSS. Methods We conducted a literature review including 3 different meta-databases, screening over 1000 articles and including 101 articles for full-text assessment. Of the 101 articles, 7 (6.9%) met our inclusion criteria and were analyzed for our synthesis. Results We identified the technological characteristics and human factors that appear to have an essential effect on the collaboration of medical professionals and AI-enabled CDSSs in accordance with our research objective, namely, training data quality, performance, explainability, adaptability, medical expertise, technological expertise, personality, cognitive biases, and trust.
Comparing our results with those from research on non-AI CDSSs, some characteristics and factors retain their importance, whereas others gain or lose relevance owing to the uniqueness of human-AI interactions. However, only a few (1/7, 14%) studies have mentioned the theoretical foundations and patient outcomes related to AI-enabled CDSSs. Conclusions Our study provides a comprehensive overview of the relevant characteristics and factors that influence the interaction and collaboration between medical professionals and AI-enabled CDSSs. Rather limited theoretical foundations currently hinder the possibility of creating adequate concepts and models to explain and predict the interrelations between these characteristics and factors. For an appropriate evaluation of the human-AI collaboration, patient outcomes and the role of patients in the decision-making process should be considered.
Affiliation(s)
- Michael Knop
- Department of Information Systems, University of Siegen, Siegen, Germany
- Sebastian Weber
- Department of Information Systems, University of Siegen, Siegen, Germany
- Marius Mueller
- Department of Information Systems, University of Siegen, Siegen, Germany
- Bjoern Niehaves
- Department of Information Systems, University of Siegen, Siegen, Germany