1.
Tian Tran J, Burghall A, Blydt-Hansen T, Cammer A, Goldberg A, Hamiwka L, Johnson C, Kehler C, Phan V, Rosaasen N, Ruhl M, Strong J, Teoh CW, Wichart J, Mansell H. Exploring the ability of ChatGPT to create quality patient education resources about kidney transplant. Patient Educ Couns 2024; 129:108400. PMID: 39232336. DOI: 10.1016/j.pec.2024.108400.
Abstract
BACKGROUND Chat Generative Pre-trained Transformer (ChatGPT) is a language model that may have the potential to revolutionize health care. The study purpose was to test whether ChatGPT could be used to create educational brochures about kidney transplant tailored for three target audiences: caregivers, teens, and children. METHODS Using a list of 25 educational topics, standardized prompts were employed to ensure consistency in the ChatGPT-generated content. An expert panel assessed the accuracy of the content by rating agreement on a Likert scale (1 = <25% agreement; 5 = 100% agreement). The understandability, actionability, and readability of the brochures were assessed using the Patient Education Materials Assessment Tool for printable materials (PEMAT-P) and standard readability scales. A caregiver and a patient reviewed the brochures and provided written feedback. RESULTS We found mean understandability scores of 69%, 66%, and 73% for the caregiver, teen, and child brochures, respectively, with 90.7% of the ChatGPT-generated brochures scoring 40% on the actionability scale. Generated caregiver and teen materials achieved readability levels of grades 9-14, while child-specific brochures achieved readability levels of grades 6-11. Brochures were formatted appropriately but lacked depth. CONCLUSION ChatGPT demonstrates potential for rapidly generating patient education materials; however, challenges remain in ensuring content specificity. We share the lessons learned to assist other healthcare providers in using this technology.
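The readability grades reported above come from standard readability scales, which the abstract does not name; the Flesch-Kincaid grade level is one common choice. A minimal sketch of that formula, assuming Flesch-Kincaid and using a crude vowel-group syllable counter (a real assessment would use a validated tool):

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: one syllable per run of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    """FK grade = 0.39*(words/sentence) + 11.8*(syllables/word) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)

sample = "The kidney filters blood. It removes waste and extra water."
print(round(flesch_kincaid_grade(sample), 2))
```

Grade 9-14 versus 6-11 outputs, as reported above, would correspond to noticeably longer sentences and more polysyllabic words in the caregiver and teen brochures than in the child ones.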
Affiliation(s)
- Jacqueline Tian Tran
- College of Pharmacy and Nutrition, University of Saskatchewan, Saskatoon, Canada
- Ashley Burghall
- College of Pharmacy and Nutrition, University of Saskatchewan, Saskatoon, Canada
- Tom Blydt-Hansen
- Division of Nephrology, Department of Pediatrics, University of British Columbia, Vancouver, Canada
- Allison Cammer
- College of Pharmacy and Nutrition, University of Saskatchewan, Saskatoon, Canada
- Aviva Goldberg
- Section of Nephrology, Department of Pediatrics and Child Health, Children's Hospital, HSC, Winnipeg, Canada; Max Rady College of Medicine, University of Manitoba, Winnipeg, Canada
- Lorraine Hamiwka
- Section of Nephrology, Department of Pediatrics, Cumming School of Medicine, University of Calgary, Calgary, Canada
- Véronique Phan
- Division of Nephrology, Department of Paediatrics, CHU Ste Justine, Université de Montréal, Montréal, Canada
- Nicola Rosaasen
- College of Pharmacy and Nutrition, University of Saskatchewan, Saskatoon, Canada
- Michelle Ruhl
- Division of Nephrology, Department of Pediatrics, Stollery Children's Hospital, University of Alberta, Edmonton, Canada
- Julie Strong
- Section of Nephrology, Department of Pediatrics and Child Health, Children's Hospital, HSC, Winnipeg, Canada
- Chia Wei Teoh
- Division of Nephrology, Department of Paediatrics, The Hospital for Sick Children, University of Toronto, Toronto, Canada
- Jenny Wichart
- Department of Pharmacy, Alberta Health Services, Calgary, Canada
- Holly Mansell
- College of Pharmacy and Nutrition, University of Saskatchewan, Saskatoon, Canada.
2.
Saeedi S, Aghajanzadeh M. Investigating the role of artificial intelligence in predicting perceived dysphonia level. Eur Arch Otorhinolaryngol 2024. PMID: 39174679. DOI: 10.1007/s00405-024-08868-7.
Abstract
PURPOSE This study aims to investigate the role of AI chatbots in the field of voice pathology and to compare their performance in distinguishing the perceived dysphonia level. METHODS Demographic information, voice self-assessments, and acoustic measurements from a sample of 50 adult dysphonic outpatients were presented to the ChatGPT and Perplexity AI chatbots, which were interrogated for the perceived dysphonia level. RESULTS The agreement between the experts' auditory-perceptual assessment and the ChatGPT and Perplexity AI chatbots, as determined by Cohen's kappa, was not statistically significant (p = 0.429). There was also only a low positive correlation (rs = 0.30, p = 0.03) between the diagnoses made by the ChatGPT and Perplexity AI chatbots. CONCLUSION It seems that AI could not play a vital role in helping voice care teams determine the perceptual level of dysphonia.
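For context, Cohen's kappa (the agreement statistic used above) corrects raw percent agreement for the agreement expected by chance. A minimal pure-Python sketch on invented severity ratings, not the study's data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Kappa = (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    chance = sum(freq_a[label] * freq_b[label]
                 for label in set(freq_a) | set(freq_b)) / (n * n)
    return (observed - chance) / (1 - chance)

# Hypothetical dysphonia grades (0 = normal ... 3 = severe) for 8 voices.
experts = [1, 2, 2, 0, 3, 1, 2, 0]
chatbot = [1, 2, 1, 0, 2, 1, 2, 1]
print(round(cohens_kappa(experts, chatbot), 2))
```

Raw agreement here is 5/8 = 62.5%, but kappa discounts the part of that agreement that two raters drawing labels at the same marginal frequencies would reach by chance.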
Affiliation(s)
- Saeed Saeedi
- Independent Researcher in Laryngology, Voice Pathology, and Speech-Language Pathology, Tehran, Iran
- Mahshid Aghajanzadeh
- Department of Speech Therapy, School of Rehabilitation, Tehran University of Medical Sciences, Enghelab Avenue, Pitch-e-Shemiran, Tehran, 11489, Iran.
3.
Khamassi M, Nahon M, Chatila R. Strong and weak alignment of large language models with human values. Sci Rep 2024; 14:19399. PMID: 39169090. PMCID: PMC11339283. DOI: 10.1038/s41598-024-70031-3.
Abstract
Minimizing the negative impacts of Artificial Intelligence (AI) systems on human societies without human supervision requires them to be able to align with human values. However, most current work addresses this issue only from a technical point of view, e.g., by improving current methods relying on reinforcement learning from human feedback, neglecting what alignment means and requires in order to occur. Here, we propose to distinguish strong from weak value alignment. Strong alignment requires cognitive abilities (either human-like or different from humans) such as understanding and reasoning about agents' intentions and about their ability to causally produce desired effects. We argue that this is required for AI systems like large language models (LLMs) to be able to recognize situations presenting a risk that human values may be flouted. To illustrate this distinction, we present a series of prompts showing ChatGPT's, Gemini's, and Copilot's failures to recognize some of these situations. We moreover analyze word embeddings to show that the nearest neighbors of some human values in LLMs differ from humans' semantic representations. We then propose a new thought experiment that we call "the Chinese room with a word transition dictionary", extending John Searle's famous proposal. We finally mention current promising research directions towards weak alignment, which could produce statistically satisfying answers in a number of common situations, though so far without ensuring any truth value.
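The word-embedding analysis described above is, in essence, a nearest-neighbor search under cosine similarity. A toy sketch with hand-made 3-dimensional vectors (purely illustrative; real LLM embeddings have hundreds or thousands of dimensions with learned values):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Invented vectors: the first axis loosely encodes "moral value" content.
embeddings = {
    "honesty":  [0.9, 0.1, 0.2],
    "truth":    [0.8, 0.2, 0.1],
    "profit":   [0.1, 0.9, 0.3],
    "fairness": [0.7, 0.3, 0.4],
}

def nearest_neighbor(word):
    others = (w for w in embeddings if w != word)
    return max(others, key=lambda w: cosine(embeddings[word], embeddings[w]))

print(nearest_neighbor("honesty"))
```

Comparing which words end up as nearest neighbors of a value term in a model versus in human similarity judgments is the kind of probe the authors describe.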
Affiliation(s)
- Mehdi Khamassi
- Institute of Intelligent Systems and Robotics, Sorbonne University/CNRS, 75005, Paris, France.
- Marceau Nahon
- Institute of Intelligent Systems and Robotics, Sorbonne University/CNRS, 75005, Paris, France.
- Raja Chatila
- Institute of Intelligent Systems and Robotics, Sorbonne University/CNRS, 75005, Paris, France.
4.
Shin H, De Gagne JC, Kim SS, Hong M. The Impact of Artificial Intelligence-Assisted Learning on Nursing Students' Ethical Decision-making and Clinical Reasoning in Pediatric Care: A Quasi-Experimental Study. Comput Inform Nurs 2024; 42. PMID: 39152099. PMCID: PMC11458082. DOI: 10.1097/cin.0000000000001177.
Abstract
The integration of artificial intelligence such as ChatGPT into educational frameworks marks a pivotal transformation in teaching. This quasi-experimental study, conducted in September 2023, aimed to evaluate the effects of artificial intelligence-assisted learning on nursing students' ethical decision-making and clinical reasoning. A total of 99 nursing students enrolled in a pediatric nursing course were randomly divided into two groups: an experimental group that utilized ChatGPT and a control group that used traditional textbooks. The Mann-Whitney U test was employed to assess differences between the groups in two primary outcomes: (a) ethical standards, focusing on the understanding and application of ethical principles, and (b) nursing processes, emphasizing critical thinking skills and the integration of evidence-based knowledge. The control group outperformed the experimental group in ethical standards and demonstrated better clinical reasoning in nursing processes. Reflective essays revealed that the experimental group reported lower reliability but higher time efficiency. Despite artificial intelligence's ability to offer diverse perspectives, the findings highlight that educators must supplement artificial intelligence technology with strategies that enhance critical thinking, careful data selection, and source verification. This study suggests a hybrid educational approach combining artificial intelligence with traditional learning methods to bolster nursing students' decision-making processes and clinical reasoning skills.
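The Mann-Whitney U test mentioned above is a rank-based comparison of two independent samples. A minimal sketch of the U statistic, computed on invented course scores rather than the study's data (a full test would also convert U to a p-value):

```python
def mann_whitney_u(xs, ys):
    """U statistic for sample xs versus ys, using average ranks for ties."""
    pooled = sorted(xs + ys)
    rank_of, i = {}, 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        rank_of[pooled[i]] = (i + 1 + j) / 2  # average of ranks i+1 .. j
        i = j
    rank_sum = sum(rank_of[x] for x in xs)
    return rank_sum - len(xs) * (len(xs) + 1) / 2

scores_textbook = [85, 78, 92, 88, 81]  # hypothetical control-group scores
scores_chatgpt = [70, 75, 80, 68, 74]   # hypothetical experimental-group scores
print(mann_whitney_u(scores_textbook, scores_chatgpt))
```

A U near the maximum (here 5 × 5 = 25) means almost every control score outranks every experimental score, the direction of effect the abstract reports.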
5.
Edwards DJ. A functional contextual, observer-centric, quantum mechanical, and neuro-symbolic approach to solving the alignment problem of artificial general intelligence: safe AI through intersecting computational psychological neuroscience and LLM architecture for emergent theory of mind. Front Comput Neurosci 2024; 18:1395901. PMID: 39175519. PMCID: PMC11338881. DOI: 10.3389/fncom.2024.1395901.
Abstract
There have been impressive advancements in the field of natural language processing (NLP) in recent years, largely driven by innovations in the development of transformer-based large language models (LLMs) that utilize "attention." This approach employs masked self-attention to relate (via similarity) different positions of tokens (words) within an input sequence, computing the most appropriate response based on the model's training corpus. However, there is speculation as to whether this approach alone can be scaled up to develop emergent artificial general intelligence (AGI), and whether it can address the alignment of AGI values with human values (called the alignment problem). Some researchers exploring the alignment problem highlight three aspects that AGI (or AI) requires to help resolve this problem: (1) an interpretable values specification; (2) a utility function; and (3) a dynamic contextual account of behavior. Here, a neurosymbolic model is proposed to help resolve these issues of human value alignment in AI, which expands on the transformer-based model for NLP to incorporate symbolic reasoning that may allow AGI to incorporate perspective-taking reasoning (i.e., resolving the need for a dynamic contextual account of behavior through deictics) as defined by a multilevel evolutionary and neurobiological framework into a functional contextual post-Skinnerian model of human language called "Neurobiological and Natural Selection Relational Frame Theory" (N-Frame). It is argued that this approach may also help establish a comprehensible value scheme, a utility function by expanding the expected utility equation of behavioral economics to consider functional contextualism, and even an observer (or witness) centric model for consciousness.
Evolution theory, subjective quantum mechanics, and neuroscience are further invoked to help explain consciousness, and possible implementation within an LLM through correspondence to an interface as suggested by N-Frame. This argument is supported by the computational level of hypergraphs, relational density clusters, a conscious quantum level defined by QBism, and a real-world applied level (human user feedback). It is argued that this approach could enable AI to achieve consciousness and develop deictic perspective-taking abilities, thereby attaining human-level self-awareness, empathy, and compassion toward others. Importantly, this consciousness hypothesis can be directly tested at approximately 5-sigma significance (a 1 in 3.5 million probability that any identified AI-conscious observations, in the form of a collapsed wave function, are due to chance factors) through double-slit intent-type experimentation and visualization procedures for derived perspective-taking relational frames. Ultimately, this could provide a solution to the alignment problem and contribute to the emergence of a theory of mind (ToM) within AI.
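The masked self-attention summarized at the start of this abstract can be sketched as scaled dot-product attention for a single query, with masked positions set to negative infinity before the softmax. The vectors below are toy values, not a real model's weights:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values, mask=None):
    """One query attending over a token sequence: softmax(q.k / sqrt(d)) . v."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    if mask is not None:  # masked (e.g. future) positions get -inf, i.e. zero weight
        scores = [s if keep else float("-inf") for s, keep in zip(scores, mask)]
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

keys = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
out = attention([1.0, 0.0], keys, keys, mask=[True, True, False])  # 3rd token masked
```

The output is a weighted mix of only the unmasked value vectors, with weights summing to one; causal masking of this kind is what lets a model predict each token from its predecessors alone.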
Affiliation(s)
- Darren J. Edwards
- Department of Public Health, Swansea University, Swansea, United Kingdom
6.
Keyßer G, Pfeil A, Reuß-Borst M, Frohne I, Schultz O, Sander O. [What is the potential of ChatGPT for qualified patient information? Attempt of a structured analysis on the basis of a survey regarding complementary and alternative medicine (CAM) in rheumatology]. Z Rheumatol 2024. PMID: 38985176. DOI: 10.1007/s00393-024-01535-6.
Abstract
INTRODUCTION The chatbot ChatGPT represents a milestone in the interaction between humans and large databases that are accessible via the internet. It facilitates the answering of complex questions by enabling communication in everyday language and is therefore a potential source of information for people affected by rheumatic diseases. The aim of our investigation was to find out whether ChatGPT (version 3.5) is capable of giving qualified answers regarding the application of specific methods of complementary and alternative medicine (CAM) in three rheumatic diseases: rheumatoid arthritis (RA), systemic lupus erythematosus (SLE), and granulomatosis with polyangiitis (GPA). In addition, we investigated how the answers of the chatbot were influenced by the wording of the question. METHODS The questioning of ChatGPT was performed in three parts. Part A consisted of an open question regarding the best way to treat the respective disease. In part B, the questions were directed towards possible indications for the application of CAM in general in one of the three disorders. In part C, the chatbot was asked for specific recommendations regarding one of three CAM methods: homeopathy, ayurvedic medicine, and herbal medicine. Questions in parts B and C were expressed in two modifications: the first asked whether the specific CAM was applicable at all in certain rheumatic diseases; the second asked which procedure of the respective CAM method worked best in the specific disease. The validity of the answers was checked using the ChatGPT reliability score, a Likert scale ranging from 1 (lowest validity) to 7 (highest validity). RESULTS The answers to the open questions of part A had the highest validity. In parts B and C, ChatGPT suggested a variety of CAM applications that lacked scientific evidence. The validity of the answers depended on the wording of the questions. If the question suggested an inclination to apply a certain CAM, the answers often failed to mention the missing evidence and were graded with lower score values. CONCLUSION The answers of ChatGPT (version 3.5) regarding the applicability of CAM in selected rheumatic diseases are not convincingly based on scientific evidence. In addition, the wording of the questions affects the validity of the information. Currently, an uncritical application of ChatGPT as an instrument for patient information cannot be recommended.
Affiliation(s)
- Gernot Keyßer
- Klinik und Poliklinik für Innere Medizin II, Universitätsklinikum Halle, Ernst-Grube-Str. 40, 06120, Halle (Saale), Germany.
- Alexander Pfeil
- Klinik für Innere Medizin III, Universitätsklinikum Jena, Friedrich-Schiller-Universität Jena, Jena, Germany
- Inna Frohne
- Privatpraxis für Rheumatologie, Essen, Germany
- Olaf Schultz
- Abteilung Rheumatologie, ACURA Kliniken Baden-Baden, Baden-Baden, Germany
- Oliver Sander
- Klinik für Rheumatologie, Universitätsklinikum Düsseldorf, Düsseldorf, Germany
7.
Ertürk A. Deep 3D histology powered by tissue clearing, omics and AI. Nat Methods 2024; 21:1153-1165. PMID: 38997593. DOI: 10.1038/s41592-024-02327-1.
Abstract
To comprehensively understand tissue and organism physiology and pathophysiology, it is essential to create complete three-dimensional (3D) cellular maps. These maps require structural data, such as the 3D configuration and positioning of tissues and cells, and molecular data on the constitution of each cell, spanning from the DNA sequence to protein expression. While single-cell transcriptomics is illuminating the cellular and molecular diversity across species and tissues, the 3D spatial context of these molecular data is often overlooked. Here, I discuss emerging 3D tissue histology techniques that add the missing third spatial dimension to biomedical research. Through innovations in tissue-clearing chemistry, labeling and volumetric imaging that enhance 3D reconstructions and their synergy with molecular techniques, these technologies will provide detailed blueprints of entire organs or organisms at the cellular level. Machine learning, especially deep learning, will be essential for extracting meaningful insights from the vast data. Further development of integrated structural, molecular and computational methods will unlock the full potential of next-generation 3D histology.
Affiliation(s)
- Ali Ertürk
- Institute for Tissue Engineering and Regenerative Medicine, Helmholtz Zentrum München, Neuherberg, Germany.
- Institute for Stroke and Dementia Research, Klinikum der Universität München, Ludwig-Maximilians University, Munich, Germany.
- School of Medicine, Koç University, İstanbul, Turkey.
- Deep Piction GmbH, Munich, Germany.
8.
Sosa-Holwerda A, Park OH, Albracht-Schulte K, Niraula S, Thompson L, Oldewage-Theron W. The Role of Artificial Intelligence in Nutrition Research: A Scoping Review. Nutrients 2024; 16:2066. PMID: 38999814. PMCID: PMC11243505. DOI: 10.3390/nu16132066.
Abstract
Artificial intelligence (AI) refers to computer systems performing tasks that usually require human intelligence. AI is constantly evolving and is revolutionizing the healthcare field, including nutrition. This review's purpose is four-fold: (i) to investigate AI's role in nutrition research; (ii) to identify areas of nutrition using AI; (iii) to understand AI's future potential impact; and (iv) to investigate possible concerns about AI's use in nutrition research. Eight databases were searched: PubMed, Web of Science, EBSCO, Agricola, Scopus, IEEE Xplore, Google Scholar, and Cochrane. A total of 1737 articles were retrieved, of which 22 were included in the review. Article screening phases included duplicate elimination, title-abstract selection, full-text review, and quality assessment. The key findings indicated that AI's role in nutrition is at a developmental stage, focusing mainly on dietary assessment and less on malnutrition prediction, lifestyle interventions, and the comprehension of diet-related diseases. Clinical research is needed to determine AI's intervention efficacy. The ethics of AI use, a main concern, remains unresolved and needs to be considered to prevent collateral damage to certain populations. The heterogeneity of the studies in this review limited the focus on specific nutritional areas. Future research should prioritize specialized reviews in nutrition and dieting for a deeper understanding of AI's potential in human nutrition.
Affiliation(s)
- Andrea Sosa-Holwerda
- Department of Nutritional Sciences, Texas Tech University, Lubbock, TX 79409, USA
- Oak-Hee Park
- College of Health & Human Sciences, Texas Tech University, Lubbock, TX 79409, USA
- Surya Niraula
- Department of Nutritional Sciences, Texas Tech University, Lubbock, TX 79409, USA
- Leslie Thompson
- Department of Animal and Food Sciences, Texas Tech University, Lubbock, TX 79409, USA
9.
Lewandowski M, Łukowicz P, Świetlik D, Barańska-Rybak W. ChatGPT-3.5 and ChatGPT-4 dermatological knowledge level based on the Specialty Certificate Examination in Dermatology. Clin Exp Dermatol 2024; 49:686-691. PMID: 37540015. DOI: 10.1093/ced/llad255.
Abstract
BACKGROUND The global use of artificial intelligence (AI) has the potential to revolutionize the healthcare industry. Although AI is becoming more popular, there is still a lack of evidence on its use in dermatology. OBJECTIVES To determine the capacity of ChatGPT-3.5 and ChatGPT-4 to support dermatology knowledge and clinical decision-making in medical practice. METHODS Three Specialty Certificate Examination in Dermatology tests, in English and Polish, each consisting of 120 single-best-answer, multiple-choice questions, were used to assess the performance of ChatGPT-3.5 and ChatGPT-4. RESULTS ChatGPT-4 exceeded the 60% pass rate in every performed test, with a minimum of 80% and 70% correct answers for the English and Polish versions, respectively. ChatGPT-4 performed significantly better on each exam (P < 0.01), regardless of language, compared with ChatGPT-3.5. Furthermore, ChatGPT-4 answered clinical picture-type questions with an average accuracy of 93.0% and 84.2% for questions in English and Polish, respectively. The difference between the tests in Polish and English was not significant; however, ChatGPT-3.5 and ChatGPT-4 performed better overall in English than in Polish by an average of 8 percentage points for each test. Incorrect ChatGPT answers were highly correlated with a lower difficulty index, denoting questions of higher difficulty in most of the tests (P < 0.05). CONCLUSIONS The dermatology knowledge level of ChatGPT was high, and ChatGPT-4 performed significantly better than ChatGPT-3.5. Although the use of ChatGPT will not replace a doctor's final decision, physicians should support the development of AI in dermatology to raise the standards of medical care.
Affiliation(s)
- Miłosz Lewandowski
- Department of Dermatology, Venereology and Allergology, Faculty of Medicine
- Paweł Łukowicz
- Division of Biostatistics and Neural Networks, Medical University of Gdansk, Gdansk, Poland
- Dariusz Świetlik
- Division of Biostatistics and Neural Networks, Medical University of Gdansk, Gdansk, Poland
10.
Rao SJ, Isath A, Krishnan P, Tangsrivimol JA, Virk HUH, Wang Z, Glicksberg BS, Krittanawong C. ChatGPT: A Conceptual Review of Applications and Utility in the Field of Medicine. J Med Syst 2024; 48:59. PMID: 38836893. DOI: 10.1007/s10916-024-02075-x.
Abstract
Artificial intelligence, specifically advanced language models such as ChatGPT, has the potential to revolutionize various aspects of healthcare, medical education, and research. In this narrative review, we evaluate the myriad applications of ChatGPT in diverse healthcare domains. We discuss its potential role in clinical decision-making, exploring how it can assist physicians by providing rapid, data-driven insights for diagnosis and treatment. We review the benefits of ChatGPT in personalized patient care, particularly in geriatric care, medication management, weight loss and nutrition, and physical activity guidance. We further delve into its potential to enhance medical research through the analysis of large datasets and the development of novel methodologies. In the realm of medical education, we investigate the utility of ChatGPT as an information retrieval tool and personalized learning resource for medical students and professionals. There are numerous promising applications of ChatGPT that will likely induce paradigm shifts in healthcare practice, education, and research. The use of ChatGPT may come with several benefits in areas such as clinical decision-making, geriatric care, medication management, weight loss and nutrition, physical fitness, scientific research, and medical education. Nevertheless, it is important to note that issues surrounding ethics, data privacy, transparency, inaccuracy, and inadequacy persist. Prior to widespread use in medicine, it is imperative to objectively evaluate the impact of ChatGPT in a real-world setting using a risk-based approach.
Affiliation(s)
- Shiavax J Rao
- Department of Medicine, MedStar Union Memorial Hospital, Baltimore, MD, USA
- Ameesh Isath
- Department of Cardiology, Westchester Medical Center and New York Medical College, Valhalla, NY, USA
- Parvathy Krishnan
- Department of Pediatrics, Westchester Medical Center and New York Medical College, Valhalla, NY, USA
- Jonathan A Tangsrivimol
- Division of Neurosurgery, Department of Surgery, Chulabhorn Hospital, Chulabhorn Royal Academy, Bangkok, 10210, Thailand
- Department of Neurological Surgery, Weill Cornell Medicine Brain and Spine Center, New York, NY, 10022, USA
- Hafeez Ul Hassan Virk
- Harrington Heart & Vascular Institute, Case Western Reserve University, University Hospitals Cleveland Medical Center, Cleveland, OH, USA
- Zhen Wang
- Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery, Mayo Clinic, Rochester, MN, USA
- Division of Health Care Policy and Research, Department of Health Sciences Research, Mayo Clinic, Rochester, MN, USA
- Benjamin S Glicksberg
- Hasso Plattner Institute for Digital Health, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Chayakrit Krittanawong
- Cardiology Division, NYU Langone Health and NYU School of Medicine, 550 First Avenue, New York, NY, 10016, USA.
11.
Day TG, Budd S, Tan J, Matthew J, Skelton E, Jowett V, Lloyd D, Gomez A, Hajnal JV, Razavi R, Kainz B, Simpson JM. Prenatal diagnosis of hypoplastic left heart syndrome on ultrasound using artificial intelligence: How does performance compare to a current screening programme? Prenat Diagn 2024; 44:717-724. PMID: 37776084. DOI: 10.1002/pd.6445.
Abstract
BACKGROUND Artificial intelligence (AI) has the potential to improve prenatal detection of congenital heart disease. We analysed the performance of the current national screening programme in detecting hypoplastic left heart syndrome (HLHS) to compare with our own AI model. METHODS Current screening programme performance was calculated from local and national sources. AI models were trained using four-chamber ultrasound views of the fetal heart, using a ResNet classifier. RESULTS Estimated current fetal screening programme sensitivity and specificity for HLHS were 94.3% and 99.985%, respectively. Depending on calibration, AI models to detect HLHS were either highly sensitive (sensitivity 100%, specificity 94.0%) or highly specific (sensitivity 93.3%, specificity 100%). Our analysis suggests that our highly sensitive model would generate 45,134 screen positive results for a gain of 14 additional HLHS cases. Our highly specific model would be associated with two fewer detected HLHS cases, and 118 fewer false positives. CONCLUSION If used independently, our AI model performance is slightly worse than the performance level of the current screening programme in detecting HLHS, and this performance is likely to deteriorate further when used prospectively. This demonstrates that collaboration between humans and AI will be key for effective future clinical use.
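The core trade-off quantified above, where a small loss of specificity at very low disease prevalence produces a flood of false positives, follows from basic screening arithmetic. A sketch with assumed numbers (the population size and prevalence below are illustrative, not taken from the study):

```python
def screening_outcomes(n_scanned, prevalence, sensitivity, specificity):
    """Expected screen positives and positive predictive value for one screening cycle."""
    cases = n_scanned * prevalence
    true_pos = cases * sensitivity
    false_pos = (n_scanned - cases) * (1 - specificity)
    return {"screen_positive": true_pos + false_pos,
            "ppv": true_pos / (true_pos + false_pos)}

N, PREV = 600_000, 2.5e-4  # assumed annual scans and HLHS prevalence (illustrative)
current = screening_outcomes(N, PREV, sensitivity=0.943, specificity=0.99985)
sensitive_ai = screening_outcomes(N, PREV, sensitivity=1.0, specificity=0.940)
print(round(current["screen_positive"]), round(sensitive_ai["screen_positive"]))
```

Dropping specificity from 99.985% to 94% multiplies the expected screen positives by more than two orders of magnitude while adding only a handful of true cases, which is why the abstract's highly sensitive model generates tens of thousands of screen-positive results.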
Affiliation(s)
- Thomas G Day
- Department of Congenital Heart Disease, Evelina Children's Healthcare, Guy's and St Thomas' NHS Foundation Trust, London, UK
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Samuel Budd
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Jeremy Tan
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Jacqueline Matthew
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Emily Skelton
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- School of Health Sciences, University of London, London, UK
- Victoria Jowett
- Great Ormond Street Hospital for Children NHS Foundation Trust, London, UK
- David Lloyd
- Department of Congenital Heart Disease, Evelina Children's Healthcare, Guy's and St Thomas' NHS Foundation Trust, London, UK
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Alberto Gomez
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Jo V Hajnal
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Reza Razavi
- Department of Congenital Heart Disease, Evelina Children's Healthcare, Guy's and St Thomas' NHS Foundation Trust, London, UK
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Bernhard Kainz
- Department of Computing, Imperial College London, London, UK
- Department of Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander University Erlangen-Nürnberg, Erlangen, Germany
- John M Simpson
- Department of Congenital Heart Disease, Evelina Children's Healthcare, Guy's and St Thomas' NHS Foundation Trust, London, UK
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
12
Ivanova M, Pescia C, Trapani D, Venetis K, Frascarelli C, Mane E, Cursano G, Sajjadi E, Scatena C, Cerbelli B, d’Amati G, Porta FM, Guerini-Rocco E, Criscitiello C, Curigliano G, Fusco N. Early Breast Cancer Risk Assessment: Integrating Histopathology with Artificial Intelligence. Cancers (Basel) 2024; 16:1981. [PMID: 38893102 PMCID: PMC11171409 DOI: 10.3390/cancers16111981] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2024] [Revised: 05/13/2024] [Accepted: 05/17/2024] [Indexed: 06/21/2024] Open
Abstract
Effective risk assessment in early breast cancer is essential for informed clinical decision-making, yet consensus on defining risk categories remains challenging. This paper explores evolving approaches in risk stratification, encompassing histopathological, immunohistochemical, and molecular biomarkers alongside cutting-edge artificial intelligence (AI) techniques. Leveraging machine learning, deep learning, and convolutional neural networks, AI is reshaping predictive algorithms for recurrence risk, thereby revolutionizing diagnostic accuracy and treatment planning. Beyond detection, AI applications extend to histological subtyping, grading, lymph node assessment, and molecular feature identification, fostering personalized therapy decisions. With rising cancer rates, it is crucial to implement AI to accelerate breakthroughs in clinical practice, benefiting both patients and healthcare providers. However, it is important to recognize that while AI offers powerful automation and analysis tools, it lacks the nuanced understanding, clinical context, and ethical considerations inherent to human pathologists in patient care. Hence, the successful integration of AI into clinical practice demands collaborative efforts between medical experts and computational pathologists to optimize patient outcomes.
Affiliation(s)
- Mariia Ivanova
- Division of Pathology, European Institute of Oncology IRCCS, 20141 Milan, Italy; (M.I.); (C.P.); (K.V.); (C.F.); (E.M.); (G.C.); (E.S.); (F.M.P.); (E.G.-R.)
- Carlo Pescia
- Division of Pathology, European Institute of Oncology IRCCS, 20141 Milan, Italy; (M.I.); (C.P.); (K.V.); (C.F.); (E.M.); (G.C.); (E.S.); (F.M.P.); (E.G.-R.)
- Dario Trapani
- Division of New Drugs and Early Drug Development for Innovative Therapies, European Institute of Oncology IRCCS, 20141 Milan, Italy; (D.T.); (C.C.); (G.C.)
- Department of Oncology and Hemato-Oncology, University of Milan, 20122 Milan, Italy
- Konstantinos Venetis
- Division of Pathology, European Institute of Oncology IRCCS, 20141 Milan, Italy; (M.I.); (C.P.); (K.V.); (C.F.); (E.M.); (G.C.); (E.S.); (F.M.P.); (E.G.-R.)
- Chiara Frascarelli
- Division of Pathology, European Institute of Oncology IRCCS, 20141 Milan, Italy; (M.I.); (C.P.); (K.V.); (C.F.); (E.M.); (G.C.); (E.S.); (F.M.P.); (E.G.-R.)
- Department of Oncology and Hemato-Oncology, University of Milan, 20122 Milan, Italy
- Eltjona Mane
- Division of Pathology, European Institute of Oncology IRCCS, 20141 Milan, Italy; (M.I.); (C.P.); (K.V.); (C.F.); (E.M.); (G.C.); (E.S.); (F.M.P.); (E.G.-R.)
- Giulia Cursano
- Division of Pathology, European Institute of Oncology IRCCS, 20141 Milan, Italy; (M.I.); (C.P.); (K.V.); (C.F.); (E.M.); (G.C.); (E.S.); (F.M.P.); (E.G.-R.)
- Department of Oncology and Hemato-Oncology, University of Milan, 20122 Milan, Italy
- Elham Sajjadi
- Division of Pathology, European Institute of Oncology IRCCS, 20141 Milan, Italy; (M.I.); (C.P.); (K.V.); (C.F.); (E.M.); (G.C.); (E.S.); (F.M.P.); (E.G.-R.)
- Department of Oncology and Hemato-Oncology, University of Milan, 20122 Milan, Italy
- Cristian Scatena
- Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, 56126 Pisa, Italy
- Bruna Cerbelli
- Department of Medical-Surgical Sciences and Biotechnologies, Sapienza University of Rome, 00185 Rome, Italy
- Giulia d’Amati
- Department of Radiological, Oncological and Pathological Sciences, Sapienza University of Rome, 00185 Rome, Italy
- Francesca Maria Porta
- Division of Pathology, European Institute of Oncology IRCCS, 20141 Milan, Italy; (M.I.); (C.P.); (K.V.); (C.F.); (E.M.); (G.C.); (E.S.); (F.M.P.); (E.G.-R.)
- Elena Guerini-Rocco
- Division of Pathology, European Institute of Oncology IRCCS, 20141 Milan, Italy; (M.I.); (C.P.); (K.V.); (C.F.); (E.M.); (G.C.); (E.S.); (F.M.P.); (E.G.-R.)
- Department of Oncology and Hemato-Oncology, University of Milan, 20122 Milan, Italy
- Carmen Criscitiello
- Division of New Drugs and Early Drug Development for Innovative Therapies, European Institute of Oncology IRCCS, 20141 Milan, Italy; (D.T.); (C.C.); (G.C.)
- Department of Oncology and Hemato-Oncology, University of Milan, 20122 Milan, Italy
- Giuseppe Curigliano
- Division of New Drugs and Early Drug Development for Innovative Therapies, European Institute of Oncology IRCCS, 20141 Milan, Italy; (D.T.); (C.C.); (G.C.)
- Department of Oncology and Hemato-Oncology, University of Milan, 20122 Milan, Italy
- Nicola Fusco
- Division of Pathology, European Institute of Oncology IRCCS, 20141 Milan, Italy; (M.I.); (C.P.); (K.V.); (C.F.); (E.M.); (G.C.); (E.S.); (F.M.P.); (E.G.-R.)
- Department of Oncology and Hemato-Oncology, University of Milan, 20122 Milan, Italy
13
Aharoni E, Fernandes S, Brady DJ, Alexander C, Criner M, Queen K, Rando J, Nahmias E, Crespo V. Attributions toward artificial agents in a modified Moral Turing Test. Sci Rep 2024; 14:8458. [PMID: 38688951 PMCID: PMC11061136 DOI: 10.1038/s41598-024-58087-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2023] [Accepted: 03/25/2024] [Indexed: 05/02/2024] Open
Abstract
Advances in artificial intelligence (AI) raise important questions about whether people view moral evaluations by AI systems similarly to human-generated moral evaluations. We conducted a modified Moral Turing Test (m-MTT), inspired by the proposal of Allen et al. (Exp Theor Artif Intell 352:24-28, 2004), by asking people to distinguish real human moral evaluations from those made by a popular advanced AI language model: GPT-4. A representative sample of 299 U.S. adults first rated the quality of moral evaluations when blinded to their source. Remarkably, they rated the AI's moral reasoning as superior in quality to humans' along almost all dimensions, including virtuousness, intelligence, and trustworthiness, consistent with passing what Allen and colleagues call the comparative MTT. Next, when tasked with identifying the source of each evaluation (human or computer), people performed significantly above chance levels. Although the AI did not pass this test, this was not because of its inferior moral reasoning but, potentially, its perceived superiority, among other possible explanations. The emergence of language models capable of producing moral responses perceived as superior in quality to humans' raises concerns that people may uncritically accept potentially harmful moral guidance from AI. This possibility highlights the need for safeguards around generative language models in matters of morality.
Affiliation(s)
- Eyal Aharoni
- Department of Psychology, Georgia State University, Atlanta, GA, USA.
- Department of Philosophy, Georgia State University, Atlanta, GA, USA.
- Neuroscience Institute, Georgia State University, Atlanta, GA, USA.
- Daniel J Brady
- Department of Psychology, Georgia State University, Atlanta, GA, USA
- Caelan Alexander
- Department of Psychology, Georgia State University, Atlanta, GA, USA
- Michael Criner
- Department of Psychology, Georgia State University, Atlanta, GA, USA
- Kara Queen
- Department of Psychology, Georgia State University, Atlanta, GA, USA
- Eddy Nahmias
- Department of Philosophy, Georgia State University, Atlanta, GA, USA
- Neuroscience Institute, Georgia State University, Atlanta, GA, USA
- Victor Crespo
- Department of Philosophy, Duke University, Durham, NC, USA
14
Maccaro A, Stokes K, Statham L, He L, Williams A, Pecchia L, Piaggio D. Clearing the Fog: A Scoping Literature Review on the Ethical Issues Surrounding Artificial Intelligence-Based Medical Devices. J Pers Med 2024; 14:443. [PMID: 38793025 PMCID: PMC11121798 DOI: 10.3390/jpm14050443] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2024] [Revised: 04/12/2024] [Accepted: 04/16/2024] [Indexed: 05/26/2024] Open
Abstract
The use of AI in healthcare has sparked much debate among philosophers, ethicists, regulators and policymakers who raised concerns about the implications of such technologies. The presented scoping review captures the progression of the ethical and legal debate and the proposed ethical frameworks available concerning the use of AI-based medical technologies, capturing key themes across a wide range of medical contexts. The ethical dimensions are synthesised in order to produce a coherent ethical framework for AI-based medical technologies, highlighting how transparency, accountability, confidentiality, autonomy, trust and fairness are the top six recurrent ethical issues. The literature also highlighted how it is essential to increase ethical awareness through interdisciplinary research, such that researchers, AI developers and regulators have the necessary education/competence or networks and tools to ensure proper consideration of ethical matters in the conception and design of new AI technologies and their norms. Interdisciplinarity throughout research, regulation and implementation will help ensure AI-based medical devices are ethical, clinically effective and safe. Achieving these goals will facilitate successful translation of AI into healthcare systems, which currently is lagging behind other sectors, to ensure timely achievement of health benefits to patients and the public.
Affiliation(s)
- Alessia Maccaro
- Applied Biomedical Signal Processing Intelligent eHealth Lab, School of Engineering, University of Warwick, Coventry CV4 7AL, UK; (A.M.); (K.S.); (L.S.); (L.H.); (A.W.); (L.P.)
- Katy Stokes
- Applied Biomedical Signal Processing Intelligent eHealth Lab, School of Engineering, University of Warwick, Coventry CV4 7AL, UK; (A.M.); (K.S.); (L.S.); (L.H.); (A.W.); (L.P.)
- Laura Statham
- Applied Biomedical Signal Processing Intelligent eHealth Lab, School of Engineering, University of Warwick, Coventry CV4 7AL, UK; (A.M.); (K.S.); (L.S.); (L.H.); (A.W.); (L.P.)
- Warwick Medical School, University of Warwick, Coventry CV4 7AL, UK
- Lucas He
- Applied Biomedical Signal Processing Intelligent eHealth Lab, School of Engineering, University of Warwick, Coventry CV4 7AL, UK; (A.M.); (K.S.); (L.S.); (L.H.); (A.W.); (L.P.)
- Faculty of Engineering, Imperial College, London SW7 1AY, UK
- Arthur Williams
- Applied Biomedical Signal Processing Intelligent eHealth Lab, School of Engineering, University of Warwick, Coventry CV4 7AL, UK; (A.M.); (K.S.); (L.S.); (L.H.); (A.W.); (L.P.)
- Leandro Pecchia
- Applied Biomedical Signal Processing Intelligent eHealth Lab, School of Engineering, University of Warwick, Coventry CV4 7AL, UK; (A.M.); (K.S.); (L.S.); (L.H.); (A.W.); (L.P.)
- Intelligent Technologies for Health and Well-Being: Sustainable Design, Management and Evaluation, Faculty of Engineering, Università Campus Bio-Medico Roma, Via Alvaro del Portillo, 21, 00128 Rome, Italy
- Davide Piaggio
- Applied Biomedical Signal Processing Intelligent eHealth Lab, School of Engineering, University of Warwick, Coventry CV4 7AL, UK; (A.M.); (K.S.); (L.S.); (L.H.); (A.W.); (L.P.)
15
Verhoeven R, Hulscher JBF. Editorial: Artificial intelligence and machine learning in pediatric surgery. Front Pediatr 2024; 12:1404600. [PMID: 38659697 PMCID: PMC11042026 DOI: 10.3389/fped.2024.1404600] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/21/2024] [Accepted: 04/01/2024] [Indexed: 04/26/2024] Open
Affiliation(s)
- Rosa Verhoeven
- Department of Surgery, Division of Pediatric Surgery, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
- Department of Neonatology, Beatrix Children’s Hospital, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
- Jan B. F. Hulscher
- Department of Surgery, Division of Pediatric Surgery, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
16
Sarangi PK, Narayan RK, Mohakud S, Vats A, Sahani D, Mondal H. Assessing the Capability of ChatGPT, Google Bard, and Microsoft Bing in Solving Radiology Case Vignettes. Indian J Radiol Imaging 2024; 34:276-282. [PMID: 38549897 PMCID: PMC10972658 DOI: 10.1055/s-0043-1777746] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/25/2024] Open
Abstract
Background The field of radiology relies on accurate interpretation of medical images for effective diagnosis and patient care. Recent advancements in artificial intelligence (AI) and natural language processing have sparked interest in exploring the potential of AI models in assisting radiologists. However, limited research has been conducted to assess the performance of AI models in radiology case interpretation, particularly in comparison to human experts. Objective This study aimed to evaluate the performance of ChatGPT, Google Bard, and Bing in solving radiology case vignettes (Fellowship of the Royal College of Radiologists 2A [FRCR2A] examination style questions) by comparing their responses to those provided by two radiology residents. Methods A total of 120 multiple-choice questions based on radiology case vignettes were formulated according to the pattern of the FRCR2A examination. The questions were presented to ChatGPT, Google Bard, and Bing. Two residents took the examination with the same questions in 3 hours. The responses generated by the AI models were collected and compared to the answer keys, and the explanations supporting the answers were rated by the two radiologists. A cutoff of 60% was set as the passing score. Results The two residents (63.33% and 57.5%) outperformed the three AI models: Bard (44.17%), Bing (53.33%), and ChatGPT (45%), but only one resident passed the examination. The response patterns among the five respondents differed significantly (p = 0.0117). In addition, the agreement among the generative AI models was significant (intraclass correlation coefficient [ICC] = 0.628), but there was no agreement between the residents (kappa = -0.376). The explanations given by the generative AI models in support of their answers were 44.72% accurate. Conclusion Humans exhibited superior accuracy compared to the AI models, showcasing a stronger comprehension of the subject matter. None of the three AI models achieved the minimum percentage needed to pass an FRCR2A examination. However, the generative AI models showed significant agreement in their answers, whereas the residents exhibited low agreement, highlighting a lack of consistency in their responses.
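The chance-corrected agreement statistics reported in this abstract (kappa between the residents, ICC among the models) can be illustrated with a small Cohen's kappa computation. The answer labels below are invented for illustration and are not data from the study.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for the
    agreement expected by chance from each rater's label frequencies."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    expected = sum(count_a[lab] * count_b[lab] for lab in labels) / (n * n)
    return (observed - expected) / (1 - expected)

# Invented example: two raters choosing options A-D on 8 questions.
a = ["A", "B", "C", "D", "A", "B", "C", "D"]
b = ["A", "B", "C", "A", "A", "B", "D", "C"]
k = cohens_kappa(a, b)  # positive: agreement above chance
```

A kappa near 0 means agreement no better than chance, and negative values (as between the residents) mean agreement below chance.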
Affiliation(s)
- Pradosh Kumar Sarangi
- Department of Radiodiagnosis, All India Institute of Medical Sciences, Deoghar, Jharkhand, India
- Ravi Kant Narayan
- Department of Anatomy, ESIC Medical College & Hospital, Bihta, Patna, Bihar, India
- Sudipta Mohakud
- Department of Radiodiagnosis, All India Institute of Medical Sciences, Bhubaneswar, Odisha, India
- Aditi Vats
- Department of Radiodiagnosis, All India Institute of Medical Sciences, Bhubaneswar, Odisha, India
- Debabrata Sahani
- Department of Radiodiagnosis, All India Institute of Medical Sciences, Bhubaneswar, Odisha, India
- Himel Mondal
- Department of Physiology, All India Institute of Medical Sciences, Deoghar, Jharkhand, India
17
Islam A, Banerjee A, Wati SM, Banerjee S, Shrivastava D, Srivastava KC. Utilizing Artificial Intelligence Application for Diagnosis of Oral Lesions and Assisting Young Oral Histopathologist in Deriving Diagnosis from Provided Features - A Pilot study. JOURNAL OF PHARMACY AND BIOALLIED SCIENCES 2024; 16:S1136-S1139. [PMID: 38882904 PMCID: PMC11174333 DOI: 10.4103/jpbs.jpbs_1287_23] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/26/2023] [Revised: 01/10/2024] [Accepted: 01/10/2024] [Indexed: 06/18/2024] Open
Abstract
Background AI in healthcare services is advancing every day, with a focus on rising cognitive capabilities. Higher cognitive functions in AI entail performing intricate processes like decision-making, problem-solving, perception, and reasoning. This advanced cognition surpasses basic data handling, encompassing skills to grasp ideas, understand and apply information contextually, and derive novel insights from previous experiences and acquired knowledge. ChatGPT, a natural language processing model, exemplifies this evolution by engaging in conversations with humans and furnishing responses to inquiries. Objective We aimed to understand the capability of ChatGPT in resolving doubts pertaining to symptoms and histological features in the subject of oral pathology. The study's objective was to evaluate ChatGPT's effectiveness in answering questions pertaining to diagnoses. Methods This cross-sectional study was done using the AI-based ChatGPT application, which provides a free service for research and learning purposes. The current version, ChatGPT 3.5, was used to obtain responses to a total of 25 queries. These randomly asked questions were based on basic queries from the patient's perspective and from early-career oral histopathologists. The responses were obtained and stored for further processing, then evaluated by five experienced pathologists on a four-point Likert scale. The scores were then used to derive kappa values for reliability. Results & Statistical Analysis All 25 queries were answered by the program in the shortest possible time. The sensitivity and specificity of the methods and the responses were represented using frequencies and percentages. The responses were analysed, and agreement was statistically significant based on the kappa values. Conclusion The proficiency of ChatGPT in handling intricate reasoning queries within pathology demonstrated a noteworthy level of relational accuracy. Consequently, its text output created coherent links between elements, producing meaningful responses. This suggests that scholars and students can rely on this program to address reasoning-based inquiries. Nevertheless, considering the continual advancements in the program's development, further research is essential to determine its accuracy levels in future versions.
Affiliation(s)
- Atikul Islam
- Department of Oral and Maxillofacial Pathology, Awadh Dental College and Hospital, Jamshedpur, Jharkhand, India
- Abhishek Banerjee
- Department of Oral and Maxillofacial Pathology and Oral Microbiology, Awadh Dental College and Hospital, Jamshedpur, Jharkhand, India
- Adjunct Faculty, Oral and Maxillofacial Pathology, Faculty of Dental Medicine, Universitas Airlangga, Indonesia
- Sisca Meida Wati
- Oral and Maxillofacial Pathology, Faculty of Dental Medicine, Universitas Airlangga, Surabaya, East Java, Indonesia
- Sumita Banerjee
- Oral and Maxillofacial Pathology, Dental College, RIMS, Imphal, Manipur, India
- Deepti Shrivastava
- Division of Periodontics, Department of Preventive Dental Sciences, College of Dentistry, Jouf University, Sakaka, Saudi Arabia
- Department of Periodontics, Saveetha Dental College, Saveetha Institute of Medical and Technical Sciences, Saveetha University, Chennai, Tamil Nadu, India
- Kumar Chandan Srivastava
- Division of Oral Medicine and Radiology, Department of Oral and Maxillofacial Surgery and Diagnostic Sciences, College of Dentistry, Jouf University, Sakaka, Saudi Arabia
18
Sohail SS, Madsen DØ, Farhat F, Alam MA. ChatGPT and Vaccines: Can AI Chatbots Boost Awareness and Uptake? Ann Biomed Eng 2024; 52:446-450. [PMID: 37428336 DOI: 10.1007/s10439-023-03305-y] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/25/2023] [Accepted: 07/03/2023] [Indexed: 07/11/2023]
Abstract
The global COVID-19 pandemic has affected all spheres of human life, resulting in millions of deaths and overwhelming medical facilities. Moreover, the world has witnessed great financial hardship because of job losses resulting in economic havoc. Many sections of society have contributed in different ways to slow the spread of the virus and protect public health. For example, medical scientists are praised for their efforts to develop COVID-19 vaccines. Clinical trials have shown that the COVID-19 vaccines are highly effective in preventing symptomatic COVID-19 infections. However, many people around the world have been hesitant to get vaccinated. Vaccine misconceptions have emerged and increased due to a combination of factors, including the availability of information on the Internet and the influence of celebrities and opinion leaders. In this context, we have analyzed ChatGPT responses to relevant queries on vaccine misconceptions. The positive responses and supportive opinions provided by the AI chatbot could be instrumental in shaping people's perceptions of vaccines and in encouraging users to get vaccinated and reduce misconceptions.
Affiliation(s)
- Shahab Saquib Sohail
- Department of Computer Science and Engineering, School of Engineering Sciences and Technology, Jamia Hamdard, New Delhi, 110062, India
- Dag Øivind Madsen
- USN School of Business, University of South-Eastern Norway, 3511, Hønefoss, Norway
- Faiza Farhat
- Department of Zoology, Aligarh Muslim University, Aligarh, U.P., 202002, India
- M Afshar Alam
- Department of Computer Science and Engineering, School of Engineering Sciences and Technology, Jamia Hamdard, New Delhi, 110062, India
19
Çiftci N, Sarman A, Yıldız M, Çiftci K. Use of ChatGPT in health: benefits, hazards, and recommendations. Public Health 2024; 228:e1-e2. [PMID: 38346914 DOI: 10.1016/j.puhe.2023.12.032] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2023] [Revised: 12/11/2023] [Accepted: 12/28/2023] [Indexed: 03/16/2024]
Affiliation(s)
- N Çiftci
- Department of Nursing, Faculty of Health Sciences, Muş Alparslan University, Muş, Turkey
- A Sarman
- Department of Pediatric Nursing, Faculty of Health Science, Bingöl University, Bingöl, Turkey
- M Yıldız
- Department of Midwifery, Faculty of Health Science, Sakarya University, Sakarya, Turkey
- K Çiftci
- Department of Medical Services and Techniques, Vocational School of Health Services, Muş Alparslan University, Muş, Turkey
20
Kacena MA, Plotkin LI, Fehrenbacher JC. The Use of Artificial Intelligence in Writing Scientific Review Articles. Curr Osteoporos Rep 2024; 22:115-121. [PMID: 38227177 PMCID: PMC10912250 DOI: 10.1007/s11914-023-00852-0] [Citation(s) in RCA: 14] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 12/21/2023] [Indexed: 01/17/2024]
Abstract
PURPOSE OF REVIEW With the recent explosion in the use of artificial intelligence (AI) and specifically ChatGPT, we sought to determine whether ChatGPT could be used to assist in writing credible, peer-reviewed, scientific review articles. We also sought to assess, in a scientific study, the advantages and limitations of using ChatGPT for this purpose. To accomplish this, 3 topics of importance in musculoskeletal research were selected: (1) the intersection of Alzheimer's disease and bone; (2) the neural regulation of fracture healing; and (3) COVID-19 and musculoskeletal health. For each of these topics, 3 approaches to write manuscript drafts were undertaken: (1) human only; (2) ChatGPT only (AI-only); and (3) combination approach of #1 and #2 (AI-assisted). Articles were extensively fact checked and edited to ensure scientific quality, resulting in final manuscripts that were significantly different from the original drafts. Numerous parameters were measured throughout the process to quantitate the advantages and disadvantages of each approach. RECENT FINDINGS Overall, use of AI decreased the time spent to write the review article, but required more extensive fact checking. With the AI-only approach, up to 70% of the references cited were found to be inaccurate. Interestingly, the AI-assisted approach resulted in the highest similarity indices, suggesting a higher likelihood of plagiarism. Finally, although the technology is rapidly changing, at the time of the study ChatGPT 4.0 had a cutoff date of September 2021, rendering identification of recent articles impossible. Therefore, all literature published past the cutoff date was manually provided to ChatGPT, rendering approaches #2 and #3 identical for contemporary citations. As a result, for the COVID-19 and musculoskeletal health topic, approach #2 was abandoned midstream due to the extensive overlap with approach #3. The main objective of this scientific study was to determine whether AI could be used in a scientifically appropriate manner to improve the scientific writing process. Indeed, AI reduced the time for writing but produced significant inaccuracies. These inaccuracies mean that AI cannot currently be used alone, but it could be used, with careful human oversight, to assist in writing scientific review articles.
Affiliation(s)
- Melissa A Kacena
- Department of Orthopaedic Surgery, Indiana University School of Medicine, Indianapolis, IN, 46202, USA.
- Department of Anatomy, Cell Biology & Physiology, Indiana University School of Medicine, Indianapolis, IN, 46202, USA.
- Indiana Center for Musculoskeletal Health, Indiana University School of Medicine, Indianapolis, IN, 46202, USA.
- Richard L. Roudebush VA Medical Center, Indianapolis, IN, 46202, USA.
- Lilian I Plotkin
- Department of Anatomy, Cell Biology & Physiology, Indiana University School of Medicine, Indianapolis, IN, 46202, USA
- Indiana Center for Musculoskeletal Health, Indiana University School of Medicine, Indianapolis, IN, 46202, USA
- Richard L. Roudebush VA Medical Center, Indianapolis, IN, 46202, USA
- Jill C Fehrenbacher
- Indiana Center for Musculoskeletal Health, Indiana University School of Medicine, Indianapolis, IN, 46202, USA
- Department of Pharmacology and Toxicology, Indiana University School of Medicine, Indianapolis, IN, 46202, USA
- Stark Neuroscience Research Institute, Indiana University School of Medicine, Indianapolis, IN, 46202, USA
21
Singh A, Pandey J. Artificial intelligence adoption in extended HR ecosystems: enablers and barriers. An abductive case research. Front Psychol 2024; 14:1339782. [PMID: 38327504 PMCID: PMC10847531 DOI: 10.3389/fpsyg.2023.1339782] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2023] [Accepted: 12/22/2023] [Indexed: 02/09/2024] Open
Abstract
Artificial intelligence (AI) has disrupted modern workplaces like never before and has induced digital workstyles. These technological advancements are generating significant interest among HR leaders to embrace AI in human resource management (HRM). Researchers and practitioners are keen to investigate the adoption of AI in HRM and the resultant human-machine collaboration. This study investigates HRM specific factors that enable and inhibit the adoption of AI in extended HR ecosystems and adopts a qualitative case research design with an abductive approach. It studies three well-known Indian companies at different stages of AI adoption in HR functions. This research investigates key enablers such as optimistic and collaborative employees, strong digital leadership, reliable HR data, specialized HR partners, and well-rounded AI ethics. The study also examines barriers to adoption: the inability to have a timely pulse check of employees' emotions, ineffective collaboration of HR employees with digital experts as well as external HR partners, and not embracing AI ethics. This study contributes to the theory by providing a model for AI adoption and proposes additions to the unified theory of acceptance and use of technology in the context of AI adoption in HR ecosystems. The study also contributes to the best-in-class industry HR practices and digital policy formulation to reimagine workplaces, promote harmonious human-AI collaboration, and make workplaces future-ready in the wake of massive digital disruptions.
Affiliation(s)
- Antarpreet Singh
- Organizational Behaviour and Human Resource Management Area, Indian Institute of Management Indore, Indore, India
- Human Resource Area, FORE School of Management, New Delhi, India
- Jatin Pandey
- Organizational Behaviour and Human Resource Management Area, Indian Institute of Management Indore, Indore, India
22
Tanos P, Yiangou I, Prokopiou G, Kakas A, Tanos V. Gynaecological Artificial Intelligence Diagnostics (GAID) and Its Performance as a Tool for the Specialist Doctor. Healthcare (Basel) 2024; 12:223. [PMID: 38255110 PMCID: PMC10815463 DOI: 10.3390/healthcare12020223] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2023] [Revised: 01/08/2024] [Accepted: 01/11/2024] [Indexed: 01/24/2024] Open
Abstract
BACKGROUND Human-centric artificial intelligence (HCAI) aims to provide support systems that can act as peer companions to an expert in a specific domain, by simulating their way of thinking and decision-making in solving real-life problems. The gynaecological artificial intelligence diagnostics (GAID) assistant is such a system. Based on artificial intelligence (AI) argumentation technology, it was developed to incorporate, as much as possible, a complete representation of the medical knowledge in gynaecology and to become a real-life tool that will practically enhance the quality of healthcare services and reduce stress for the clinician. Our study aimed to evaluate GAID's efficacy and accuracy in assisting the working expert gynaecologist during day-to-day clinical practice. METHODS Knowledge-based systems utilize a knowledge base (theory) which holds evidence-based rules ("IF-THEN" statements) that are used to prove whether a conclusion (such as a disease, medication or treatment) is possible or not, given a set of input data. This approach uses argumentation frameworks, where rules act as claims that support a specific decision (arguments) and argue for its dominance over others. The result is a set of admissible arguments which support the final decision and explain its cause. RESULTS Across seven subcategories of gynaecological presentations (bleeding, endocrinology, cancer, pelvic pain, urogynaecology, sexually transmitted infections and vulva pathology) in fifty patients, GAID demonstrates an average overall closeness accuracy of 0.87. Because the system provides explanations supporting a diagnosis against other possible diseases, the evaluation process also enabled modular improvement of the system wherever its diagnoses diverged from the specialist's.
CONCLUSIONS GAID successfully demonstrates an average accuracy of 0.87 when measuring the closeness of the system's diagnosis to that of the senior consultant. The system further provides meaningful and helpful explanations for its diagnoses that can help clinicians to develop an increasing level of trust towards the system. It also provides a practical database, which can be used as a structured history-taking assistant and a friendly patient record-keeper, while improving precision by providing a full list of differential diagnoses. Importantly, the design and implementation of the system facilitates its continuous development with a set methodology that allows minimal revision of the system in the face of new information. Further large-scale studies are required to evaluate GAID more thoroughly and to identify its limiting boundaries.
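The IF-THEN argumentation scheme described in this abstract can be sketched minimally as follows. All rule names, findings, and attack relations here are illustrative assumptions for exposition, not taken from the actual GAID knowledge base:

```python
# Minimal sketch of rule-based argumentation for diagnosis (hypothetical
# rules; not the GAID implementation).

def diagnose(findings, rules):
    """Return conclusions whose supporting arguments are not defeated."""
    # An argument is a rule whose "IF" conditions all hold in the findings.
    arguments = [r for r in rules if r["if"] <= findings]
    supported = {r["then"] for r in arguments}
    # A conclusion is defeated if an applicable rule attacks it.
    defeated = {a for r in arguments for a in r.get("attacks", set())}
    return supported - defeated

rules = [
    {"if": {"pelvic_pain", "fever"}, "then": "PID"},
    {"if": {"pelvic_pain"}, "then": "endometriosis"},
    # A more specific rule argues for its dominance over a general one.
    {"if": {"pelvic_pain", "fever"}, "then": "PID",
     "attacks": {"endometriosis"}},
]

print(diagnose({"pelvic_pain", "fever"}, rules))  # {'PID'}
print(diagnose({"pelvic_pain"}, rules))           # {'endometriosis'}
```

The set of surviving conclusions plays the role of the "admissible arguments" the abstract mentions: each is backed by an applicable rule and explains its own cause via that rule's conditions.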
Affiliation(s)
- Panayiotis Tanos
- Institute of Applied Health Sciences, University of Aberdeen, NHS Grampian, Aberdeen AB24 3FX, UK
- Ioannis Yiangou
- Department of Computer Science, University of Cyprus, Nicosia 1678, Cyprus
- Giorgos Prokopiou
- Department of Computer Science, University of Cyprus, Nicosia 1678, Cyprus
- Antonis Kakas
- Department of Computer Science, University of Cyprus, Nicosia 1678, Cyprus
- Vasilios Tanos
- Medical School, University of Nicosia, Nicosia 2408, Cyprus
- Aretaeio Hospital, 55-57 Andreas Avraamides, Strovolos, Nicosia 2024, Cyprus
23
Daher OA, Dabbousi AA, Chamroukh R, Saab AY, Al Ayoubi AR, Salameh P. Artificial Intelligence: Knowledge and Attitude Among Lebanese Medical Students. Cureus 2024; 16:e51466. [PMID: 38298326 PMCID: PMC10829838 DOI: 10.7759/cureus.51466]
Abstract
Background Artificial intelligence (AI) has taken on a variety of functions in the medical field, and research has proven that it can address complicated issues in various applications. It is unknown whether Lebanese medical students and residents have a detailed understanding of this concept, and little is known about their attitudes toward AI. Aim This study fills a critical gap by revealing the knowledge and attitude of Lebanese medical students toward AI. Methods A multi-centric survey targeting 365 medical students from seven medical schools across Lebanon was conducted to assess their knowledge of and attitudes toward AI in medicine. The survey consists of five sections: the first part includes socio-demographic variables, while the second comprises the 'Medical Artificial Intelligence Readiness Scale' for medical students. The third part focuses on attitudes toward AI in medicine, the fourth assesses understanding of deep learning, and the fifth targets considerations of radiology as a specialization. Results There is a notable awareness of AI among students who are eager to learn about it. Despite this interest, there exists a gap in knowledge regarding deep learning, albeit alongside a positive attitude towards it. Students who are more open to embracing AI technology tend to have a better understanding of AI concepts (p=0.001). Additionally, a higher percentage of students from Mount Lebanon (71.6%) showed an inclination towards using AI compared to Beirut (63.2%) (p=0.03). Noteworthy are the Lebanese University and Saint Joseph University, where the highest proportions of students are willing to integrate AI into the medical field (79.4% and 76.7%, respectively; p=0.001). Conclusion It was concluded that most Lebanese medical students might not necessarily comprehend the core technological ideas of AI and deep learning. This lack of understanding was evident from the substantial amount of misinformation among the students. 
Consequently, there appears to be a significant demand for the inclusion of AI technologies in Lebanese medical school courses.
Affiliation(s)
- Omar A Daher
- Faculty of Medicine, Beirut Arab University, Beirut, LBN
- Amir Rabih Al Ayoubi
- Department of General Medicine, Faculty of Medical Sciences, Lebanese University, Beirut, LBN
- Pascale Salameh
- Department of Primary Care and Population Health, University of Nicosia Medical School, Nicosia, CYP
- Department of Public Health, Institut National de Santé Publique, d'Épidémiologie Clinique et de Toxicologie (INSPECT-LB), Beirut, LBN
- Department of Pharmacy Practice, Lebanese University, Beirut, LBN
- School of Medicine, Lebanese American University, Beirut, LBN
24
Johnson EA, Dudding KM, Carrington JM. When to err is inhuman: An examination of the influence of artificial intelligence-driven nursing care on patient safety. Nurs Inq 2024; 31:e12583. [PMID: 37459179 DOI: 10.1111/nin.12583]
Abstract
Artificial intelligence, as a nonhuman entity, is increasingly used to inform, direct, or supplant nursing care and clinical decision-making. The boundaries between human- and nonhuman-driven nursing care are blurred with the advent of sensors, wearables, camera devices, and humanoid robots at such an accelerated pace that their influence on patient safety has not yet been critically evaluated. Since the pivotal release of To Err is Human, patient safety is being challenged by the dynamic healthcare environment like never before, with nursing at a critical juncture to steer the course of artificial intelligence integration in clinical decision-making. This paper presents an overview of artificial intelligence and its application in healthcare and highlights the implications which affect nursing as a profession, including perspectives on nursing education and training recommendations. The legal and policy challenges which emerge when artificial intelligence influences the risk of clinical errors and safety issues are discussed.
Affiliation(s)
- Elizabeth A Johnson
- Mark & Robyn Jones College of Nursing, Montana State University, Bozeman, Montana, USA
- Katherine M Dudding
- Department of Family, Community, and Health Systems, UAB School of Nursing, The University of Alabama at Birmingham, Birmingham, Alabama, USA
- Jane M Carrington
- Department of Family, Community and Health System Science, University of Florida College of Nursing, Gainesville, Florida, USA
25
Lopes MA, Martins H, Correia T. Artificial intelligence and the future in health policy, planning and management. Int J Health Plann Manage 2024; 39:3-8. [PMID: 37749780 DOI: 10.1002/hpm.3709]
Affiliation(s)
- Henrique Martins
- Faculdade de Ciências da Saúde, Universidade da Beira Interior and ISCTE-IUL, Lisbon, Portugal
- Tiago Correia
- Global Health and Tropical Medicine, GHTM, Associate Laboratory in Translation and Innovation Towards Global Health, LA-REAL, Instituto de Higiene e Medicina Tropical, IHMT, Universidade Nova de Lisboa, UNL, Lisboa, Portugal
26
Malik S, Zaheer S. ChatGPT as an aid for pathological diagnosis of cancer. Pathol Res Pract 2024; 253:154989. [PMID: 38056135 DOI: 10.1016/j.prp.2023.154989]
Abstract
Diagnostic workup of cancer patients is highly reliant on the science of pathology using cytopathology, histopathology, and other ancillary techniques like immunohistochemistry and molecular cytogenetics. Data processing and learning by means of artificial intelligence (AI) has become a spearhead for the advancement of medicine, with pathology and laboratory medicine being no exceptions. ChatGPT, an AI-based chatbot recently launched by OpenAI, is currently the talk of the town, and its role in cancer diagnosis is also being explored meticulously. Integrating digital slides, implementing advanced algorithms, and applying computer-aided diagnostic techniques extend the frontiers of the pathologist's view beyond the microscopic slide and enable effective integration, assimilation, and utilization of knowledge beyond human limits and boundaries. Despite its numerous advantages in the pathological diagnosis of cancer, the approach comes with several challenges, such as integrating digital slides with input language parameters, problems of bias, and legal issues, which must be addressed soon so that pathologists diagnosing malignancies stay on the same bandwagon and do not miss the train.
Affiliation(s)
- Shaivy Malik
- Department of Pathology, Vardhman Mahavir Medical College and Safdarjung Hospital, New Delhi, India
- Sufian Zaheer
- Department of Pathology, Vardhman Mahavir Medical College and Safdarjung Hospital, New Delhi, India
27
Yousef M, Allmer J. Deep learning in bioinformatics. Turk J Biol 2023; 47:366-382. [PMID: 38681776 PMCID: PMC11045206 DOI: 10.55730/1300-0152.2671]
Abstract
Deep learning is a powerful machine learning technique that can learn from large amounts of data using multiple layers of artificial neural networks. This paper reviews some applications of deep learning in bioinformatics, a field that deals with analyzing and interpreting biological data. We first introduce the basic concepts of deep learning and then survey the recent advances and challenges of applying deep learning to various bioinformatics problems, such as genome sequencing, gene expression analysis, protein structure prediction, drug discovery, and disease diagnosis. We also discuss future directions and opportunities for deep learning in bioinformatics. We aim to provide an overview of deep learning so that bioinformaticians applying deep learning models can consider all critical technical and ethical aspects. Thus, our target audience is biomedical informatics researchers who use deep learning models for inference. This review will inspire more bioinformatics researchers to adopt deep-learning methods for their research questions while considering fairness, potential biases, explainability, and accountability.
Affiliation(s)
- Malik Yousef
- Department of Information Systems, Zefat Academic College, Zefat, Israel
- Jens Allmer
- Medical Informatics and Bioinformatics, Institute for Measurement Engineering and Sensor Technology, Hochschule Ruhr West, University of Applied Sciences, Mülheim an der Ruhr, Germany
28
Zawiah M, Al-Ashwal FY, Gharaibeh L, Abu Farha R, Alzoubi KH, Abu Hammour K, Qasim QA, Abrah F. ChatGPT and Clinical Training: Perception, Concerns, and Practice of Pharm-D Students. J Multidiscip Healthc 2023; 16:4099-4110. [PMID: 38116306 PMCID: PMC10729768 DOI: 10.2147/jmdh.s439223]
Abstract
Background The emergence of Chat-Generative Pre-trained Transformer (ChatGPT) by OpenAI has revolutionized AI technology, demonstrating significant potential in healthcare and pharmaceutical education, yet its real-world applicability in clinical training warrants further investigation. Methods A cross-sectional study was conducted between April and May 2023 to assess PharmD students' perceptions, concerns, and experiences regarding the integration of ChatGPT into clinical pharmacy education. The study utilized a convenient sampling method through online platforms and involved a questionnaire with sections on demographics, perceived benefits, concerns, and experience with ChatGPT. Statistical analysis was performed using SPSS, including descriptive and inferential analyses. Results The findings of the study involving 211 PharmD students revealed that the majority of participants were male (77.3%), and had prior experience with artificial intelligence (68.2%). Over two-thirds were aware of ChatGPT. Most students (n= 139, 65.9%) perceived potential benefits in using ChatGPT for various clinical tasks, with concerns including over-reliance, accuracy, and ethical considerations. Adoption of ChatGPT in clinical training varied, with some students not using it at all, while others utilized it for tasks like evaluating drug-drug interactions and developing care plans. Previous users tended to have higher perceived benefits and lower concerns, but the differences were not statistically significant. Conclusion Utilizing ChatGPT in clinical training offers opportunities, but students' lack of trust in it for clinical decisions highlights the need for collaborative human-ChatGPT decision-making. It should complement healthcare professionals' expertise and be used strategically to compensate for human limitations. Further research is essential to optimize ChatGPT's effective integration.
Affiliation(s)
- Mohammed Zawiah
- Department of Clinical Pharmacy, College of Pharmacy, Northern Border University, Rafha, 91911, Saudi Arabia
- Department of Pharmacy Practice, College of Clinical Pharmacy, Hodeidah University, Al Hodeidah, Yemen
- Fahmi Y Al-Ashwal
- Department of Clinical Pharmacy, College of Pharmacy, Al-Ayen University, Thi-Qar, Iraq
- Lobna Gharaibeh
- Pharmacological and Diagnostic Research Center, Faculty of Pharmacy, Al-Ahliyya Amman University, Amman, Jordan
- Rana Abu Farha
- Clinical Pharmacy and Therapeutics Department, Faculty of Pharmacy, Applied Science Private University, Amman, Jordan
- Karem H Alzoubi
- Department of Pharmacy Practice and Pharmacotherapeutics, University of Sharjah, Sharjah, 27272, United Arab Emirates
- Department of Clinical Pharmacy, Faculty of Pharmacy, Jordan University of Science and Technology, Irbid, 22110, Jordan
- Khawla Abu Hammour
- Department of Clinical Pharmacy and Biopharmaceutics, Faculty of Pharmacy, University of Jordan, Amman, Jordan
- Qutaiba A Qasim
- Department of Clinical Pharmacy, College of Pharmacy, Al-Ayen University, Thi-Qar, Iraq
- Fahd Abrah
- Discipline of Social and Administrative Pharmacy, School of Pharmaceutical Sciences, Universiti Sains Malaysia, Penang, Malaysia
29
Lanotte F, O’Brien MK, Jayaraman A. AI in Rehabilitation Medicine: Opportunities and Challenges. Ann Rehabil Med 2023; 47:444-458. [PMID: 38093518 PMCID: PMC10767220 DOI: 10.5535/arm.23131]
Abstract
Artificial intelligence (AI) tools are increasingly able to learn from larger and more complex data, thus allowing clinicians and scientists to gain new insights from the information they collect about their patients every day. In rehabilitation medicine, AI can be used to find patterns in huge amounts of healthcare data. These patterns can then be leveraged at the individual level, to design personalized care strategies and interventions to optimize each patient's outcomes. However, building effective AI tools requires many careful considerations about how we collect and handle data, how we train the models, and how we interpret results. In this perspective, we discuss some of the current opportunities and challenges for AI in rehabilitation. We first review recent trends in AI for the screening, diagnosis, treatment, and continuous monitoring of disease or injury, with a special focus on the different types of healthcare data used for these applications. We then examine potential barriers to designing and integrating AI into the clinical workflow, and we propose an end-to-end framework to address these barriers and guide the development of effective AI for rehabilitation. Finally, we present ideas for future work to pave the way for AI implementation in real-world rehabilitation practices.
Affiliation(s)
- Francesco Lanotte
- Max Nader Lab for Rehabilitation Technologies and Outcomes Research, Shirley Ryan AbilityLab, Chicago, IL, United States
- Department of Physical Medicine and Rehabilitation, Northwestern University, Chicago, IL, United States
- Megan K. O’Brien
- Max Nader Lab for Rehabilitation Technologies and Outcomes Research, Shirley Ryan AbilityLab, Chicago, IL, United States
- Department of Physical Medicine and Rehabilitation, Northwestern University, Chicago, IL, United States
- Arun Jayaraman
- Max Nader Lab for Rehabilitation Technologies and Outcomes Research, Shirley Ryan AbilityLab, Chicago, IL, United States
- Department of Physical Medicine and Rehabilitation, Northwestern University, Chicago, IL, United States
30
Bardal M, Chalmers E. Four attributes of intelligence, a thousand questions. Biological Cybernetics 2023; 117:407-409. [PMID: 38059989 DOI: 10.1007/s00422-023-00979-4]
Abstract
Jeff Hawkins is one of those rare individuals who speaks the languages of both AI and neuroscience. In his recent book, "A Thousand Brains: A New Theory of Intelligence", Hawkins proposes that current learning algorithms lack four attributes which will be necessary for true machine intelligence. Here we demonstrate that a minimal learning system which satisfies all four points can be constructed using only simple, classical machine learning techniques. We illustrate that such a system falls short of biological intelligence in some important ways. We suggest that Hawkins' list is a useful model, but the "recipe" for true intelligence-if there is one-may not be so easily defined.
Affiliation(s)
- Matthieu Bardal
- Department of Mathematics and Computing, Mount Royal University, 4825 Mt Royal Gate SW, Calgary, AB, T3E6K6, Canada
- Eric Chalmers
- Department of Mathematics and Computing, Mount Royal University, 4825 Mt Royal Gate SW, Calgary, AB, T3E6K6, Canada
31
Abuyaman O. Strengths and Weaknesses of ChatGPT Models for Scientific Writing About Medical Vitamin B12: Mixed Methods Study. JMIR Form Res 2023; 7:e49459. [PMID: 37948100 PMCID: PMC10674142 DOI: 10.2196/49459]
Abstract
BACKGROUND ChatGPT is a large language model developed by OpenAI designed to generate human-like responses to prompts. OBJECTIVE This study aims to evaluate the ability of GPT-4 to generate scientific content and assist in scientific writing using medical vitamin B12 as the topic. Furthermore, the study will compare the performance of GPT-4 to its predecessor, GPT-3.5. METHODS The study examined responses from GPT-4 and GPT-3.5 to vitamin B12-related prompts, focusing on their quality and characteristics and comparing them to established scientific literature. RESULTS The results indicated that GPT-4 can potentially streamline scientific writing through its ability to edit language and write abstracts, keywords, and abbreviation lists. However, significant limitations of ChatGPT were revealed, including its inability to identify and address bias, inability to include recent information, lack of transparency, and inclusion of inaccurate information. Additionally, it cannot check for plagiarism or provide proper references. The accuracy of GPT-4's answers was found to be superior to GPT-3.5. CONCLUSIONS ChatGPT can be considered a helpful assistant in the writing process but not a replacement for a scientist's expertise. Researchers must remain aware of its limitations and use it appropriately. The improvements in consecutive ChatGPT versions suggest the possibility of overcoming some present limitations in the near future.
Affiliation(s)
- Omar Abuyaman
- Department of Medical Laboratory Sciences, Faculty of Applied Medical Sciences, The Hashemite University, Zarqa, 13133, Jordan
32
Abujaber AA, Abd-Alrazaq A, Al-Qudimat AR, Nashwan AJ. A Strengths, Weaknesses, Opportunities, and Threats (SWOT) Analysis of ChatGPT Integration in Nursing Education: A Narrative Review. Cureus 2023; 15:e48643. [PMID: 38090452 PMCID: PMC10711690 DOI: 10.7759/cureus.48643]
Abstract
Amidst evolving healthcare demands, nursing education plays a pivotal role in preparing future nurses for complex challenges. Traditional approaches, however, must be revised to meet modern healthcare needs. ChatGPT, an AI-based chatbot, has garnered significant attention due to its ability to personalize learning experiences, enhance virtual clinical simulations, and foster collaborative learning in nursing education. This review aims to thoroughly assess the potential impact of integrating ChatGPT into nursing education. The hypothesis is that valuable insights can be provided for stakeholders through a comprehensive SWOT analysis examining the strengths, weaknesses, opportunities, and threats associated with ChatGPT. This will enable informed decisions about its integration, prioritizing improved learning outcomes. A thorough narrative literature review was undertaken to provide a solid foundation for the SWOT analysis. The materials included scholarly articles and reports, which ensure the study's credibility and allow for a holistic and unbiased assessment. The analysis identified accessibility, consistency, adaptability, cost-effectiveness, and staying up-to-date as crucial factors influencing the strengths, weaknesses, opportunities, and threats associated with ChatGPT integration in nursing education. These themes provided a framework to understand the potential risks and benefits of integrating ChatGPT into nursing education. This review highlights the importance of responsible and effective use of ChatGPT in nursing education and the need for collaboration among educators, policymakers, and AI developers. Addressing the identified challenges and leveraging the strengths of ChatGPT can lead to improved learning outcomes and enriched educational experiences for students.
The findings emphasize the importance of responsibly integrating ChatGPT in nursing education, balancing technological advancement with careful consideration of associated risks, to achieve optimal outcomes.
Affiliation(s)
- Alaa Abd-Alrazaq
- AI Center for Precision Health, Weill Cornell Medicine-Qatar, Doha, QAT
- Ahmad R Al-Qudimat
- Department of Public Health, Qatar University, Doha, QAT
- Surgical Research Section, Department of Surgery, Hamad Medical Corporation, Doha, QAT
33
Abu Hammour K, Alhamad H, Al-Ashwal FY, Halboup A, Abu Farha R, Abu Hammour A. ChatGPT in pharmacy practice: a cross-sectional exploration of Jordanian pharmacists' perception, practice, and concerns. J Pharm Policy Pract 2023; 16:115. [PMID: 37789443 PMCID: PMC10548710 DOI: 10.1186/s40545-023-00624-2]
Abstract
OBJECTIVES The purpose of this study is to find out how much pharmacists know about ChatGPT and how they have used it in their practice. Using a survey, we investigated the advantages and disadvantages of utilizing ChatGPT in a pharmacy context, the amount of training necessary to use it proficiently, and the influence on patient care. METHODS This cross-sectional study was carried out between May and June 2023 to assess the potential and problems that pharmacists observed while integrating chatbots powered by AI (ChatGPT) in pharmacy practice. The correlation between perceived benefits and concerns was evaluated using Spearman's rho correlation due to the data's non-normal distribution. Any pharmacists licensed by the Jordanian Pharmacists Association were included in the study. A convenient sampling technique was used to choose the participants, and the study questionnaire was distributed utilizing an online medium (Facebook and WhatsApp). Anyone who expressed interest in taking part was given a link to the study's instructions so they could read them before giving their electronic consent and accessing the survey. RESULTS The potential advantages of ChatGPT in pharmacy practice were widely acknowledged by the participants. The majority of participants (69.9%) concurred that educational material about pharmacy items or therapeutic areas can be provided using ChatGPT, with 66.9% of respondents believing that ChatGPT is a machine learning algorithm. Concerns about the accuracy of AI-generated responses were also prevalent. More than half of the participants (55.7%) raised the possibility that AI systems such as ChatGPT could pick up on and replicate prejudices and discriminatory patterns from the data they were trained on. Analysis shows a statistically significant positive link, albeit a minor one, between the perceived advantages of ChatGPT and its drawbacks (r = 0.255, p < 0.001). However, concerns were strongly correlated with knowledge of ChatGPT.
In contrast to those who were either unsure or had not heard of ChatGPT (64.2%), individuals who had heard of it were more likely to have strong concerns (79.8%) (p = 0.002). Finally, the results show a statistically significant association between the frequency of ChatGPT use and positive perceptions of the tool (p < 0.001). CONCLUSIONS Although ChatGPT has shown promise in health and pharmaceutical practice, its application should be rigorously regulated by evidence-based law. According to the study's findings, pharmacists support the use of ChatGPT in pharmacy practice but have concerns about its use due to ethical reasons, legal problems, privacy concerns, worries about the accuracy of the data generated, data learning, and bias risk.
Affiliation(s)
- Khawla Abu Hammour
- Department of Clinical Pharmacy and Biopharmaceutics, Faculty of Pharmacy, University of Jordan, Amman, Jordan
- Hamza Alhamad
- Department of Clinical Pharmacy, Faculty of Pharmacy, Zarqa University, Zarqa, Jordan
- Fahmi Y Al-Ashwal
- Department of Clinical Pharmacy, College of Pharmacy, Al-Ayen University, Thi-Qar, Iraq
- Department of Clinical Pharmacy and Pharmacy Practice, Faculty of Pharmacy, University of Science and Technology, Sana'a, Yemen
- Abdulsalam Halboup
- Department of Clinical Pharmacy and Pharmacy Practice, Faculty of Pharmacy, University of Science and Technology, Sana'a, Yemen
- Discipline of Clinical Pharmacy, School of Pharmaceutical Sciences, Universiti Sains Malaysia, Gelugor, Pulau Pinang, Malaysia
- Rana Abu Farha
- Clinical Pharmacy and Therapeutics Department, Faculty of Pharmacy, Applied Science Private University, P.O. Box 11937, Amman, Jordan
- Adnan Abu Hammour
- Medrise Medical Center, Dubai Healthcare City, Dubai, United Arab Emirates
34
Izbicka E, Streeper RT. Cancer drug development yesterday, today and tomorrow. Oncoscience 2023; 10:32-33. [PMID: 37601622 PMCID: PMC10434997 DOI: 10.18632/oncoscience.583]
Affiliation(s)
- Elzbieta Izbicka
- New Frontier Labs, San Antonio, TX 78240, USA
35
Da Silveira TBN, Lopes HS. Intelligence across humans and machines: a joint perspective. Front Psychol 2023; 14:1209761. [PMID: 37663348 PMCID: PMC10470035 DOI: 10.3389/fpsyg.2023.1209761]
Abstract
This paper aims to address the divergences and contradictions in the definition of intelligence across different areas of knowledge, particularly in computational intelligence and psychology, where the concept is of significant interest. Despite the differences in motivation and approach, both fields have contributed to the rise of cognitive science. However, the lack of a standardized definition, empirical evidence, or measurement strategy for intelligence is a hindrance to cross-fertilization between these areas, particularly for semantic-based applications. This paper seeks to equalize the definitions of intelligence from the perspectives of computational intelligence and psychology, and offer an overview of the methods used to measure intelligence. We argue that there is no consensus for intelligence, and the term is interchangeably used with similar, opposed, or even contradictory definitions in many fields. This paper concludes with a summary of its central considerations and contributions, where we state intelligence is an agent's ability to process external and internal information to find an optimum adaptation (decision-making) to the environment according to its ontology and then decode this information as an output action.
Affiliation(s)
- Tiago Buatim Nion Da Silveira
- Computational Intelligence Laboratory (LABIC), Federal University of Technology – Paraná, Curitiba, Brazil
- Polytechnic School, University of Vale do Itajaí, Itajaí, Brazil
- Heitor Silvério Lopes
- Computational Intelligence Laboratory (LABIC), Federal University of Technology – Paraná, Curitiba, Brazil
36
Kreps S, George J, Lushenko P, Rao A. Exploring the artificial intelligence "Trust paradox": Evidence from a survey experiment in the United States. PLoS One 2023; 18:e0288109. [PMID: 37463148 DOI: 10.1371/journal.pone.0288109]
Abstract
Advances in Artificial Intelligence (AI) are poised to transform society, national defense, and the economy by increasing efficiency, precision, and safety. Yet, widespread adoption within society depends on public trust and willingness to use AI-enabled technologies. In this study, we propose the possibility of an AI "trust paradox," in which individuals' willingness to use AI-enabled technologies exceeds their level of trust in these capabilities. We conduct a two-part study to explore the trust paradox. First, we conduct a conjoint analysis, varying different attributes of AI-enabled technologies in different domains-including armed drones, general surgery, police surveillance, self-driving cars, and social media content moderation-to evaluate whether and under what conditions a trust paradox may exist. Second, we use causal mediation analysis in the context of a second survey experiment to help explain why individuals use AI-enabled technologies that they do not trust. We find strong support for the trust paradox, particularly in the area of AI-enabled police surveillance, where the levels of support for its use are both higher than other domains but also significantly exceed trust. We unpack these findings to show that several underlying beliefs help account for public attitudes of support, including the fear of missing out, optimism that future versions of the technology will be more trustworthy, a belief that the benefits of AI-enabled technologies outweigh the risks, and calculation that AI-enabled technologies yield efficiency gains. Our findings have important implications for the integration of AI-enabled technologies in multiple settings.
Affiliation(s)
- Sarah Kreps: Cornell University Tech Policy Institute, Menlo Park, CA, United States of America
- Julie George: Cornell University Tech Policy Institute, Menlo Park, CA, United States of America; Stanford Center for International Security and Cooperation, Stanford, CA, United States of America
- Paul Lushenko: Cornell University Tech Policy Institute, Menlo Park, CA, United States of America
- Adi Rao: Cornell University Tech Policy Institute, Menlo Park, CA, United States of America
37. Deiana G, Dettori M, Arghittu A, Azara A, Gabutti G, Castiglia P. Artificial Intelligence and Public Health: Evaluating ChatGPT Responses to Vaccination Myths and Misconceptions. Vaccines (Basel) 2023; 11:1217. [PMID: 37515033] [PMCID: PMC10386180] [DOI: 10.3390/vaccines11071217]
Abstract
Artificial intelligence (AI) tools, such as ChatGPT, are the subject of intense debate regarding their possible applications in contexts such as health care. This study evaluates the Correctness, Clarity, and Exhaustiveness of the answers provided by ChatGPT on the topic of vaccination. The World Health Organization's 11 "myths and misconceptions" about vaccinations were administered to both the free (GPT-3.5) and paid (GPT-4.0) versions of ChatGPT. The AI tool's responses were evaluated qualitatively and quantitatively, with reference to the myths and misconceptions provided by the WHO, independently by two expert raters. The agreement between the raters was significant for both versions (p of K < 0.05). Overall, ChatGPT responses were easy to understand and 85.4% accurate, although one of the questions was misinterpreted. Qualitatively, the GPT-4.0 responses were superior to the GPT-3.5 responses in terms of Correctness, Clarity, and Exhaustiveness (Δ = 5.6%, 17.9%, and 9.3%, respectively). The study shows that, if appropriately questioned, AI tools can represent a useful aid in the health care field. However, when consulted by non-expert users without the support of expert medical advice, these tools are not free from the risk of eliciting misleading responses. Moreover, given the existing social divide in information access, the improved accuracy of answers from the paid version raises further ethical issues.
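The inter-rater agreement reported here ("p of K") refers to Cohen's kappa, the standard chance-corrected agreement statistic for two raters. A minimal sketch, not taken from the cited study, with purely illustrative ratings:

```python
# Cohen's kappa: agreement between two raters, corrected for chance.
# Ratings below are illustrative, not the study's data.
from collections import Counter

def cohens_kappa(r1, r2):
    assert len(r1) == len(r2)
    n = len(r1)
    # Observed agreement: fraction of items rated identically.
    observed = sum(a == b for a, b in zip(r1, r2)) / n
    # Expected chance agreement from each rater's marginal distribution.
    c1, c2 = Counter(r1), Counter(r2)
    expected = sum(c1[k] * c2[k] for k in set(r1) | set(r2)) / n**2
    return (observed - expected) / (1 - expected)

rater1 = ["good", "good", "fair", "poor", "good", "fair"]
rater2 = ["good", "good", "fair", "fair", "good", "poor"]
print(f"kappa = {cohens_kappa(rater1, rater2):.3f}")
```

A kappa of 1 indicates perfect agreement, 0 indicates agreement no better than chance; significance testing of kappa against zero gives the reported p-value.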
Affiliation(s)
- Giovanna Deiana: Department of Biomedical Sciences, University of Sassari, 07100 Sassari, Italy; Department of Medical, Surgical and Experimental Sciences, University Hospital of Sassari, 07100 Sassari, Italy
- Marco Dettori: Department of Medical, Surgical and Experimental Sciences, University Hospital of Sassari, 07100 Sassari, Italy; Department of Medicine, Surgery and Pharmacy, University of Sassari, 07100 Sassari, Italy; Department of Restorative, Pediatric and Preventive Dentistry, University of Bern, 3012 Bern, Switzerland
- Antonella Arghittu: Department of Medicine, Surgery and Pharmacy, University of Sassari, 07100 Sassari, Italy
- Antonio Azara: Department of Medical, Surgical and Experimental Sciences, University Hospital of Sassari, 07100 Sassari, Italy; Department of Medicine, Surgery and Pharmacy, University of Sassari, 07100 Sassari, Italy
- Giovanni Gabutti: Working Group "Vaccines and Immunization Policies", Italian Society of Hygiene, Preventive Medicine and Public Health, 16030 Cogorno, Italy
- Paolo Castiglia: Department of Medical, Surgical and Experimental Sciences, University Hospital of Sassari, 07100 Sassari, Italy; Department of Medicine, Surgery and Pharmacy, University of Sassari, 07100 Sassari, Italy; Working Group "Vaccines and Immunization Policies", Italian Society of Hygiene, Preventive Medicine and Public Health, 16030 Cogorno, Italy
38. Alqahtani T, Badreldin HA, Alrashed M, Alshaya AI, Alghamdi SS, Bin Saleh K, Alowais SA, Alshaya OA, Rahman I, Al Yami MS, Albekairy AM. The emergent role of artificial intelligence, natural learning processing, and large language models in higher education and research. Res Social Adm Pharm 2023:S1551-7411(23)00280-2. [PMID: 37321925] [DOI: 10.1016/j.sapharm.2023.05.016]
Abstract
Artificial Intelligence (AI) has revolutionized various domains, including education and research. Natural language processing (NLP) techniques and large language models (LLMs) such as GPT-4 and BARD have significantly advanced our comprehension and application of AI in these fields. This paper provides an in-depth introduction to AI, NLP, and LLMs, discussing their potential impact on education and research. By exploring the advantages, challenges, and innovative applications of these technologies, this review gives educators, researchers, students, and readers a comprehensive view of how AI could shape educational and research practices in the future, ultimately leading to improved outcomes. Key applications discussed in the field of research include text generation, data analysis and interpretation, literature review, formatting and editing, and peer review. AI applications in academics and education include educational support and constructive feedback, assessment, grading, tailored curricula, personalized career guidance, and mental health support. Addressing the challenges associated with these technologies, such as ethical concerns and algorithmic biases, is essential for maximizing their potential to improve education and research outcomes. Ultimately, the paper aims to contribute to the ongoing discussion about the role of AI in education and research and highlight its potential to lead to better outcomes for students, educators, and researchers.
Affiliation(s)
- Tariq Alqahtani: Department of Pharmaceutical Sciences, College of Pharmacy, King Saud bin Abdulaziz University for Health Sciences, Saudi Arabia; King Abdullah International Medical Research Center, Riyadh, Saudi Arabia
- Hisham A Badreldin: King Abdullah International Medical Research Center, Riyadh, Saudi Arabia; Department of Pharmacy Practice, College of Pharmacy, King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia; Pharmaceutical Care Department, King Abdulaziz Medical City, National Guard Health Affairs, Riyadh, Saudi Arabia
- Mohammed Alrashed: King Abdullah International Medical Research Center, Riyadh, Saudi Arabia; Department of Pharmacy Practice, College of Pharmacy, King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia; Pharmaceutical Care Department, King Abdulaziz Medical City, National Guard Health Affairs, Riyadh, Saudi Arabia
- Abdulrahman I Alshaya: King Abdullah International Medical Research Center, Riyadh, Saudi Arabia; Department of Pharmacy Practice, College of Pharmacy, King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia; Pharmaceutical Care Department, King Abdulaziz Medical City, National Guard Health Affairs, Riyadh, Saudi Arabia
- Sahar S Alghamdi: Department of Pharmaceutical Sciences, College of Pharmacy, King Saud bin Abdulaziz University for Health Sciences, Saudi Arabia; King Abdullah International Medical Research Center, Riyadh, Saudi Arabia
- Khalid Bin Saleh: King Abdullah International Medical Research Center, Riyadh, Saudi Arabia; Department of Pharmacy Practice, College of Pharmacy, King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia; Pharmaceutical Care Department, King Abdulaziz Medical City, National Guard Health Affairs, Riyadh, Saudi Arabia
- Shuroug A Alowais: King Abdullah International Medical Research Center, Riyadh, Saudi Arabia; Department of Pharmacy Practice, College of Pharmacy, King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia; Pharmaceutical Care Department, King Abdulaziz Medical City, National Guard Health Affairs, Riyadh, Saudi Arabia
- Omar A Alshaya: King Abdullah International Medical Research Center, Riyadh, Saudi Arabia; Department of Pharmacy Practice, College of Pharmacy, King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia; Pharmaceutical Care Department, King Abdulaziz Medical City, National Guard Health Affairs, Riyadh, Saudi Arabia
- Ishrat Rahman: Department of Basic Dental Sciences, College of Dentistry, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh, 11671, Saudi Arabia
- Majed S Al Yami: King Abdullah International Medical Research Center, Riyadh, Saudi Arabia; Department of Pharmacy Practice, College of Pharmacy, King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia; Pharmaceutical Care Department, King Abdulaziz Medical City, National Guard Health Affairs, Riyadh, Saudi Arabia
- Abdulkareem M Albekairy: King Abdullah International Medical Research Center, Riyadh, Saudi Arabia; Department of Pharmacy Practice, College of Pharmacy, King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia; Pharmaceutical Care Department, King Abdulaziz Medical City, National Guard Health Affairs, Riyadh, Saudi Arabia
39. Obasa AE, Palk AC. Responsible application of artificial intelligence in health care. S Afr J Sci 2023; 119:14889. [PMID: 39328370] [PMCID: PMC11426230] [DOI: 10.17159/sajs.2023/14889]
Affiliation(s)
- Adetayo E Obasa: Centre for Medical Ethics and Law, WHO Bioethics Collaborating Centre, Department of Medicine, Stellenbosch University, Cape Town, South Africa
- Andrea C Palk: Unit for Bioethics, Centre for Applied Ethics, Philosophy Department, Stellenbosch University, Stellenbosch, South Africa
40. Ghosh A, Bir A. Evaluating ChatGPT's Ability to Solve Higher-Order Questions on the Competency-Based Medical Education Curriculum in Medical Biochemistry. Cureus 2023; 15:e37023. [PMID: 37143631] [PMCID: PMC10152308] [DOI: 10.7759/cureus.37023]
Abstract
Background Healthcare-related artificial intelligence (AI) is developing. The capacity of a system to carry out sophisticated cognitive processes, such as problem-solving, decision-making, reasoning, and perceiving, is referred to as higher cognitive thinking in AI. This kind of thinking requires more than just processing facts; it also entails comprehending and working with abstract ideas, evaluating and applying data relevant to the context, and producing new insights based on prior learning and experience. ChatGPT is an artificial intelligence-based conversational program that uses natural language processing models and can engage with people to answer questions. The platform has created a worldwide buzz and keeps setting an ongoing trend in solving complex problems across many dimensions. Nevertheless, ChatGPT's capacity to correctly respond to queries requiring higher-level thinking in medical biochemistry has not yet been investigated, so this research aimed to evaluate ChatGPT's aptitude for responding to higher-order questions in medical biochemistry. Objective In this study, our objective was to determine whether ChatGPT can address higher-order problems related to medical biochemistry. Methods This cross-sectional study was done online by conversing with the current version of ChatGPT (14 March 2023, which is presently free for registered users). It was presented with 200 medical biochemistry reasoning questions that require higher-order thinking. These questions were randomly picked from the institution's question bank and classified according to the Competency-Based Medical Education (CBME) curriculum's competency modules. The responses were collected and archived for subsequent research. Two expert biochemistry academicians examined the replies on a zero to five scale. The score's accuracy was determined by a one-sample Wilcoxon signed rank test using hypothetical values.
Result The AI software answered 200 questions requiring higher-order thinking with a median score of 4.0 (Q1=3.50, Q3=4.50). Using a one-sample Wilcoxon signed rank test, the score was significantly lower than the hypothetical maximum of five (p=0.001) and comparable to four (p=0.16). There was no difference in the replies to questions from different CBME modules in medical biochemistry (Kruskal-Wallis p=0.39). The inter-rater reliability of the scores given by the two biochemistry faculty members was outstanding (ICC=0.926 (95% CI: 0.814-0.971); F=19; p=0.001). Conclusion The results of this research indicate that ChatGPT has the potential to be a successful tool for answering questions requiring higher-order thinking in medical biochemistry, with a median score of four out of five. However, continuous training and development with data on recent advances are essential to improve performance and make it functional for the ever-growing field of academic medical usage.
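The one-sample Wilcoxon signed-rank test used here compares rater scores against a hypothetical median (5 = perfect, 4 = good) by testing the differences against zero. A minimal sketch, not the authors' code, with simulated illustrative scores:

```python
# One-sample Wilcoxon signed-rank test against hypothetical medians,
# as described in the abstract. Scores are simulated, not study data.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
scores = np.clip(rng.normal(4.0, 0.5, 200), 0, 5)  # 200 ratings on a 0-5 scale

results = {}
for hypothetical in (5.0, 4.0):
    diff = scores - hypothetical        # one-sample test via paired differences
    stat, p = wilcoxon(diff)            # H0: median(diff) == 0
    results[hypothetical] = p
    print(f"H0: median == {hypothetical}: W={stat:.1f}, p={p:.4g}")
```

With scores centered near 4, the test rejects a median of 5 but not a median of 4, mirroring the pattern reported in the abstract.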
41. Sallam M. ChatGPT Utility in Healthcare Education, Research, and Practice: Systematic Review on the Promising Perspectives and Valid Concerns. Healthcare (Basel) 2023; 11:887. [PMID: 36981544] [PMCID: PMC10048148] [DOI: 10.3390/healthcare11060887]
Abstract
ChatGPT is an artificial intelligence (AI)-based conversational large language model (LLM). The potential applications of LLMs in health care education, research, and practice could be promising if the associated valid concerns are proactively examined and addressed. The current systematic review aimed to investigate the utility of ChatGPT in health care education, research, and practice and to highlight its potential limitations. Following the PRISMA guidelines, a systematic search was conducted to retrieve English records in PubMed/MEDLINE and Google Scholar (published research or preprints) that examined ChatGPT in the context of health care education, research, or practice. A total of 60 records were eligible for inclusion. Benefits of ChatGPT were cited in 51/60 (85.0%) records and included: (1) improved scientific writing and enhancing research equity and versatility; (2) utility in health care research (efficient analysis of datasets, code generation, literature reviews, saving time to focus on experimental design, and drug discovery and development); (3) benefits in health care practice (streamlining the workflow, cost saving, documentation, personalized medicine, and improved health literacy); and (4) benefits in health care education including improved personalized learning and the focus on critical thinking and problem-based learning. Concerns regarding ChatGPT use were stated in 58/60 (96.7%) records including ethical, copyright, transparency, and legal issues, the risk of bias, plagiarism, lack of originality, inaccurate content with risk of hallucination, limited knowledge, incorrect citations, cybersecurity issues, and risk of infodemics. The promising applications of ChatGPT can induce paradigm shifts in health care education, research, and practice. However, the embrace of this AI chatbot should be conducted with extreme caution considering its potential limitations.
As it currently stands, ChatGPT does not qualify to be listed as an author in scientific articles unless the ICMJE/COPE guidelines are revised or amended. An initiative involving all stakeholders in health care education, research, and practice is urgently needed. This will help to set a code of ethics to guide the responsible use of ChatGPT among other LLMs in health care and academia.
Affiliation(s)
- Malik Sallam: Department of Pathology, Microbiology and Forensic Medicine, School of Medicine, The University of Jordan, Amman 11942, Jordan; Department of Clinical Laboratories and Forensic Medicine, Jordan University Hospital, Amman 11942, Jordan
42. Li C, Zhang Y, Niu X, Chen F, Zhou H. Does Artificial Intelligence Promote or Inhibit On-the-Job Learning? Human Reactions to AI at Work. Systems 2023; 11:114. [DOI: 10.3390/systems11030114]
Abstract
This paper examines how AI at work impacts on-the-job learning, shedding light on workers’ reactions to the groundbreaking AI technology. Based on theoretical analysis, six hypotheses are proposed regarding three aspects of AI’s influence on on-the-job learning. Empirical results demonstrate that AI significantly inhibits people’s on-the-job learning, and this conclusion holds true in a series of robustness and endogeneity checks. The impact mechanism is that AI makes workers more pessimistic about the future, leading to burnout and less motivation for on-the-job learning. In addition, AI’s replacement, mismatch, and deskilling effects decrease people’s income while extending working hours, reducing their available financial resources and disposable time for further learning. Moreover, AI’s impact on on-the-job learning is found to be more prominent for older, female, and less-educated employees, as well as those without labor contracts and with less job autonomy and work experience. In regions with more intense human–AI competition, more labor-management conflicts, and poorer labor protection, the inhibitory effect of AI on further learning is more pronounced. In the context of the fourth technological revolution driving forward the intelligent transformation, the findings of this paper have important implications for enterprises seeking to better understand employee behaviors and to help workers acquire new skills for better human–AI teaming.
Affiliation(s)
- Chao Li: Business School, Shandong University, Weihai 264209, China
- Yuhan Zhang: HSBC Business School, Peking University, Shenzhen 518055, China
- Xiaoru Niu: School of Mechanical, Electrical & Information Engineering, Shandong University, Weihai 264209, China
- Feier Chen: Business School, Shandong University, Weihai 264209, China
- Hongyan Zhou: Business School, Shandong University, Weihai 264209, China
43. Sinha RK, Deb Roy A, Kumar N, Mondal H. Applicability of ChatGPT in Assisting to Solve Higher Order Problems in Pathology. Cureus 2023; 15:e35237. [PMID: 36968864] [PMCID: PMC10033699] [DOI: 10.7759/cureus.35237]
Abstract
Background Artificial intelligence (AI) is evolving for healthcare services. Higher cognitive thinking in AI refers to the ability of a system to perform advanced cognitive processes, such as problem-solving, decision-making, reasoning, and perception. This type of thinking goes beyond simple data processing and involves the ability to understand and manipulate abstract concepts, interpret and use information in a contextually relevant way, and generate new insights based on past experiences and accumulated knowledge. ChatGPT is a conversational natural language processing program that can interact with humans to provide answers to queries. Objective We aimed to ascertain the capability of ChatGPT in solving higher-order reasoning questions in the subject of pathology. Methods This cross-sectional study was conducted on the internet using an AI-based chat program that provides free service for research purposes. The current version of ChatGPT (January 30 version) was used to converse with a total of 100 higher-order reasoning queries. These questions were randomly selected from the question bank of the institution and categorized according to different organ systems. The responses to each question were collected and stored for further analysis. The responses were evaluated by three expert pathologists on a zero to five scale and categorized according to the structure of the observed learning outcome (SOLO) taxonomy. The score was compared by a one-sample median test with hypothetical values to find its accuracy.
Result A total of 100 higher-order reasoning questions were solved by the program, with an average of 45.31±7.14 seconds per answer. The overall median score was 4.08 (Q1-Q3: 4-4.33), which was below the hypothetical maximum value of five (one-sample median test p < 0.0001) and similar to four (one-sample median test p = 0.14). The majority (86%) of the responses were in the "relational" category of the SOLO taxonomy. There was no difference in the scores of the responses for questions asked from various organ systems in the subject of pathology (Kruskal-Wallis p = 0.55). The scores rated by the three pathologists had an excellent level of inter-rater reliability (ICC = 0.975 [95% CI: 0.965-0.983]; F = 40.26; p < 0.0001). Conclusion The capability of ChatGPT to solve higher-order reasoning questions in pathology had a relational level of accuracy; the text output had connections among its parts to provide a meaningful response. The answers from the program can score approximately 80%, so academicians and students can also get help from the program for solving reasoning-type questions. As the program evolves, further studies are needed to determine the accuracy level of future versions.
44. Gago F. Computational Approaches to Enzyme Inhibition by Marine Natural Products in the Search for New Drugs. Mar Drugs 2023; 21:100. [PMID: 36827141] [PMCID: PMC9961086] [DOI: 10.3390/md21020100]
Abstract
The exploration of biologically relevant chemical space for the discovery of small bioactive molecules present in marine organisms has led not only to important advances in certain therapeutic areas, but also to a better understanding of many life processes. The still largely untapped reservoir of countless metabolites that play biological roles in marine invertebrates and microorganisms opens new avenues and poses new challenges for research. Computational technologies provide the means to (i) organize chemical and biological information in easily searchable and hyperlinked databases and knowledgebases; (ii) carry out cheminformatic analyses on natural products; (iii) mine microbial genomes for known and cryptic biosynthetic pathways; (iv) explore global networks that connect active compounds to their targets (often including enzymes); (v) solve structures of ligands, targets, and their respective complexes using X-ray crystallography and NMR techniques, thus enabling virtual screening and structure-based drug design; and (vi) build molecular models to simulate ligand binding and understand mechanisms of action in atomic detail. Marine natural products are viewed today not only as potential drugs, but also as an invaluable source of chemical inspiration for the development of novel chemotypes to be used in chemical biology and medicinal chemistry research.
Affiliation(s)
- Federico Gago: Department of Biomedical Sciences & IQM-CSIC Associate Unit, School of Medicine and Health Sciences, University of Alcalá, E-28805 Alcalá de Henares, Madrid, Spain
45. Ruschemeier H. AI as a challenge for legal regulation – the scope of application of the artificial intelligence act proposal. ERA Forum 2023. [PMCID: PMC9827441] [DOI: 10.1007/s12027-022-00725-6]
Abstract
The proposal for the Artificial Intelligence Act is the first comprehensive attempt to legally regulate AI. Not merely because of this pioneering role, the draft has been the subject of controversial debates about whether it uses the right regulatory technique, regarding its scope of application and whether it has sufficient protective effect. Moreover, systematic questions arise as to how the regulation of constantly evolving, dynamic technologies can succeed using the means of the law. The choice of the designation as Artificial Intelligence Act leads to legal-theoretical questions of concept formation as a legal method and legislative technique. This article examines the difficulties of regulating the concept of AI using the scope of the Artificial Intelligence Act as an example.
Affiliation(s)
- Hannah Ruschemeier: Rechtswissenschaftliche Fakultät, FernUniversität in Hagen, Universitätsstraße 4, 58097 Hagen, Germany
46. Wu C, Xu H, Bai D, Chen X, Gao J, Jiang X. Public perceptions on the application of artificial intelligence in healthcare: a qualitative meta-synthesis. BMJ Open 2023; 13:e066322. [PMID: 36599634] [PMCID: PMC9815015] [DOI: 10.1136/bmjopen-2022-066322]
Abstract
OBJECTIVES Medical artificial intelligence (AI) has been widely applied in the clinical field due to its convenience and innovation. However, several policy and regulatory issues, such as credibility, sharing of responsibility and ethics, have raised concerns about the use of AI. It is therefore necessary to understand the general public's views on medical AI. Here, a meta-synthesis was conducted to analyse and summarise the public's understanding of the application of AI in the healthcare field and to provide recommendations for the future use and management of AI in medical practice. DESIGN This was a meta-synthesis of qualitative studies. METHOD A search was performed on the following databases to identify studies published in English and Chinese: MEDLINE, CINAHL, Web of Science, Cochrane Library, Embase, PsycINFO, CNKI, Wanfang and VIP. The search was conducted from database inception to 25 December 2021. The meta-aggregation approach of JBI was used to summarise findings from qualitative studies, focusing on the public's perception of the application of AI in healthcare. RESULTS Of the 5128 studies screened, 12 met the inclusion criteria and were incorporated into the analysis. Three synthesised findings formed the basis of our conclusions: advantages of medical AI from the public's perspective, ethical and legal concerns about medical AI from the public's perspective, and public suggestions on the application of AI in the medical field. CONCLUSION Results showed that the public acknowledges the unique advantages and convenience of medical AI. Meanwhile, several concerns about the application of medical AI were observed, most of which involve ethical and legal issues. The standard application and reasonable supervision of medical AI is key to ensuring its effective utilisation.
Based on the public's perspective, this analysis provides insights and suggestions for health managers on how to implement and apply medical AI smoothly, while ensuring safety in healthcare practice. PROSPERO REGISTRATION NUMBER CRD42022315033.
Affiliation(s)
- Chenxi Wu: West China School of Nursing / West China Hospital, Sichuan University, Chengdu, Sichuan, China; School of Nursing, Chengdu University of Traditional Chinese Medicine, Chengdu, Sichuan, China
- Huiqiong Xu: West China School of Nursing, Sichuan University / Abdominal Oncology Ward, Cancer Center, West China Hospital, Sichuan University, Chengdu, Sichuan, China
- Dingxi Bai: School of Nursing, Chengdu University of Traditional Chinese Medicine, Chengdu, Sichuan, China
- Xinyu Chen: School of Nursing, Chengdu University of Traditional Chinese Medicine, Chengdu, Sichuan, China
- Jing Gao: School of Nursing, Chengdu University of Traditional Chinese Medicine, Chengdu, Sichuan, China
- Xiaolian Jiang: West China School of Nursing / West China Hospital, Sichuan University, Chengdu, Sichuan, China
47. Yogarajan V, Dobbie G, Leitch S, Keegan TT, Bensemann J, Witbrock M, Asrani V, Reith D. Data and model bias in artificial intelligence for healthcare applications in New Zealand. Frontiers in Computer Science 2022. [DOI: 10.3389/fcomp.2022.1070493]
Abstract
Introduction Developments in Artificial Intelligence (AI) are being adopted widely in healthcare. However, the introduction and use of AI may come with biases and disparities, resulting in concerns about healthcare access and outcomes for underrepresented indigenous populations. In New Zealand, Māori experience significant inequities in health compared to the non-Indigenous population. This research explores equity concepts and fairness measures concerning AI for healthcare in New Zealand. Methods This research considers data and model bias in NZ-based electronic health records (EHRs). Two very distinct NZ datasets are used, one obtained from one hospital and another from multiple GP practices, both collected by clinicians. To ensure research equality and fair inclusion of Māori, we combine expertise in Artificial Intelligence (AI), the New Zealand clinical context, and te ao Māori. The mitigation of inequity needs to be addressed in data collection, model development, and model deployment. In this paper, we analyze data and algorithmic bias concerning data collection and model development, training, and testing using health data collected by experts. We use fairness measures such as disparate impact scores, equal opportunity and equalized odds to analyze tabular data. Furthermore, token frequencies, statistical significance testing and fairness measures for word embeddings, such as the WEAT and WEFE frameworks, are used to analyze bias in free-form medical text. The AI model predictions are also explained using SHAP and LIME. Results This research analyzed fairness metrics for NZ EHRs while considering data and algorithmic bias. We show evidence of bias due to changes made in algorithmic design. Furthermore, we observe unintentional bias due to the underlying pre-trained models used to represent text data. This research addresses some vital issues while opening up the need and opportunity for future research. Discussion This research takes early steps toward developing a model of socially responsible and fair AI for New Zealand's population. We provide an overview of reproducible concepts that can be adopted for any NZ population data. Furthermore, we discuss the gaps and future research avenues that will enable more focused development of fairness measures suitable for the New Zealand population's needs and social structure. One of the primary focuses of this research was ensuring fair inclusion. As such, we combine expertise in AI, clinical knowledge, and the representation of indigenous populations. This inclusion of experts will be vital moving forward, providing a stepping stone toward the integration of AI for better outcomes in healthcare.
48
Marmolejo-Ramos F, Workman T, Walker C, Lenihan D, Moulds S, Correa JC, Hanea AM, Sonna B. AI-powered narrative building for facilitating public participation and engagement. DISCOVER ARTIFICIAL INTELLIGENCE 2022. [PMCID: PMC8967379 DOI: 10.1007/s44163-022-00023-7] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 12/04/2022]
Abstract
Algorithms, data, and AI (ADA) technologies permeate most societies worldwide because of their proven benefits in different areas of life. Governments are the entities in charge of harnessing the benefits of ADA technologies above and beyond providing government services digitally. ADA technologies have the potential to transform the way governments develop and deliver services to citizens, and the way citizens engage with their governments. Conventional public engagement strategies employed by governments have limited both the quality and diversity of deliberation between the citizen and their governments, and the potential for ADA technologies to be employed to improve the experience for both governments and the citizens they serve. In this article we argue that ADA technologies can improve the quality, scope, and reach of public engagement by governments, particularly when coupled with other strategies to ensure legitimacy and accessibility among a broad range of communities and other stakeholders. In particular, we explore the role “narrative building” (NB) can play in facilitating public engagement through the use of ADA technologies. We describe a theoretical implementation of NB enhanced by adding natural language processing, expert knowledge elicitation, and semantic differential rating scales capabilities to increase gains in scale and reach. The theoretical implementation focuses on the public’s opinion on ADA-related technologies, and it derives implications for ethical governance.
Affiliation(s)
- Fernando Marmolejo-Ramos
- Centre for Change and Complexity in Learning, The University of South Australia, Adelaide, SA 5000, Australia
- Don Lenihan
- Middle Ground Policy Research CA, Ottawa, Canada
- Sarah Moulds
- UniSA Justice & Society, The University of South Australia, Adelaide, Australia
- Anca M. Hanea
- Ecosystem and Forest Sciences, University of Melbourne, Melbourne, Australia
- Belona Sonna
- African Institute for Mathematical Sciences, Kigali, Rwanda
49
Devedzic V. Identity of AI. DISCOVER ARTIFICIAL INTELLIGENCE 2022. [DOI: 10.1007/s44163-022-00038-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
With the explosion of Artificial Intelligence (AI) as an area of study and practice, it has gradually become very difficult to mark its boundaries precisely and specify what exactly it encompasses. Many other areas of study are interwoven with AI, and new research and development topics that require an interdisciplinary approach frequently attract attention. In addition, several AI subfields and topics are home to long-standing controversies that give rise to seemingly never-ending debates, further obfuscating the entire area of AI and making its boundaries even more indistinct. To tackle such problems in a systematic way, this paper introduces the concept of the identity of AI (viewed as an area of study) and discusses its dynamics, controversies, contradictions, and opposing opinions and approaches, coming from different sources and stakeholders. The concept of the identity of AI emerges as a set of characteristics that shape the current outlook on AI from epistemological, philosophical, ethical, technological, and social perspectives.
50
Humble N, Mozelius P. The threat, hype, and promise of artificial intelligence in education. DISCOVER ARTIFICIAL INTELLIGENCE 2022. [DOI: 10.1007/s44163-022-00039-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
The idea of building intelligent machines has been around for centuries, with a new wave of promising artificial intelligence (AI) in the twenty-first century. Artificial Intelligence in Education (AIED) is a younger phenomenon that has created hype and promises, but has also been seen as a threat by critical voices. There have been rich discussions on over-optimism and hype in contemporary AI research; less has been written about the hyped expectations of AIED and its potential to transform current education. There is huge potential for efficiency and cost reduction, but there are also concerns about the quality of education and the role of the teacher. The aim of the study is to identify potential aspects of threat, hype, and promise in artificial intelligence for education. A scoping literature review was conducted to gather relevant state-of-the-art research in the field of AIED. The main keywords used in the literature search were: artificial intelligence, artificial intelligence in education, AI, AIED, teacher perspective, education, and teacher. Data were analysed with the SWOT framework as the theoretical lens for a thematic analysis. The study identifies a wide variety of strengths, weaknesses, opportunities, and threats for artificial intelligence in education. Findings suggest that there are several important questions to discuss and address in future research, such as: What should the role of the teacher be in education with AI? How does AI align with pedagogical goals and beliefs? And how can the potential leakage and misuse of user data be handled when AIED systems are developed by for-profit organisations?