1
El-Tallawy SN, Pergolizzi JV, Vasiliu-Feltes I, Ahmed RS, LeQuang JK, El-Tallawy HN, Varrassi G, Nagiub MS. Incorporation of "Artificial Intelligence" for Objective Pain Assessment: A Comprehensive Review. Pain Ther 2024; 13:293-317. PMID: 38430433; PMCID: PMC11111436; DOI: 10.1007/s40122-024-00584-8. Open access.
Abstract
Pain is a significant health issue, and pain assessment is essential for proper diagnosis, follow-up, and effective management of pain. Conventional methods of pain assessment often suffer from subjectivity and variability, and a central challenge is to better understand how people experience pain. In recent years, artificial intelligence (AI) has played a growing role in improving clinical diagnosis and decision-making, and its application offers promising opportunities to improve the accuracy and efficiency of pain assessment. This review article provides an overview of the current state of AI in pain assessment and explores its potential for improving accuracy, efficiency, and personalized care. By examining the existing literature, research gaps, and future directions, this article aims to guide further advancements in the field of pain management. An online database search was conducted via multiple websites to identify relevant articles. The inclusion criteria were English-language articles published between January 2014 and January 2024 that were available as full-text clinical trials, observational studies, review articles, systematic reviews, and meta-analyses. The exclusion criteria were articles not in English, articles not available as free full text, those involving pediatric patients, case reports, and editorials. A total of 47 articles were included in this review. In conclusion, the application of AI in pain management could present promising solutions for pain assessment, potentially increasing the accuracy, precision, and efficiency of objective pain assessment.
Affiliation(s)
- Salah N El-Tallawy
- Anesthesia and Pain Department, College of Medicine, King Khalid University Hospital, King Saud University, Riyadh, Saudi Arabia.
- Anesthesia and Pain Department, Faculty of Medicine, Minia University & NCI, Cairo University, Giza, Egypt.
- Ingrid Vasiliu-Feltes
- Science, Entrepreneurship and Investments Institute, University of Miami, Miami, USA
- Rania S Ahmed
- College of Medicine, Alfaisal University, Riyadh, Saudi Arabia
2
Liu Z, Zhang L, Wu Z, Yu X, Cao C, Dai H, Liu N, Liu J, Liu W, Li Q, Shen D, Li X, Zhu D, Liu T. Surviving ChatGPT in healthcare. Front Radiol 2024; 3:1224682. PMID: 38464946; PMCID: PMC10920216; DOI: 10.3389/fradi.2023.1224682.
Abstract
At the dawn of Artificial General Intelligence (AGI), the emergence of large language models such as ChatGPT shows promise in revolutionizing healthcare by improving patient care, expanding medical access, and optimizing clinical processes. However, their integration into healthcare systems requires careful consideration of potential risks, such as inaccurate medical advice, patient privacy violations, the creation of falsified documents or images, overreliance on AGI in medical education, and the perpetuation of biases. Proper oversight and regulation are crucial to address these risks and to ensure the safe and effective incorporation of AGI technologies into healthcare systems. By acknowledging and mitigating these challenges, AGI can be harnessed to enhance patient care, medical knowledge, and healthcare processes, ultimately benefiting society as a whole.
Affiliation(s)
- Zhengliang Liu
- School of Computing, University of Georgia, Athens, GA, United States
- Lu Zhang
- Department of Computer Science and Engineering, The University of Texas at Arlington, Arlington, TX, United States
- Zihao Wu
- School of Computing, University of Georgia, Athens, GA, United States
- Xiaowei Yu
- Department of Computer Science and Engineering, The University of Texas at Arlington, Arlington, TX, United States
- Chao Cao
- Department of Computer Science and Engineering, The University of Texas at Arlington, Arlington, TX, United States
- Haixing Dai
- School of Computing, University of Georgia, Athens, GA, United States
- Ninghao Liu
- School of Computing, University of Georgia, Athens, GA, United States
- Jun Liu
- Department of Radiology, Second Xiangya Hospital, Changsha, Hunan, China
- Wei Liu
- Department of Radiation Oncology, Mayo Clinic, Scottsdale, AZ, United States
- Quanzheng Li
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, United States
- Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Shanghai Clinical Research and Trial Center, Shanghai, China
- Xiang Li
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, United States
- Dajiang Zhu
- Department of Computer Science and Engineering, The University of Texas at Arlington, Arlington, TX, United States
- Tianming Liu
- School of Computing, University of Georgia, Athens, GA, United States
3
Liao W, Liu Z, Dai H, Xu S, Wu Z, Zhang Y, Huang X, Zhu D, Cai H, Li Q, Liu T, Li X. Differentiating ChatGPT-Generated and Human-Written Medical Texts: Quantitative Study. JMIR Med Educ 2023; 9:e48904. PMID: 38153785; PMCID: PMC10784984; DOI: 10.2196/48904.
Abstract
BACKGROUND Large language models, such as ChatGPT, are capable of generating grammatically perfect and human-like text content, and a large number of ChatGPT-generated texts have appeared on the internet. However, medical texts, such as clinical notes and diagnoses, require rigorous validation, and erroneous medical content generated by ChatGPT could potentially lead to disinformation that poses significant harm to health care and the general public. OBJECTIVE This study is among the first on responsible artificial intelligence-generated content in medicine. We focus on analyzing the differences between medical texts written by human experts and those generated by ChatGPT and on designing machine learning workflows to effectively detect and differentiate medical texts generated by ChatGPT. METHODS We first constructed a suite of data sets containing medical texts written by human experts and generated by ChatGPT. We analyzed the linguistic features of these 2 types of content and uncovered differences in vocabulary, part-of-speech usage, dependency, sentiment, perplexity, and other aspects. Finally, we designed and implemented machine learning methods to detect medical text generated by ChatGPT. The data and code used in this paper are published on GitHub. RESULTS Medical texts written by humans were more concrete, more diverse, and typically contained more useful information, while medical texts generated by ChatGPT paid more attention to fluency and logic and usually expressed general terminology rather than effective information specific to the context of the problem. A bidirectional encoder representations from transformers (BERT)-based model effectively detected medical texts generated by ChatGPT, with an F1 score exceeding 95%. CONCLUSIONS Although text generated by ChatGPT is grammatically perfect and human-like, the linguistic characteristics of generated medical texts differed from those written by human experts. Medical text generated by ChatGPT could be effectively detected by the proposed machine learning algorithms. This study provides a pathway toward trustworthy and accountable use of large language models in medicine.
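The detection approach summarized in this abstract can be illustrated with a toy sketch. The study reports that human-written medical texts were more lexically diverse than ChatGPT-generated ones; the snippet below (a minimal sketch, not the study's BERT-based detector) uses a single type-token-ratio feature with an illustrative threshold to show the shape of a feature-based classifier. The function names and the 0.7 cutoff are assumptions for illustration only.

```python
# Toy sketch of feature-based detection of generated vs. human text.
# ASSUMPTION: a single lexical-diversity feature and a 0.7 threshold,
# chosen purely for illustration; the study itself combined many
# linguistic features and fine-tuned a BERT classifier.

def type_token_ratio(text: str) -> float:
    """Lexical diversity: unique tokens divided by total tokens."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    return len(set(tokens)) / len(tokens)

def classify(text: str, threshold: float = 0.7) -> str:
    """Flag low-diversity text as machine-generated (illustrative rule)."""
    return "human" if type_token_ratio(text) >= threshold else "generated"
```

For example, a highly repetitive passage such as "pain pain pain relief" has a ratio of 0.5 and would be flagged, while a varied clinical sentence scores close to 1.0. A real detector would combine many such features, or fine-tune a transformer as the study did.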
Affiliation(s)
- Wenxiong Liao
- School of Computer Science and Engineering, South China University of Technology, Guangzhou, China
- Zhengliang Liu
- School of Computing, University of Georgia, Athens, GA, United States
- Haixing Dai
- School of Computing, University of Georgia, Athens, GA, United States
- Shaochen Xu
- School of Computing, University of Georgia, Athens, GA, United States
- Zihao Wu
- School of Computing, University of Georgia, Athens, GA, United States
- Yiyang Zhang
- School of Computer Science and Engineering, South China University of Technology, Guangzhou, China
- Xiaoke Huang
- School of Computer Science and Engineering, South China University of Technology, Guangzhou, China
- Dajiang Zhu
- Department of Computer Science and Engineering, University of Texas at Arlington, Arlington, TX, United States
- Hongmin Cai
- School of Computer Science and Engineering, South China University of Technology, Guangzhou, China
- Quanzheng Li
- Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
- Tianming Liu
- School of Computing, University of Georgia, Athens, GA, United States
- Xiang Li
- Department of Radiology, Massachusetts General Hospital, Boston, MA, United States