1. Suárez A, Jiménez J, Llorente de Pedro M, Andreu-Vázquez C, Díaz-Flores García V, Gómez Sánchez M, Freire Y. Beyond the Scalpel: Assessing ChatGPT's potential as an auxiliary intelligent virtual assistant in oral surgery. Comput Struct Biotechnol J 2024; 24:46-52. PMID: 38162955; PMCID: PMC10755495; DOI: 10.1016/j.csbj.2023.11.058.
Abstract
AI has revolutionized the way we interact with technology. Noteworthy advances in AI algorithms and large language models (LLMs) have led to the development of natural generative language (NGL) systems such as ChatGPT. Although these LLMs can simulate human conversations and generate content in real time, they face challenges related to the topicality and accuracy of the information they generate. This study aimed to assess whether ChatGPT-4 could provide accurate and reliable answers to general dentists in the field of oral surgery, and thus to explore its potential as an intelligent virtual assistant in clinical decision making in oral surgery. Thirty questions related to oral surgery were posed to ChatGPT-4, each repeated 30 times, yielding a total of 900 responses. Two surgeons graded the answers according to the guidelines of the Spanish Society of Oral Surgery, using a three-point Likert scale (correct, partially correct/incomplete, and incorrect). Disagreements were arbitrated by an experienced oral surgeon, who provided the final grade. Accuracy was found to be 71.7%, and the consistency of the experts' grading across iterations ranged from moderate to almost perfect. ChatGPT-4, with its potential capabilities, will inevitably be integrated into dental disciplines, including oral surgery. In the future, it could be considered an auxiliary intelligent virtual assistant, though it would never replace oral surgery experts. Proper training and expert-verified information will remain vital to the implementation of the technology. More comprehensive research is needed to ensure the safe and successful application of AI in oral surgery.
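The study's headline figures reduce to standard grading statistics: response-level accuracy on the three-point scale and chance-corrected agreement between raters. As a rough sketch of that kind of computation (the grades below are hypothetical, not data from the study; the paper does not publish its scoring code):

```python
from collections import Counter

def accuracy(grades):
    # Proportion of responses graded "correct" on the three-point scale.
    return sum(g == "correct" for g in grades) / len(grades)

def cohens_kappa(rater_a, rater_b):
    # Chance-corrected agreement between two raters over the same items.
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum((ca[l] / n) * (cb[l] / n) for l in set(ca) | set(cb))
    return (observed - expected) / (1 - expected)

# Hypothetical grades for six responses from two raters
a = ["correct", "correct", "partial", "incorrect", "correct", "partial"]
b = ["correct", "partial", "partial", "incorrect", "correct", "correct"]
print(accuracy(a))                    # → 0.5
print(round(cohens_kappa(a, b), 3))   # → 0.455
```

Kappa values are conventionally read on the Landis–Koch scale, where the "moderate to almost perfect" range reported above corresponds roughly to 0.41–1.00.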
Affiliation(s)
- Ana Suárez
- Department of Pre-Clinic Dentistry, Faculty of Biomedical and Health Sciences, Universidad Europea de Madrid, Calle Tajo s/n, Villaviciosa de Odón, 28670 Madrid, Spain
- Jaime Jiménez
- Department of Clinic Dentistry, Faculty of Biomedical and Health Sciences, Universidad Europea de Madrid, Calle Tajo s/n, Villaviciosa de Odón, 28670 Madrid, Spain
- María Llorente de Pedro
- Department of Pre-Clinic Dentistry, Faculty of Biomedical and Health Sciences, Universidad Europea de Madrid, Calle Tajo s/n, Villaviciosa de Odón, 28670 Madrid, Spain
- Cristina Andreu-Vázquez
- Department of Veterinary Medicine, Faculty of Biomedical and Health Sciences, Universidad Europea de Madrid, Calle Tajo s/n, Villaviciosa de Odón, 28670 Madrid, Spain
- Víctor Díaz-Flores García
- Department of Pre-Clinic Dentistry, Faculty of Biomedical and Health Sciences, Universidad Europea de Madrid, Calle Tajo s/n, Villaviciosa de Odón, 28670 Madrid, Spain
- Margarita Gómez Sánchez
- Department of Pre-Clinic Dentistry, Faculty of Biomedical and Health Sciences, Universidad Europea de Madrid, Calle Tajo s/n, Villaviciosa de Odón, 28670 Madrid, Spain
- Yolanda Freire
- Department of Pre-Clinic Dentistry, Faculty of Biomedical and Health Sciences, Universidad Europea de Madrid, Calle Tajo s/n, Villaviciosa de Odón, 28670 Madrid, Spain
2. Peschel E, Krotsetis S, Seidlein AH, Nydahl P. Opening Pandora's box by generating ICU diaries through artificial intelligence: A hypothetical study protocol. Intensive Crit Care Nurs 2024; 82:103661. PMID: 38394982; DOI: 10.1016/j.iccn.2024.103661.
Abstract
BACKGROUND Patients and families in Intensive Care Units (ICUs) benefit from ICU diaries, which enhance their coping and their understanding of their experiences. Staff shortages and limited time severely restrict the use of ICU diaries. To counteract this limitation, generating diary entries from medical and nursing records using artificial intelligence (AI) might be a solution. DESIGN AND PURPOSE Protocol for a hypothetical multi-center, mixed-methods study to assess the usability and impact of AI-generated ICU diaries compared with hand-written diaries. METHOD A hand-written ICU diary will be kept by trained nursing staff and families for patients with an expected length of stay ≥ 72 h. Additionally, at discharge, the medical and nursing records will be analyzed by AI software, transformed into understandable, empathic diary entries, and printed as a diary. At an appointment with patients within 3 months, the diaries will be read in randomized order by trained clinicians together with the patients and families. Patients and families will then be interviewed about their experiences of reading both diaries, and the usability of the diaries will be evaluated by questionnaire. EXPECTED FINDINGS AND RESULTS Patients and families describe the similarities and differences in the language and content of the two diaries. They may also express concerns about diary generation and data processing by AI. IMPLICATIONS FOR PRACTICE Professional nursing involves empathic communication, patient-centered care, and evidence-based interventions. Diaries, beneficial for ICU patients and families, could potentially be generated by AI, raising ethical and professional questions about AI's role in complementing or substituting nurses in diary writing. CONCLUSIONS Generating AI-based entries for ICU diaries is feasible, but it raises serious questions about nursing ethics, empathy, data protection, and the values of professional nurses. Researchers and developers should discuss these questions in detail before starting such projects and opening Pandora's box, which can never be closed afterwards.
Affiliation(s)
- Ella Peschel
- University Hospital of Schleswig-Holstein, Kiel, Germany
- Peter Nydahl
- University Hospital of Schleswig-Holstein, Nursing Research and Development, Kiel, Germany; Nursing Science and Development, Paracelsus Medical University, Salzburg, Austria
3. Mellander H, Hillal A, Ullberg T, Wassélius J. Evaluation of CINA® LVO artificial intelligence software for detection of large vessel occlusion in brain CT angiography. Eur J Radiol Open 2024; 12:100542. PMID: 38188638; PMCID: PMC10764253; DOI: 10.1016/j.ejro.2023.100542.
Abstract
Objective To systematically evaluate the ability of the CINA® LVO software to detect large vessel occlusions eligible for mechanical thrombectomy on CTA, using conventional neuroradiological assessment as the gold standard. Methods Retrospectively, two hundred consecutive patients referred for brain CTA and two hundred patients who had undergone endovascular thrombectomy, with an accessible preceding CTA, were assessed for large vessel occlusions (LVOs) using the CINA® LVO software. The patients were sub-grouped by occlusion site. The original radiology report was used as ground truth, and cases with disagreement were reassessed. Two-by-two tables were created and measures of LVO detection were calculated. Results A total of four hundred patients were included; 221 LVOs were present in 215 patients (54%). The overall specificity was high for LVOs in the anterior circulation (93%). The overall sensitivity for LVOs in the anterior circulation was 54%, with the highest sensitivity for the M1 segment of the middle cerebral artery (87%) and T-type internal carotid occlusions (84%). The sensitivity was low for occlusions in the M2 segment of the middle cerebral artery (13% and 0% for proximal and distal M2 occlusions, respectively) and for posterior circulation occlusions (0%; not included in the intended use of the software). Conclusions LVO detection sensitivity of the CINA® LVO software differs largely depending on the location of the occlusion, with low sensitivity for some LVOs potentially eligible for mechanical thrombectomy. Further development of the software to increase sensitivity at all LVO locations would increase its clinical usefulness.
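The sensitivity and specificity figures above are derived from two-by-two tables of software output versus the radiology ground truth. A minimal sketch of that computation, using hypothetical per-patient flags rather than the study's data:

```python
def two_by_two(pred, truth):
    # Cross-tabulate software detections against ground-truth reads.
    tp = sum(p and t for p, t in zip(pred, truth))          # true positives
    fp = sum(p and not t for p, t in zip(pred, truth))      # false positives
    fn = sum(not p and t for p, t in zip(pred, truth))      # false negatives
    tn = sum(not p and not t for p, t in zip(pred, truth))  # true negatives
    return tp, fp, fn, tn

# Hypothetical flags for six patients (1 = LVO detected/present)
software = [1, 1, 0, 0, 1, 0]
report   = [1, 0, 0, 1, 1, 0]

tp, fp, fn, tn = two_by_two(software, report)
sensitivity = tp / (tp + fn)  # fraction of true LVOs the software flags
specificity = tn / (tn + fp)  # fraction of LVO-free studies left unflagged
```

Sub-grouping by occlusion site, as the study does, amounts to building one such table per site and reporting per-site sensitivity.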
Affiliation(s)
- Helena Mellander
- Diagnostic Radiology, Department of Neuroradiology and Odontology, Center for Medical Imaging and Physiology, Skåne University Hospital, Lund, Sweden
- Department of Clinical Sciences, Lund University, Lund, Sweden
- Amir Hillal
- Diagnostic Radiology, Department of Neuroradiology and Odontology, Center for Medical Imaging and Physiology, Skåne University Hospital, Lund, Sweden
- Department of Clinical Sciences, Lund University, Lund, Sweden
- Teresa Ullberg
- Diagnostic Radiology, Department of Neuroradiology and Odontology, Center for Medical Imaging and Physiology, Skåne University Hospital, Lund, Sweden
- Department of Clinical Sciences, Lund University, Lund, Sweden
- Johan Wassélius
- Diagnostic Radiology, Department of Neuroradiology and Odontology, Center for Medical Imaging and Physiology, Skåne University Hospital, Lund, Sweden
- Department of Clinical Sciences, Lund University, Lund, Sweden
4. Grippaudo F, Nigrelli S, Patrignani A, Ribuffo D. Quality of the Information provided by ChatGPT for Patients in Breast Plastic Surgery: Are we already in the future? JPRAS Open 2024; 40:99-105. PMID: 38444627; PMCID: PMC10914413; DOI: 10.1016/j.jpra.2024.02.001.
Abstract
Introduction In recent years, artificial intelligence (AI) has gained popularity, even in the field of plastic surgery. It is increasingly common for patients to use the internet to gather information about plastic surgery, and AI-based chatbots, such as ChatGPT, could be employed to answer patients' questions. The aim of this study was to evaluate the quality of the medical information provided by ChatGPT regarding three of the most common procedures in breast plastic surgery: breast reconstruction, breast reduction, and augmentation mammaplasty. Methods The quality of information was evaluated with the expanded EQIP scale. Responses were collected from a pool of ten resident doctors in plastic surgery and then processed with SPSS software ver. 28.0. Results The analysis of the content provided by ChatGPT revealed sufficient quality of information across all selected topics, with a strongly uneven distribution of scores across the different items. There was a critical lack in the "Information data" field (0/6 score in all three investigations) but a very high overall evaluation for the "Structure data" field (>7/11 in all three investigations). Conclusion Currently, AI serves as a valuable tool for patients; however, engineers and developers must address certain critical issues. Models like ChatGPT may play an important role in improving patients' awareness of medical procedures and surgical interventions in the future, but their role must be considered ancillary to that of surgeons.
Affiliation(s)
- F.R. Grippaudo
- Department of Plastic Reconstructive and Aesthetic Surgery, Policlinico Umberto I, Sapienza University of Rome, Viale del Policlinico 155, 00161, Rome, Italy
- S. Nigrelli
- Department of Plastic Reconstructive and Aesthetic Surgery, Policlinico Umberto I, Sapienza University of Rome, Viale del Policlinico 155, 00161, Rome, Italy
- A. Patrignani
- Department of Plastic Reconstructive and Aesthetic Surgery, Policlinico Umberto I, Sapienza University of Rome, Viale del Policlinico 155, 00161, Rome, Italy
- D. Ribuffo
- Department of Plastic Reconstructive and Aesthetic Surgery, Policlinico Umberto I, Sapienza University of Rome, Viale del Policlinico 155, 00161, Rome, Italy
5. Jitkajornwanich K, Vijaranakul N, Jaiyen S, Srestasathiern P, Lawawirojwong S. Enhancing risk communication and environmental crisis management through satellite imagery and AI for air quality index estimation. MethodsX 2024; 12:102611. PMID: 38420115; PMCID: PMC10901142; DOI: 10.1016/j.mex.2024.102611.
Abstract
Due to climate change, the air pollution problem has become more and more prominent [23]. Air pollution affects people globally and is considered one of the leading risk factors for premature death worldwide, ranked number 4 according to [24]. The study 'The Global Burden of Disease' reported 4,506,193 deaths caused by outdoor air pollution in 2019 [22,25]. The air pollution problem becomes even more apparent in developing countries [22], including Thailand [26]. In this research, we focus on and analyze air pollution in Thailand, whose annual average PM2.5 (particulate matter 2.5) concentration falls between 15 and 25 µg/m³, classified as interim target 2 by the 2021 WHO AQG (World Health Organization Air Quality Guidelines) [27]. (The interim targets refer to areas where the air pollutant concentration is high, with 1 being the highest concentration, decreasing down to 4 [27,28].) However, the methodology proposed here can be adopted in other areas as well. During the winter in Thailand, Bangkok and its surrounding metroplex face air pollution (e.g., PM2.5) every year. Currently, air quality measurement is done simply by installing physical measurement devices at a designated but limited number of locations. In this work, we propose a method that estimates the Air Quality Index (AQI) on a larger scale by utilizing Landsat 8 images with machine learning techniques, and we compare hybrid models with pure regression models to enhance AQI prediction based on satellite images. Our hybrid model consists of two parts, a classification part and an estimation part, whereas the pure regressor model consists of only one part, a pure regression model for AQI estimation. The two parts of the hybrid model work hand in hand: the classification part classifies data points into a class of the air quality standard, which is then passed to the estimation part to estimate the final AQI. From our experiments, after considering all factors and comparing their performances, we conclude that the hybrid model performs slightly better than the pure regressor model, although both models achieve an acceptable R2 (R2 > 0.7). We also introduced an additional factor, DOY (day of year), and incorporated it into our model. Additional experiments with similar approaches were also performed and compared, and the results show that our hybrid model outperforms them. Keywords: climate change, air pollution, air quality assessment, air quality index, AQI, machine learning, AI, Landsat 8, satellite imagery analysis, environmental data analysis, natural disaster monitoring and management, crisis and disaster management and communication.
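The two-stage design described above (classify into an air-quality band, then estimate within the band) can be sketched in a few lines. The example below is a toy illustration, not the authors' model: it uses a single hypothetical predictor, a threshold classifier, and a per-band mean as the "estimation part", solely to show why a classification stage followed by per-class estimation can beat one global regressor on banded data, with goodness of fit measured by R² as in the paper:

```python
def r2(y_true, y_pred):
    # Coefficient of determination: 1 - residual SS / total SS.
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

# Toy data: one predictor (e.g. a band reflectance) and observed AQI
x   = [10, 20, 30, 60, 70, 80]
aqi = [20, 25, 30, 80, 85, 90]

# Classification part: assign each point to an air-quality band
band = [0 if v < 50 else 1 for v in x]

# Estimation part: a per-band predictor (here simply the band mean)
band_mean = {b: sum(a for a, g in zip(aqi, band) if g == b) / band.count(b)
             for b in set(band)}
hybrid_pred = [band_mean[b] for b in band]

# Pure regressor baseline: one global predictor (the overall mean)
global_mean = sum(aqi) / len(aqi)
pure_pred = [global_mean] * len(aqi)

print(r2(aqi, hybrid_pred), r2(aqi, pure_pred))
```

On this deliberately banded toy data the hybrid pipeline fits far better than the single global predictor, which is the intuition behind the paper's comparison.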
Affiliation(s)
- Kulsawasd Jitkajornwanich
- Department of Computer Science, School of Science, King Mongkut's Institute of Technology Ladkrabang (KMITL), Bangkok 10520, Thailand
- Nattadet Vijaranakul
- College of Media and Communication, Texas Tech University, Lubbock, TX 79409, USA
- Saichon Jaiyen
- School of Information Technology, King Mongkut's University of Technology Thonburi (KMUTT), Bangkok 10140, Thailand
- Panu Srestasathiern
- Geo-Informatics and Space Technology Development Agency, GISTDA (Public Organization), Bangkok 10210, Thailand
- Siam Lawawirojwong
- Geo-Informatics and Space Technology Development Agency, GISTDA (Public Organization), Bangkok 10210, Thailand
6. Chen D, Yu MQ, Li QJ, He X, Liu F, Shen JF. Precise tooth design using deep learning-based templates. J Dent 2024; 144:104971. PMID: 38548165; DOI: 10.1016/j.jdent.2024.104971.
Abstract
OBJECTIVES In prosthodontic procedures, traditional computer-aided design (CAD) is often time-consuming and lacks accuracy in shape restoration. In this study, we combined an implicit template and deep learning (DL) to construct a precise neural network for personalized restoration of tooth defects. METHODS Ninety models of the right maxillary central incisor (80 for training, 10 for validation) were collected. A DL model named ToothDIT was trained to establish an implicit template and a neural network capable of predicting unique identifications. In the validation stage, teeth in the validation set were processed into corner, incisive, and medium defects. The defective teeth were input into ToothDIT to predict the unique identification, which drove the deformation of the implicit template to generate a highly customized template (DIT) for the target tooth. Morphological restorations were executed with templates from the template shape library (TSL), the average tooth template (ATT), and DIT in Exocad (Exocad GmbH, Germany). RMSestimate, width, length, aspect ratio, incisal edge curvature, incisive end retraction, and guiding inclination were introduced to assess restorative accuracy. Statistical analysis was conducted using two-way ANOVA and paired t-tests for overall and detailed differences. RESULTS DIT displayed a significantly smaller RMSestimate than TSL and ATT. In the 2D detailed analysis, DIT exhibited significantly smaller deviations from the natural teeth than TSL and ATT. CONCLUSION The proposed DL model successfully reconstructed the morphology of anterior teeth with various degrees of defect and achieved satisfactory accuracy. This approach provides a more reliable reference for prosthesis design, resulting in enhanced accuracy in morphological restoration. CLINICAL SIGNIFICANCE This DL model holds promise for assisting dentists and technicians in obtaining morphology templates that closely resemble the original shape of defective teeth. These customized templates serve as a foundation for enhancing the efficiency and precision of digital restorative design for defective teeth.
Affiliation(s)
- Du Chen
- State Key Laboratory of Oral Diseases, National Clinical Research Center for Oral Diseases, National Center for Stomatology, West China School of Stomatology, Sichuan University, Chengdu 610041, PR China; Department of Prosthodontics, West China Hospital of Stomatology, Sichuan University, Chengdu 610041, PR China
- Mei-Qi Yu
- State Key Laboratory of Oral Diseases, National Clinical Research Center for Oral Diseases, National Center for Stomatology, West China School of Stomatology, Sichuan University, Chengdu 610041, PR China; Department of Prosthodontics, West China Hospital of Stomatology, Sichuan University, Chengdu 610041, PR China
- Qi-Jing Li
- State Key Laboratory of Oral Diseases, National Clinical Research Center for Oral Diseases, National Center for Stomatology, West China School of Stomatology, Sichuan University, Chengdu 610041, PR China; Department of Prosthodontics, West China Hospital of Stomatology, Sichuan University, Chengdu 610041, PR China
- Xiang He
- College of Computer Science, Sichuan University, Chengdu 610065, PR China
- Fei Liu
- State Key Laboratory of Oral Diseases, National Clinical Research Center for Oral Diseases, National Center for Stomatology, West China School of Stomatology, Sichuan University, Chengdu 610041, PR China; Department of Prosthodontics, West China Hospital of Stomatology, Sichuan University, Chengdu 610041, PR China
- Jie-Fei Shen
- State Key Laboratory of Oral Diseases, National Clinical Research Center for Oral Diseases, National Center for Stomatology, West China School of Stomatology, Sichuan University, Chengdu 610041, PR China; Department of Prosthodontics, West China Hospital of Stomatology, Sichuan University, Chengdu 610041, PR China
7. Kim H, Kim K, Oh SJ, Lee S, Woo JH, Kim JH, Cha YK, Kim K, Chung MJ. AI-assisted Analysis to Facilitate Detection of Humeral Lesions on Chest Radiographs. Radiol Artif Intell 2024; 6:e230094. PMID: 38446041; DOI: 10.1148/ryai.230094.
Abstract
Purpose To develop an artificial intelligence (AI) system for humeral tumor detection on chest radiographs (CRs) and evaluate its impact on reader performance. Materials and Methods In this retrospective study, 14 709 CRs (January 2000 to December 2021) were collected from 13 468 patients, including CT-proven normal (n = 13 116) and humeral tumor (n = 1593) cases. The data were divided into training and test groups. A novel training method called false-positive activation area reduction (FPAR) was introduced to enhance diagnostic performance by focusing on the humeral region. The AI program and 10 radiologists were assessed using holdout test set 1, wherein the radiologists were tested twice (with and without the AI test results). The performance of the AI system was further evaluated using holdout test set 2, comprising 10 497 normal images. Receiver operating characteristic analyses were conducted to evaluate model performance. Results FPAR application in the AI program improved its performance compared with a conventional model based on the area under the receiver operating characteristic curve (0.87 vs 0.82, P = .04). The proposed AI system also demonstrated improved tumor localization accuracy (80% vs 57%, P < .001). In holdout test set 2, the proposed AI system exhibited a false-positive rate of 2%. AI assistance improved the radiologists' sensitivity, specificity, and accuracy by 8.9%, 1.2%, and 3.5%, respectively (P < .05 for all). Conclusion The proposed AI tool incorporating FPAR improved humeral tumor detection on CRs and reduced false-positive results in tumor visualization. It may serve as a supportive diagnostic tool to alert radiologists to humeral abnormalities. Keywords: Artificial Intelligence, Conventional Radiography, Humerus, Machine Learning, Shoulder, Tumor. Supplemental material is available for this article. © RSNA, 2024.
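The 0.87-vs-0.82 comparison above is an area under the receiver operating characteristic curve (AUC). AUC has a useful rank interpretation: the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative case, with ties counted as half. A minimal sketch of that statistic (the scores below are hypothetical, not the study's outputs):

```python
def roc_auc(scores, labels):
    # Probability that a random positive outscores a random negative
    # (ties count half); equivalent to the area under the ROC curve.
    pos = [s for s, y in zip(scores, labels) if y]
    neg = [s for s, y in zip(scores, labels) if not y]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical model scores; label 1 = tumor present
scores = [0.9, 0.4, 0.5, 0.3]
labels = [1, 1, 0, 0]
print(roc_auc(scores, labels))  # → 0.75
```

A perfectly ranking model scores 1.0; a model no better than chance scores about 0.5, which is why the FPAR gain from 0.82 to 0.87 is meaningful.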
Affiliation(s)
- Harim Kim, Kyungsu Kim, Seong Je Oh, Sungjoo Lee, Jung Han Woo, Jong Hee Kim, Yoon Ki Cha, Kyunga Kim, Myung Jin Chung
- From the Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-Ro, Gangnam-Gu, Seoul 06351, South Korea (H.K., J.H.W., J.H.K., Y.K.C., M.J.C.); Medical AI Research Center, Samsung Medical Center, Seoul, South Korea (Kyungsu Kim, M.J.C.); Department of Data Convergence and Future Medicine, Sungkyunkwan University School of Medicine, Seoul, South Korea (Kyungsu Kim, Kyunga Kim, M.J.C.); and Department of Health Sciences and Technology (S.J.O.) and Department of Digital Health (S.L., Kyunga Kim), Samsung Advanced Institute for Health Sciences & Technology, Sungkyunkwan University, Seoul, South Korea
8. Rifino N, Bersano A, Padovani A, Conti GM, Cavallini A, Colombo L, Priori A, Pianese R, Gammone MR, Erbetta A, Ciceri EF, Sattin D, Varvello R, Parati EA, Scelzo E. Virtual hospital and artificial intelligence: a first step towards the application of an innovative health system for the care of rare cerebrovascular diseases. Neurol Sci 2024; 45:2087-2095. PMID: 38017154; DOI: 10.1007/s10072-023-07206-9.
Abstract
The development of virtual care options, including virtual hospital platforms, is rapidly changing healthcare, especially in the pandemic period, owing to difficulties with in-person consultations. For this purpose, in 2020 a neurological Virtual Hospital (NOVHO) pilot study was implemented, in order to test a multidisciplinary second-opinion evaluation system for neurological diseases. Cerebrovascular diseases (CVDs) represent a preponderant part of neurological disorders. However, more than 30% of strokes remain of undetermined source, and rare CVDs (rCVDs) are often misdiagnosed. The lack of data on the phenotype and clinical course of rCVD patients makes diagnosis and the development of therapies challenging. Since the diagnosis and care of rCVDs require adequate expertise and instrumental tools, their management is mostly allocated to a few experienced hospitals, which hinders equity in access to care. Therefore, strategies for virtual consultation are increasingly applied, with some advantage for patient management in peripheral areas as well. Moreover, health data are becoming increasingly complex and require new technologies to be managed. Artificial Intelligence is beginning to be applied to the healthcare system and, together with the Internet of Things, will enable the creation of virtual models with predictive abilities, bringing healthcare one step closer to personalized medicine. Herein, we report the preliminary results of the NOVHO project and present the methodology of a new project aimed at developing an innovative multidisciplinary and multicentre virtual care model specific to rCVDs (NOVHO-rCVD), which combines the virtual hospital approach with a deep-learning machine system.
Affiliation(s)
- Nicola Rifino
- Cerebrovascular Unit, Fondazione IRCCS Istituto Neurologico Carlo Besta, Via Celoria 11, 20133, Milan, Italy.
- Anna Bersano
- Cerebrovascular Unit, Fondazione IRCCS Istituto Neurologico Carlo Besta, Via Celoria 11, 20133, Milan, Italy
- Alessandro Padovani
- Department of Clinical and Experimental Sciences, Neurology Clinic, University of Brescia, Brescia, Italy
- Giancarlo Maria Conti
- Department of Neurology, ASST Nord Milano, Ospedale Bassini, Cinisello Balsamo, Italy
- Anna Cavallini
- Cerebrovascular Disease and Stroke Unit, IRCCS Fondazione Mondino, Pavia, Italy
- Alberto Priori
- Department of Neurology, Ospedale San Paolo, Milan, Italy
- Raffaella Pianese
- S.I.T.R.A, Fondazione IRCCS Istituto Neurologico Carlo Besta, Milan, Italy
- Alessandra Erbetta
- Service of Neuroradiology, Fondazione IRCCS Istituto Neurologico Carlo Besta, Milan, Italy
- Elisa Francesca Ciceri
- Diagnostic Radiology and Interventional Neuroradiology Unit, Fondazione IRCCS Istituto Neurologico Carlo Besta, Milan, Italy
- Davide Sattin
- Istituti Clinici Scientifici Maugeri IRCCS, Via Camaldoli 64, 20138, Milan, Italy
- Emma Scelzo
- Department of Neurology, Ospedale San Paolo, Milan, Italy
9
Villaizán-Vallelado M, Salvatori M, Carro B, Sanchez-Esguevillas AJ. Graph Neural Network contextual embedding for Deep Learning on tabular data. Neural Netw 2024; 173:106180. [PMID: 38447303 DOI: 10.1016/j.neunet.2024.106180] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2023] [Revised: 01/29/2024] [Accepted: 02/13/2024] [Indexed: 03/08/2024]
Abstract
Industries across the board are trying to leverage Artificial Intelligence (AI) on their existing big data, which is typically available in so-called tabular form: each record is composed of a number of heterogeneous continuous and categorical columns, also known as features. Deep Learning (DL) has constituted a major breakthrough for AI in fields related to human skills such as natural language processing, but its applicability to tabular data has been more challenging, where more classical Machine Learning (ML) models such as tree-based ensembles usually perform better. This paper presents a novel DL model that uses a Graph Neural Network (GNN), specifically an Interaction Network (IN), for contextual embedding and for modeling interactions among tabular features. It outperforms the DL models of a recently published benchmark survey based on seven public datasets, and it achieves competitive results when compared to boosted-tree solutions.
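The contextual embedding described in this abstract can be pictured as message passing over a fully connected graph whose nodes are the table's columns. The numpy-only toy below is an illustrative sketch of one Interaction-Network-style step (the dimensions, weights, and function names are assumptions for exposition, not the paper's actual model): an edge function computes a message from each pair of column embeddings, and a node-update function contextualizes each column from its aggregated messages.

```python
import numpy as np

rng = np.random.default_rng(0)

n_features, d = 4, 8                  # 4 tabular columns, 8-dim embeddings
W_msg = rng.normal(size=(2 * d, d))   # edge (relation) function weights
W_upd = rng.normal(size=(2 * d, d))   # node-update function weights

def interaction_step(H):
    """One message-passing step over a fully connected feature graph.

    H: (n_features, d) array of per-column embeddings
    (e.g. embedded categorical codes or projected continuous values).
    """
    n = H.shape[0]
    msgs = np.zeros_like(H)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            pair = np.concatenate([H[i], H[j]])   # receiver + sender states
            msgs[i] += np.tanh(pair @ W_msg)      # edge function output
    upd_in = np.concatenate([H, msgs], axis=1)    # node state + aggregated messages
    return np.tanh(upd_in @ W_upd)                # updated, contextualized embeddings

H0 = rng.normal(size=(n_features, d))
H1 = interaction_step(H0)
print(H1.shape)  # (4, 8): one contextualized embedding per column
```

The resulting per-column embeddings would then feed a downstream prediction head; the paper's contribution is using this graph view so each feature's representation depends on the other features in the same row.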
Affiliation(s)
- Mario Villaizán-Vallelado
- Artificial Intelligence Laboratory (AI-Lab), Telefonica I+D, Spain; Universidad de Valladolid, Valladolid, 47011, Spain.
- Matteo Salvatori
- Artificial Intelligence Laboratory (AI-Lab), Telefonica I+D, Spain.
- Belén Carro
- Universidad de Valladolid, Valladolid, 47011, Spain.
10
Contino S, Cruciata L, Gambino O, Pirrone R. IODeep: An IOD for the introduction of deep learning in the DICOM standard. Comput Methods Programs Biomed 2024; 248:108113. [PMID: 38479148 DOI: 10.1016/j.cmpb.2024.108113] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/15/2024] [Revised: 02/22/2024] [Accepted: 03/01/2024] [Indexed: 04/02/2024]
Abstract
BACKGROUND AND OBJECTIVE: In recent years, Artificial Intelligence (AI), and in particular Deep Neural Networks (DNN), became a relevant research topic in biomedical image segmentation due to the availability of ever more datasets along with the establishment of well-known competitions. Despite the popularity of DNN-based segmentation on the research side, these techniques remain almost unused in daily clinical practice, even though they could effectively support the physician during the diagnostic process. Apart from the issues related to the explainability of a neural model's predictions, such systems are not integrated in the diagnostic workflow, and a standardization of their use is needed to achieve this goal. METHODS: This paper presents IODeep, a new DICOM Information Object Definition (IOD) aimed at storing both the weights and the architecture of a DNN already trained on a particular image dataset that is labeled as regards the acquisition modality, the anatomical region, and the disease under investigation. RESULTS: The IOD architecture is presented along with an algorithm for selecting a DNN from the PACS server based on the labels outlined above, and a simple PACS viewer purposely designed to demonstrate the effectiveness of the DICOM integration; no modifications are required on the PACS server side. A service-based architecture supporting the entire workflow has also been implemented. CONCLUSION: IODeep ensures full integration of a trained AI model in a DICOM infrastructure, and it also enables a scenario where a trained model can be either fine-tuned with hospital data or trained in a federated learning scheme shared by different hospitals. In this way, AI models can be tailored to the real data produced by a radiology ward, thus improving the physician's decision-making process. Source code is freely available at https://github.com/CHILab1/IODeep.git.
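The label-based model selection the abstract describes can be illustrated with a minimal lookup. The sketch below is hypothetical throughout: the field names, catalog, and URIs are placeholders for exposition, not the actual IODeep IOD attributes or DICOM tags, which are defined in the paper and repository.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class StoredModel:
    """A trained DNN stored alongside the three labels IODeep-style selection keys on."""
    modality: str      # acquisition modality, e.g. "MR", "CT"
    region: str        # anatomical region, e.g. "brain"
    disease: str       # disease under investigation, e.g. "glioma"
    weights_uri: str   # where the serialized weights/architecture live (placeholder)

# Hypothetical catalog of models held by the PACS server.
CATALOG = [
    StoredModel("MR", "brain", "glioma", "pacs://models/1"),
    StoredModel("CT", "chest", "nodule", "pacs://models/2"),
]

def select_model(modality: str, region: str, disease: str,
                 catalog=CATALOG) -> Optional[StoredModel]:
    """Return the stored model whose labels match the study, or None if absent."""
    for m in catalog:
        if (m.modality, m.region, m.disease) == (modality, region, disease):
            return m
    return None

print(select_model("MR", "brain", "glioma").weights_uri)  # pacs://models/1
```

In the actual system these labels travel inside the DICOM object itself, so a viewer can fetch the right network for the study at hand without any change on the PACS server side.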
Affiliation(s)
- Salvatore Contino
- Department of Engineering, University of Palermo, Palermo, 90128, Sicily, Italy
- Luca Cruciata
- Department of Engineering, University of Palermo, Palermo, 90128, Sicily, Italy
- Orazio Gambino
- Department of Engineering, University of Palermo, Palermo, 90128, Sicily, Italy.
- Roberto Pirrone
- Department of Engineering, University of Palermo, Palermo, 90128, Sicily, Italy
11
Kitamura FC, Prevedello LM, Colak E, Halabi SS, Lungren MP, Ball RL, Kalpathy-Cramer J, Kahn CE, Richards T, Talbott JF, Shih G, Lin HM, Andriole KP, Vazirabad M, Erickson BJ, Flanders AE, Mongan J. Lessons Learned in Building Expertly Annotated Multi-Institution Datasets and Hosting the RSNA AI Challenges. Radiol Artif Intell 2024; 6:e230227. [PMID: 38477659 DOI: 10.1148/ryai.230227] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/14/2024]
Abstract
The Radiological Society of North America (RSNA) has held artificial intelligence competitions to tackle real-world medical imaging problems at least annually since 2017. This article examines the challenges and processes involved in organizing these competitions, with a specific emphasis on the creation and curation of high-quality datasets. Collecting diverse and representative medical imaging data involves addressing issues of patient privacy and data security. Furthermore, ensuring quality and consistency in the data, which includes expert labeling and accounting for various patient and imaging characteristics, necessitates substantial planning and resources. Overcoming these obstacles requires meticulous project management and adherence to strict timelines. The article also highlights the potential of crowdsourced annotation to advance medical imaging research. The RSNA competitions have achieved effective global engagement, resulting in innovative solutions to complex medical imaging problems and thus potentially transforming health care by enhancing diagnostic accuracy and patient outcomes. Keywords: Use of AI in Education, Artificial Intelligence © RSNA, 2024.
Affiliation(s)
- Felipe C Kitamura
- From the Department of Applied Innovation and AI, Dasa, São Paulo, Brazil (F.C.K.); Department of Diagnostic Imaging, Universidade Federal de São Paulo (Unifesp), Av Prof Ascendino Reis, 1245, 131, São Paulo, SP, Brazil 04027-000 (F.C.K.); Department of Radiology, The Ohio State University Wexner Medical Center, Columbus, Ohio (L.M.P.); Department of Medical Imaging, University of Toronto, Toronto, Canada (E.C.); Ann and Robert H. Lurie Children's Hospital of Chicago, Chicago, Ill (S.S.H.); Microsoft HLS, Redmond, Wash (M.P.L.); Department of Biomedical Data Science, Stanford University, Stanford, Calif (M.P.L.); The Jackson Laboratory, Bar Harbor, Maine (R.L.B.); Department of Ophthalmology, University of Colorado Denver School of Medicine, Aurora, Colo (J.K.C.); Department of Radiology, University of Pennsylvania, Philadelphia, Pa (C.E.K.); Department of Radiology, University of Utah, Salt Lake City, Utah (T.R.); Department of Radiology and Biomedical Imaging (M.P.L., J.F.T., J.M.) and Center for Intelligent Imaging (J.M.), University of California San Francisco, San Francisco, Calif; Department of Radiology, Weill Cornell Medical College, New York, NY (G.S.); Department of Medical Imaging, Unity Health Toronto, Toronto, Canada (H.M.L.); Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, MGB Data Science Office, Boston, Mass (K.P.A.); Informatics Department, Radiological Society of North America, Oak Brook, Ill (M.V.); Department of Radiology, Mayo Clinic, Rochester, Minn (B.J.E.); and Department of Radiology, Thomas Jefferson University, Philadelphia, Pa (A.E.F.)
| | - Luciano M Prevedello
- From the Department of Applied Innovation and AI, Dasa, São Paulo, Brazil (F.C.K.); Department of Diagnostic Imaging, Universidade Federal de São Paulo (Unifesp), Av Prof Ascendino Reis, 1245, 131, São Paulo, SP, Brazil 04027-000 (F.C.K.); Department of Radiology, The Ohio State University Wexner Medical Center, Columbus, Ohio (L.M.P.); Department of Medical Imaging, University of Toronto, Toronto, Canada (E.C.); Ann and Robert H. Lurie Children's Hospital of Chicago, Chicago, Ill (S.S.H.); Microsoft HLS, Redmond, Wash (M.P.L.); Department of Biomedical Data Science, Stanford University, Stanford, Calif (M.P.L.); The Jackson Laboratory, Bar Harbor, Maine (R.L.B.); Department of Ophthalmology, University of Colorado Denver School of Medicine, Aurora, Colo (J.K.C.); Department of Radiology, University of Pennsylvania, Philadelphia, Pa (C.E.K.); Department of Radiology, University of Utah, Salt Lake City, Utah (T.R.); Department of Radiology and Biomedical Imaging (M.P.L., J.F.T., J.M.) and Center for Intelligent Imaging (J.M.), University of California San Francisco, San Francisco, Calif; Department of Radiology, Weill Cornell Medical College, New York, NY (G.S.); Department of Medical Imaging, Unity Health Toronto, Toronto, Canada (H.M.L.); Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, MGB Data Science Office, Boston, Mass (K.P.A.); Informatics Department, Radiological Society of North America, Oak Brook, Ill (M.V.); Department of Radiology, Mayo Clinic, Rochester, Minn (B.J.E.); and Department of Radiology, Thomas Jefferson University, Philadelphia, Pa (A.E.F.)
| | - Errol Colak
- From the Department of Applied Innovation and AI, Dasa, São Paulo, Brazil (F.C.K.); Department of Diagnostic Imaging, Universidade Federal de São Paulo (Unifesp), Av Prof Ascendino Reis, 1245, 131, São Paulo, SP, Brazil 04027-000 (F.C.K.); Department of Radiology, The Ohio State University Wexner Medical Center, Columbus, Ohio (L.M.P.); Department of Medical Imaging, University of Toronto, Toronto, Canada (E.C.); Ann and Robert H. Lurie Children's Hospital of Chicago, Chicago, Ill (S.S.H.); Microsoft HLS, Redmond, Wash (M.P.L.); Department of Biomedical Data Science, Stanford University, Stanford, Calif (M.P.L.); The Jackson Laboratory, Bar Harbor, Maine (R.L.B.); Department of Ophthalmology, University of Colorado Denver School of Medicine, Aurora, Colo (J.K.C.); Department of Radiology, University of Pennsylvania, Philadelphia, Pa (C.E.K.); Department of Radiology, University of Utah, Salt Lake City, Utah (T.R.); Department of Radiology and Biomedical Imaging (M.P.L., J.F.T., J.M.) and Center for Intelligent Imaging (J.M.), University of California San Francisco, San Francisco, Calif; Department of Radiology, Weill Cornell Medical College, New York, NY (G.S.); Department of Medical Imaging, Unity Health Toronto, Toronto, Canada (H.M.L.); Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, MGB Data Science Office, Boston, Mass (K.P.A.); Informatics Department, Radiological Society of North America, Oak Brook, Ill (M.V.); Department of Radiology, Mayo Clinic, Rochester, Minn (B.J.E.); and Department of Radiology, Thomas Jefferson University, Philadelphia, Pa (A.E.F.)
| | - Safwan S Halabi
- From the Department of Applied Innovation and AI, Dasa, São Paulo, Brazil (F.C.K.); Department of Diagnostic Imaging, Universidade Federal de São Paulo (Unifesp), Av Prof Ascendino Reis, 1245, 131, São Paulo, SP, Brazil 04027-000 (F.C.K.); Department of Radiology, The Ohio State University Wexner Medical Center, Columbus, Ohio (L.M.P.); Department of Medical Imaging, University of Toronto, Toronto, Canada (E.C.); Ann and Robert H. Lurie Children's Hospital of Chicago, Chicago, Ill (S.S.H.); Microsoft HLS, Redmond, Wash (M.P.L.); Department of Biomedical Data Science, Stanford University, Stanford, Calif (M.P.L.); The Jackson Laboratory, Bar Harbor, Maine (R.L.B.); Department of Ophthalmology, University of Colorado Denver School of Medicine, Aurora, Colo (J.K.C.); Department of Radiology, University of Pennsylvania, Philadelphia, Pa (C.E.K.); Department of Radiology, University of Utah, Salt Lake City, Utah (T.R.); Department of Radiology and Biomedical Imaging (M.P.L., J.F.T., J.M.) and Center for Intelligent Imaging (J.M.), University of California San Francisco, San Francisco, Calif; Department of Radiology, Weill Cornell Medical College, New York, NY (G.S.); Department of Medical Imaging, Unity Health Toronto, Toronto, Canada (H.M.L.); Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, MGB Data Science Office, Boston, Mass (K.P.A.); Informatics Department, Radiological Society of North America, Oak Brook, Ill (M.V.); Department of Radiology, Mayo Clinic, Rochester, Minn (B.J.E.); and Department of Radiology, Thomas Jefferson University, Philadelphia, Pa (A.E.F.)
| | - Matthew P Lungren
- From the Department of Applied Innovation and AI, Dasa, São Paulo, Brazil (F.C.K.); Department of Diagnostic Imaging, Universidade Federal de São Paulo (Unifesp), Av Prof Ascendino Reis, 1245, 131, São Paulo, SP, Brazil 04027-000 (F.C.K.); Department of Radiology, The Ohio State University Wexner Medical Center, Columbus, Ohio (L.M.P.); Department of Medical Imaging, University of Toronto, Toronto, Canada (E.C.); Ann and Robert H. Lurie Children's Hospital of Chicago, Chicago, Ill (S.S.H.); Microsoft HLS, Redmond, Wash (M.P.L.); Department of Biomedical Data Science, Stanford University, Stanford, Calif (M.P.L.); The Jackson Laboratory, Bar Harbor, Maine (R.L.B.); Department of Ophthalmology, University of Colorado Denver School of Medicine, Aurora, Colo (J.K.C.); Department of Radiology, University of Pennsylvania, Philadelphia, Pa (C.E.K.); Department of Radiology, University of Utah, Salt Lake City, Utah (T.R.); Department of Radiology and Biomedical Imaging (M.P.L., J.F.T., J.M.) and Center for Intelligent Imaging (J.M.), University of California San Francisco, San Francisco, Calif; Department of Radiology, Weill Cornell Medical College, New York, NY (G.S.); Department of Medical Imaging, Unity Health Toronto, Toronto, Canada (H.M.L.); Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, MGB Data Science Office, Boston, Mass (K.P.A.); Informatics Department, Radiological Society of North America, Oak Brook, Ill (M.V.); Department of Radiology, Mayo Clinic, Rochester, Minn (B.J.E.); and Department of Radiology, Thomas Jefferson University, Philadelphia, Pa (A.E.F.)
| | - Robyn L Ball
- From the Department of Applied Innovation and AI, Dasa, São Paulo, Brazil (F.C.K.); Department of Diagnostic Imaging, Universidade Federal de São Paulo (Unifesp), Av Prof Ascendino Reis, 1245, 131, São Paulo, SP, Brazil 04027-000 (F.C.K.); Department of Radiology, The Ohio State University Wexner Medical Center, Columbus, Ohio (L.M.P.); Department of Medical Imaging, University of Toronto, Toronto, Canada (E.C.); Ann and Robert H. Lurie Children's Hospital of Chicago, Chicago, Ill (S.S.H.); Microsoft HLS, Redmond, Wash (M.P.L.); Department of Biomedical Data Science, Stanford University, Stanford, Calif (M.P.L.); The Jackson Laboratory, Bar Harbor, Maine (R.L.B.); Department of Ophthalmology, University of Colorado Denver School of Medicine, Aurora, Colo (J.K.C.); Department of Radiology, University of Pennsylvania, Philadelphia, Pa (C.E.K.); Department of Radiology, University of Utah, Salt Lake City, Utah (T.R.); Department of Radiology and Biomedical Imaging (M.P.L., J.F.T., J.M.) and Center for Intelligent Imaging (J.M.), University of California San Francisco, San Francisco, Calif; Department of Radiology, Weill Cornell Medical College, New York, NY (G.S.); Department of Medical Imaging, Unity Health Toronto, Toronto, Canada (H.M.L.); Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, MGB Data Science Office, Boston, Mass (K.P.A.); Informatics Department, Radiological Society of North America, Oak Brook, Ill (M.V.); Department of Radiology, Mayo Clinic, Rochester, Minn (B.J.E.); and Department of Radiology, Thomas Jefferson University, Philadelphia, Pa (A.E.F.)
| | - Jayashree Kalpathy-Cramer
- From the Department of Applied Innovation and AI, Dasa, São Paulo, Brazil (F.C.K.); Department of Diagnostic Imaging, Universidade Federal de São Paulo (Unifesp), Av Prof Ascendino Reis, 1245, 131, São Paulo, SP, Brazil 04027-000 (F.C.K.); Department of Radiology, The Ohio State University Wexner Medical Center, Columbus, Ohio (L.M.P.); Department of Medical Imaging, University of Toronto, Toronto, Canada (E.C.); Ann and Robert H. Lurie Children's Hospital of Chicago, Chicago, Ill (S.S.H.); Microsoft HLS, Redmond, Wash (M.P.L.); Department of Biomedical Data Science, Stanford University, Stanford, Calif (M.P.L.); The Jackson Laboratory, Bar Harbor, Maine (R.L.B.); Department of Ophthalmology, University of Colorado Denver School of Medicine, Aurora, Colo (J.K.C.); Department of Radiology, University of Pennsylvania, Philadelphia, Pa (C.E.K.); Department of Radiology, University of Utah, Salt Lake City, Utah (T.R.); Department of Radiology and Biomedical Imaging (M.P.L., J.F.T., J.M.) and Center for Intelligent Imaging (J.M.), University of California San Francisco, San Francisco, Calif; Department of Radiology, Weill Cornell Medical College, New York, NY (G.S.); Department of Medical Imaging, Unity Health Toronto, Toronto, Canada (H.M.L.); Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, MGB Data Science Office, Boston, Mass (K.P.A.); Informatics Department, Radiological Society of North America, Oak Brook, Ill (M.V.); Department of Radiology, Mayo Clinic, Rochester, Minn (B.J.E.); and Department of Radiology, Thomas Jefferson University, Philadelphia, Pa (A.E.F.)
| | - Charles E Kahn
- From the Department of Applied Innovation and AI, Dasa, São Paulo, Brazil (F.C.K.); Department of Diagnostic Imaging, Universidade Federal de São Paulo (Unifesp), Av Prof Ascendino Reis, 1245, 131, São Paulo, SP, Brazil 04027-000 (F.C.K.); Department of Radiology, The Ohio State University Wexner Medical Center, Columbus, Ohio (L.M.P.); Department of Medical Imaging, University of Toronto, Toronto, Canada (E.C.); Ann and Robert H. Lurie Children's Hospital of Chicago, Chicago, Ill (S.S.H.); Microsoft HLS, Redmond, Wash (M.P.L.); Department of Biomedical Data Science, Stanford University, Stanford, Calif (M.P.L.); The Jackson Laboratory, Bar Harbor, Maine (R.L.B.); Department of Ophthalmology, University of Colorado Denver School of Medicine, Aurora, Colo (J.K.C.); Department of Radiology, University of Pennsylvania, Philadelphia, Pa (C.E.K.); Department of Radiology, University of Utah, Salt Lake City, Utah (T.R.); Department of Radiology and Biomedical Imaging (M.P.L., J.F.T., J.M.) and Center for Intelligent Imaging (J.M.), University of California San Francisco, San Francisco, Calif; Department of Radiology, Weill Cornell Medical College, New York, NY (G.S.); Department of Medical Imaging, Unity Health Toronto, Toronto, Canada (H.M.L.); Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, MGB Data Science Office, Boston, Mass (K.P.A.); Informatics Department, Radiological Society of North America, Oak Brook, Ill (M.V.); Department of Radiology, Mayo Clinic, Rochester, Minn (B.J.E.); and Department of Radiology, Thomas Jefferson University, Philadelphia, Pa (A.E.F.)
| | - Tyler Richards
- From the Department of Applied Innovation and AI, Dasa, São Paulo, Brazil (F.C.K.); Department of Diagnostic Imaging, Universidade Federal de São Paulo (Unifesp), Av Prof Ascendino Reis, 1245, 131, São Paulo, SP, Brazil 04027-000 (F.C.K.); Department of Radiology, The Ohio State University Wexner Medical Center, Columbus, Ohio (L.M.P.); Department of Medical Imaging, University of Toronto, Toronto, Canada (E.C.); Ann and Robert H. Lurie Children's Hospital of Chicago, Chicago, Ill (S.S.H.); Microsoft HLS, Redmond, Wash (M.P.L.); Department of Biomedical Data Science, Stanford University, Stanford, Calif (M.P.L.); The Jackson Laboratory, Bar Harbor, Maine (R.L.B.); Department of Ophthalmology, University of Colorado Denver School of Medicine, Aurora, Colo (J.K.C.); Department of Radiology, University of Pennsylvania, Philadelphia, Pa (C.E.K.); Department of Radiology, University of Utah, Salt Lake City, Utah (T.R.); Department of Radiology and Biomedical Imaging (M.P.L., J.F.T., J.M.) and Center for Intelligent Imaging (J.M.), University of California San Francisco, San Francisco, Calif; Department of Radiology, Weill Cornell Medical College, New York, NY (G.S.); Department of Medical Imaging, Unity Health Toronto, Toronto, Canada (H.M.L.); Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, MGB Data Science Office, Boston, Mass (K.P.A.); Informatics Department, Radiological Society of North America, Oak Brook, Ill (M.V.); Department of Radiology, Mayo Clinic, Rochester, Minn (B.J.E.); and Department of Radiology, Thomas Jefferson University, Philadelphia, Pa (A.E.F.)
| | - Jason F Talbott
- From the Department of Applied Innovation and AI, Dasa, São Paulo, Brazil (F.C.K.); Department of Diagnostic Imaging, Universidade Federal de São Paulo (Unifesp), Av Prof Ascendino Reis, 1245, 131, São Paulo, SP, Brazil 04027-000 (F.C.K.); Department of Radiology, The Ohio State University Wexner Medical Center, Columbus, Ohio (L.M.P.); Department of Medical Imaging, University of Toronto, Toronto, Canada (E.C.); Ann and Robert H. Lurie Children's Hospital of Chicago, Chicago, Ill (S.S.H.); Microsoft HLS, Redmond, Wash (M.P.L.); Department of Biomedical Data Science, Stanford University, Stanford, Calif (M.P.L.); The Jackson Laboratory, Bar Harbor, Maine (R.L.B.); Department of Ophthalmology, University of Colorado Denver School of Medicine, Aurora, Colo (J.K.C.); Department of Radiology, University of Pennsylvania, Philadelphia, Pa (C.E.K.); Department of Radiology, University of Utah, Salt Lake City, Utah (T.R.); Department of Radiology and Biomedical Imaging (M.P.L., J.F.T., J.M.) and Center for Intelligent Imaging (J.M.), University of California San Francisco, San Francisco, Calif; Department of Radiology, Weill Cornell Medical College, New York, NY (G.S.); Department of Medical Imaging, Unity Health Toronto, Toronto, Canada (H.M.L.); Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, MGB Data Science Office, Boston, Mass (K.P.A.); Informatics Department, Radiological Society of North America, Oak Brook, Ill (M.V.); Department of Radiology, Mayo Clinic, Rochester, Minn (B.J.E.); and Department of Radiology, Thomas Jefferson University, Philadelphia, Pa (A.E.F.)
| | - George Shih
- From the Department of Applied Innovation and AI, Dasa, São Paulo, Brazil (F.C.K.); Department of Diagnostic Imaging, Universidade Federal de São Paulo (Unifesp), Av Prof Ascendino Reis, 1245, 131, São Paulo, SP, Brazil 04027-000 (F.C.K.); Department of Radiology, The Ohio State University Wexner Medical Center, Columbus, Ohio (L.M.P.); Department of Medical Imaging, University of Toronto, Toronto, Canada (E.C.); Ann and Robert H. Lurie Children's Hospital of Chicago, Chicago, Ill (S.S.H.); Microsoft HLS, Redmond, Wash (M.P.L.); Department of Biomedical Data Science, Stanford University, Stanford, Calif (M.P.L.); The Jackson Laboratory, Bar Harbor, Maine (R.L.B.); Department of Ophthalmology, University of Colorado Denver School of Medicine, Aurora, Colo (J.K.C.); Department of Radiology, University of Pennsylvania, Philadelphia, Pa (C.E.K.); Department of Radiology, University of Utah, Salt Lake City, Utah (T.R.); Department of Radiology and Biomedical Imaging (M.P.L., J.F.T., J.M.) and Center for Intelligent Imaging (J.M.), University of California San Francisco, San Francisco, Calif; Department of Radiology, Weill Cornell Medical College, New York, NY (G.S.); Department of Medical Imaging, Unity Health Toronto, Toronto, Canada (H.M.L.); Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, MGB Data Science Office, Boston, Mass (K.P.A.); Informatics Department, Radiological Society of North America, Oak Brook, Ill (M.V.); Department of Radiology, Mayo Clinic, Rochester, Minn (B.J.E.); and Department of Radiology, Thomas Jefferson University, Philadelphia, Pa (A.E.F.)
| | - Hui Ming Lin
- From the Department of Applied Innovation and AI, Dasa, São Paulo, Brazil (F.C.K.); Department of Diagnostic Imaging, Universidade Federal de São Paulo (Unifesp), Av Prof Ascendino Reis, 1245, 131, São Paulo, SP, Brazil 04027-000 (F.C.K.); Department of Radiology, The Ohio State University Wexner Medical Center, Columbus, Ohio (L.M.P.); Department of Medical Imaging, University of Toronto, Toronto, Canada (E.C.); Ann and Robert H. Lurie Children's Hospital of Chicago, Chicago, Ill (S.S.H.); Microsoft HLS, Redmond, Wash (M.P.L.); Department of Biomedical Data Science, Stanford University, Stanford, Calif (M.P.L.); The Jackson Laboratory, Bar Harbor, Maine (R.L.B.); Department of Ophthalmology, University of Colorado Denver School of Medicine, Aurora, Colo (J.K.C.); Department of Radiology, University of Pennsylvania, Philadelphia, Pa (C.E.K.); Department of Radiology, University of Utah, Salt Lake City, Utah (T.R.); Department of Radiology and Biomedical Imaging (M.P.L., J.F.T., J.M.) and Center for Intelligent Imaging (J.M.), University of California San Francisco, San Francisco, Calif; Department of Radiology, Weill Cornell Medical College, New York, NY (G.S.); Department of Medical Imaging, Unity Health Toronto, Toronto, Canada (H.M.L.); Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, MGB Data Science Office, Boston, Mass (K.P.A.); Informatics Department, Radiological Society of North America, Oak Brook, Ill (M.V.); Department of Radiology, Mayo Clinic, Rochester, Minn (B.J.E.); and Department of Radiology, Thomas Jefferson University, Philadelphia, Pa (A.E.F.)
| | - Katherine P Andriole
- From the Department of Applied Innovation and AI, Dasa, São Paulo, Brazil (F.C.K.); Department of Diagnostic Imaging, Universidade Federal de São Paulo (Unifesp), Av Prof Ascendino Reis, 1245, 131, São Paulo, SP, Brazil 04027-000 (F.C.K.); Department of Radiology, The Ohio State University Wexner Medical Center, Columbus, Ohio (L.M.P.); Department of Medical Imaging, University of Toronto, Toronto, Canada (E.C.); Ann and Robert H. Lurie Children's Hospital of Chicago, Chicago, Ill (S.S.H.); Microsoft HLS, Redmond, Wash (M.P.L.); Department of Biomedical Data Science, Stanford University, Stanford, Calif (M.P.L.); The Jackson Laboratory, Bar Harbor, Maine (R.L.B.); Department of Ophthalmology, University of Colorado Denver School of Medicine, Aurora, Colo (J.K.C.); Department of Radiology, University of Pennsylvania, Philadelphia, Pa (C.E.K.); Department of Radiology, University of Utah, Salt Lake City, Utah (T.R.); Department of Radiology and Biomedical Imaging (M.P.L., J.F.T., J.M.) and Center for Intelligent Imaging (J.M.), University of California San Francisco, San Francisco, Calif; Department of Radiology, Weill Cornell Medical College, New York, NY (G.S.); Department of Medical Imaging, Unity Health Toronto, Toronto, Canada (H.M.L.); Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, MGB Data Science Office, Boston, Mass (K.P.A.); Informatics Department, Radiological Society of North America, Oak Brook, Ill (M.V.); Department of Radiology, Mayo Clinic, Rochester, Minn (B.J.E.); and Department of Radiology, Thomas Jefferson University, Philadelphia, Pa (A.E.F.)
| | - Maryam Vazirabad
- From the Department of Applied Innovation and AI, Dasa, São Paulo, Brazil (F.C.K.); Department of Diagnostic Imaging, Universidade Federal de São Paulo (Unifesp), Av Prof Ascendino Reis, 1245, 131, São Paulo, SP, Brazil 04027-000 (F.C.K.); Department of Radiology, The Ohio State University Wexner Medical Center, Columbus, Ohio (L.M.P.); Department of Medical Imaging, University of Toronto, Toronto, Canada (E.C.); Ann and Robert H. Lurie Children's Hospital of Chicago, Chicago, Ill (S.S.H.); Microsoft HLS, Redmond, Wash (M.P.L.); Department of Biomedical Data Science, Stanford University, Stanford, Calif (M.P.L.); The Jackson Laboratory, Bar Harbor, Maine (R.L.B.); Department of Ophthalmology, University of Colorado Denver School of Medicine, Aurora, Colo (J.K.C.); Department of Radiology, University of Pennsylvania, Philadelphia, Pa (C.E.K.); Department of Radiology, University of Utah, Salt Lake City, Utah (T.R.); Department of Radiology and Biomedical Imaging (M.P.L., J.F.T., J.M.) and Center for Intelligent Imaging (J.M.), University of California San Francisco, San Francisco, Calif; Department of Radiology, Weill Cornell Medical College, New York, NY (G.S.); Department of Medical Imaging, Unity Health Toronto, Toronto, Canada (H.M.L.); Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, MGB Data Science Office, Boston, Mass (K.P.A.); Informatics Department, Radiological Society of North America, Oak Brook, Ill (M.V.); Department of Radiology, Mayo Clinic, Rochester, Minn (B.J.E.); and Department of Radiology, Thomas Jefferson University, Philadelphia, Pa (A.E.F.)
- Bradley J Erickson
- Adam E Flanders
- John Mongan
12
Martins A, Londral A, L Nunes I, V Lapão L. Unlocking human-like conversations: Scoping review of automation techniques for personalized healthcare interventions using conversational agents. Int J Med Inform 2024; 185:105385. [PMID: 38428201] [DOI: 10.1016/j.ijmedinf.2024.105385]
Abstract
BACKGROUND Conversational agents (CAs) offer a sustainable approach to delivering personalized interventions and improving health outcomes. OBJECTIVES To review how human-like communication and automation techniques have been implemented in CAs for personalized healthcare interventions. The review is intended for designers and developers, computational scientists, behavioral scientists, and biomedical engineers who aim to develop CAs for healthcare interventions. METHODOLOGY A scoping review was conducted in accordance with the PRISMA Extension for Scoping Reviews. A search was performed in May 2023 in the Web of Science, PubMed, Scopus, and IEEE databases. Search results were extracted, duplicates removed, and the remaining results screened. Studies that contained personalized and automated CAs within the healthcare domain were included. Information on study characteristics and on human-like communication and automation techniques was extracted from articles that met the eligibility criteria. RESULTS Twenty-three studies were selected. These articles described CAs designed for patients either to self-manage their diseases (such as diabetes, mental health issues, cancer, asthma, COVID-19, and other chronic conditions) or to build healthy habits. The human-like communication characteristics studied encompassed system flexibility, personalization, and affective characteristics. Seven studies used rule-based models, eleven applied retrieval-based techniques for content delivery, five used AI models, and six integrated affective computing. CONCLUSIONS The increasing interest in employing CAs for personalized healthcare interventions is noteworthy. The adaptability of dialogue structures and personalization features is still limited. Unlocking human-like conversations may involve affective computing and generative AI to improve user engagement. Future research should focus on integrating holistic methods to describe the end user and on the safe use of generative models.
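The dialogue techniques surveyed above (rule-based, retrieval-based, generative) can be contrasted with a toy example. Below is a minimal sketch of one retrieval-based turn: the agent returns the canned response whose indexed phrasing best overlaps the user's utterance. All intents, phrasings, and responses here are hypothetical illustrations, not drawn from the reviewed studies.

```python
def tokenize(text: str) -> set[str]:
    """Lowercase bag-of-words tokenization."""
    return set(text.lower().split())

def jaccard(a: set[str], b: set[str]) -> float:
    """Jaccard similarity between two token sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

# A tiny response bank: candidate utterances indexed by example user phrasings.
# These self-management intents are made-up illustrations.
RESPONSE_BANK = {
    "how do i log my blood glucose": "You can record a glucose reading from the diary tab.",
    "i forgot to take my medication": "Missed doses happen; check your plan for what to do next.",
    "what should i eat today": "Your meal plan for today is available under the nutrition tab.",
}

def retrieve_response(user_utterance: str) -> str:
    """Return the response whose indexed phrasing best matches the user turn."""
    query = tokenize(user_utterance)
    best_key = max(RESPONSE_BANK, key=lambda k: jaccard(query, tokenize(k)))
    return RESPONSE_BANK[best_key]
```

A production CA would add dialogue state, intent confidence thresholds, and fallback handling; this sketch only illustrates why retrieval-based delivery is less flexible than generative approaches.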
Affiliation(s)
- Ana Martins
- Value for Health CoLAB, Lisboa 1150-190, Portugal; UNIDEMI, Department of Mechanical and Industrial Engineering, Nova School of Science and Technology, Caparica 2829-516, Portugal.
- Ana Londral
- Value for Health CoLAB, Lisboa 1150-190, Portugal; Comprehensive Health Research Center, Nova Medical School, Lisboa 1169-056, Portugal; Department of Physics, Nova School of Science and Technology, Caparica 2829-516, Portugal
- Isabel L Nunes
- UNIDEMI, Department of Mechanical and Industrial Engineering, Nova School of Science and Technology, Caparica 2829-516, Portugal; Laboratório Associado de Sistemas Inteligentes, Escola de Engenharia Universidade do Minho, Campus Azurém, 4800-058 Guimarães, Portugal
- Luís V Lapão
- UNIDEMI, Department of Mechanical and Industrial Engineering, Nova School of Science and Technology, Caparica 2829-516, Portugal; Laboratório Associado de Sistemas Inteligentes, Escola de Engenharia Universidade do Minho, Campus Azurém, 4800-058 Guimarães, Portugal; Comprehensive Health Research Center, Nova Medical School, Lisboa 1169-056, Portugal
13
Yari A, Fasih P, Hosseini Hooshiar M, Goodarzi A, Fattahi SF. Detection and classification of mandibular fractures in panoramic radiography using artificial intelligence. Dentomaxillofac Radiol 2024:twae018. [PMID: 38652576] [DOI: 10.1093/dmfr/twae018]
Abstract
PURPOSE This study aimed to assess the performance of a deep learning algorithm (YOLOv5) in detecting different mandibular fracture types in panoramic images. METHODS This study utilized a dataset of panoramic radiographic images with mandibular fractures. The dataset was divided into training, validation, and testing sets, containing 60%, 20%, and 20% of the images, respectively. An equal number of control panoramic radiographs that did not contain any fractures were also randomly distributed among the three sets. The YOLOv5 deep learning model was trained to detect six fracture types in the mandible based on anatomical location: symphysis, body, angle, ramus, condylar neck, and condylar head. Performance metrics of accuracy, precision, sensitivity (recall), specificity, Dice coefficient (F1 score), and area under the curve (AUC) were calculated for each class. RESULTS A total of 498 panoramic images containing 673 fractures were collected. Accuracy was highest in detecting body (96.21%) and symphysis (95.87%) fractures and lowest in angle (90.51%) fractures. The highest and lowest precision values were observed in detecting symphysis (95.45%) and condylar head (63.16%) fractures, respectively. Sensitivity was highest in body (96.67%) fractures and lowest in condylar head (80.00%) and condylar neck (81.25%) fractures. The highest specificity was noted in symphysis (98.96%), body (96.08%), and ramus (96.04%) fractures. The Dice coefficient and AUC were highest in detecting body fractures (0.921 and 0.942, respectively) and lowest in detecting condylar head fractures (0.706 and 0.812, respectively). CONCLUSION The trained algorithm achieved promising performance metrics for the automated detection of most fracture types, with the highest performance observed in detecting body and symphysis fractures. Machine learning can provide a potential tool for assisting clinicians in mandibular fracture diagnosis.
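The per-class metrics reported above (accuracy, precision, sensitivity/recall, specificity, Dice/F1) all derive from confusion-matrix counts. The sketch below computes them from such counts; the example numbers are illustrative, not the study's data.

```python
def detection_metrics(tp: int, fp: int, tn: int, fn: int) -> dict[str, float]:
    """Compute standard binary detection metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    sensitivity = tp / (tp + fn) if tp + fn else 0.0   # recall
    specificity = tn / (tn + fp) if tn + fp else 0.0
    dice = 2 * tp / (2 * tp + fp + fn) if 2 * tp + fp + fn else 0.0  # F1 score
    return {
        "accuracy": accuracy,
        "precision": precision,
        "sensitivity": sensitivity,
        "specificity": specificity,
        "dice": dice,
    }

# Illustrative counts for one hypothetical fracture class (not from the paper).
m = detection_metrics(tp=29, fp=3, tn=95, fn=1)
```

Note that the Dice coefficient equals the harmonic mean of precision and sensitivity, which is why a class with low precision (such as condylar head above) drags its Dice score down even when sensitivity is moderate.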
Affiliation(s)
- Amir Yari
- Department of Oral and Maxillofacial Surgery, School of Dentistry, Kashan University of Medical Sciences, Kashan, Iran
- Paniz Fasih
- Department of Prosthodontics, School of Dentistry, Kashan University of Medical Sciences, Kashan, Iran
- Mohammad Hosseini Hooshiar
- Dental Research Center, Dentistry Research Institute, Tehran University of Medical Sciences & Department of Periodontology, School of Dentistry, Tehran University of Medical Sciences, Tehran, Iran
- Ali Goodarzi
- Department of Oral and Maxillofacial Surgery, School of Dentistry, Shiraz University of Medical Sciences, Shiraz, Iran
14
Jeong D, Jung S, Yoon YE, Jeon J, Jang Y, Ha S, Hong Y, Cho J, Lee SA, Choi HM, Chang HJ. Artificial intelligence-enhanced automation for M-mode echocardiographic analysis: ensuring fully automated, reliable, and reproducible measurements. Int J Cardiovasc Imaging 2024:10.1007/s10554-024-03095-x. [PMID: 38652399] [DOI: 10.1007/s10554-024-03095-x]
Abstract
To enhance M-mode echocardiography's utility for measuring cardiac structures, we developed and evaluated an artificial intelligence (AI)-based automated analysis system for M-mode images through the aorta and left atrium [M-mode (Ao-LA)] and through the left ventricle [M-mode (LV)]. Our system, integrating two deep neural networks (DNNs) for view classification and image segmentation alongside an auto-measurement algorithm, was developed using 5,958 M-mode images [3,258 M-mode (Ao-LA) and 2,700 M-mode (LV)] drawn from a nationwide echocardiographic dataset collated from five tertiary hospitals. The performance of the view classification and segmentation DNNs was evaluated on 594 M-mode images, while automatic measurement accuracy was tested on a separate internal test set of 100 M-mode images as well as an external test set of 280 images (140 sinus rhythm and 140 atrial fibrillation). Performance evaluation showed an overall view classification accuracy of 99.8% and a segmentation Dice similarity coefficient of 94.3%. Within the internal test set, all automated measurements, including LA, Ao, and LV wall and cavity, agreed strongly with expert evaluations, exhibiting Pearson's correlation coefficients (PCCs) of 0.81-0.99. This performance persisted in the external test set for both sinus rhythm (PCC, 0.84-0.98) and atrial fibrillation (PCC, 0.70-0.97). Notably, the automatic measurements, which consistently offer multi-cardiac-cycle readings, showed a stronger correlation with averaged multi-cycle manual measurements than with those of a single representative cycle. Our AI-based system for automatic M-mode echocardiographic analysis demonstrated excellent accuracy, reproducibility, and speed. This automated approach has the potential to improve efficiency and reduce variability in clinical practice.
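The agreement metric used above is Pearson's correlation coefficient between automated and manual measurements. A minimal sketch of its computation follows; the measurement values are made-up examples, not data from the study.

```python
import math

def pearson(x: list[float], y: list[float]) -> float:
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical LV wall thickness readings (mm): automated vs averaged manual.
auto = [9.1, 10.4, 8.7, 11.2, 9.8]
manual = [9.0, 10.6, 8.5, 11.0, 10.1]
```

A PCC near 1.0 indicates strong linear agreement, though for method-comparison studies it is typically complemented by Bland-Altman analysis, since a high correlation can coexist with a systematic bias.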
Affiliation(s)
- Dawun Jeong
- Department of Internal Medicine, Graduate School of Medical Science, Brain Korea 21 Project, Yonsei University College of Medicine, Seoul, South Korea
- CONNECT-AI Research Center, Yonsei University College of Medicine, Seoul, South Korea
- Sunghee Jung
- CONNECT-AI Research Center, Yonsei University College of Medicine, Seoul, South Korea
- Ontact Health Inc, Seoul, South Korea
- Yeonyee E Yoon
- Ontact Health Inc, Seoul, South Korea.
- Cardiovascular Center and Division of Cardiology, Department of Internal Medicine, Seoul National University Bundang Hospital, Gumi-Ro 173, Bundang-Gu, Seongnam, Gyeonggi-Do, 13620, South Korea.
- Department of Internal Medicine, Seoul National University College of Medicine, Seoul, South Korea.
- Seongmin Ha
- CONNECT-AI Research Center, Yonsei University College of Medicine, Seoul, South Korea
- Ontact Health Inc, Seoul, South Korea
- Graduate School of Biomedical Engineering, Yonsei University College of Medicine, Seoul, South Korea
- Youngtaek Hong
- CONNECT-AI Research Center, Yonsei University College of Medicine, Seoul, South Korea
- Ontact Health Inc, Seoul, South Korea
- Hong-Mi Choi
- Cardiovascular Center and Division of Cardiology, Department of Internal Medicine, Seoul National University Bundang Hospital, Gumi-Ro 173, Bundang-Gu, Seongnam, Gyeonggi-Do, 13620, South Korea
- Department of Internal Medicine, Seoul National University College of Medicine, Seoul, South Korea
- Hyuk-Jae Chang
- CONNECT-AI Research Center, Yonsei University College of Medicine, Seoul, South Korea
- Ontact Health Inc, Seoul, South Korea
- Division of Cardiology, Severance Cardiovascular Hospital, Yonsei University College of Medicine, Yonsei University Health System, Seoul, South Korea
15
Sahu A, Kandaswamy S, Singh DV, Thyagarajan E, Parthasarathy AK, Naganna S, Dastidar TR. AI Driven Lab-on-chip Cartridge for Automated Urinalysis. SLAS Technol 2024:100137. [PMID: 38657705] [DOI: 10.1016/j.slast.2024.100137]
Abstract
After haematology, urinalysis is the most common biological test performed in clinical settings. Hence, a simplified workflow and automated analysis of urine elements are an absolute necessity. In the present work, a novel lab-on-chip cartridge, the Gravity Sedimentation Cartridge (GSC), for the automated analysis of urine elements is developed. The GSC consists of a capillary chamber that takes up a raw urine sample by capillary force and performs particle and cell enrichment within 5 min through a gravity sedimentation process for microscopic examination. Centrifugation, which is necessary for enrichment in the conventional method, was circumvented in this approach. The AI100 device (an image-based autoanalyzer) captures microscopic images from the cartridge at 40x magnification and uploads them to the cloud. These images were then auto-analyzed using an AI-based object detection model, which delivers the reports. The reports were available for expert review on a web-based platform that enables evidence-based telereporting. A comparative analysis of various analytical parameters of the data generated through the GSC (manual microscopy, telereporting, and the AI model) against the gold standard method was carried out. The presented approach makes it a viable product for automated urinalysis in point-of-care and large-scale settings.
Affiliation(s)
- Avinash Sahu
- SigTuple Technologies Pvt. Ltd, Bengaluru, Karnataka 560102, India
- Sharitha Naganna
- SigTuple Technologies Pvt. Ltd, Bengaluru, Karnataka 560102, India
16
Payne DL, Purohit K, Borrero WM, Chung K, Hao M, Mpoy M, Jin M, Prasanna P, Hill V. Performance of GPT-4 on the American College of Radiology In-training Examination: Evaluating Accuracy, Model Drift, and Fine-tuning. Acad Radiol 2024:S1076-6332(24)00213-7. [PMID: 38653599] [DOI: 10.1016/j.acra.2024.04.006]
Abstract
RATIONALE AND OBJECTIVES In our study, we evaluate GPT-4's performance on the American College of Radiology (ACR) 2022 Diagnostic Radiology In-Training Examination (DXIT). We perform multiple experiments across time points to assess for model drift, as well as after fine-tuning to assess for differences in accuracy. MATERIALS AND METHODS Questions were sequentially input into GPT-4 with a standardized prompt. Each answer was recorded and overall accuracy was calculated, as was logic-adjusted accuracy, and accuracy on image-based questions. This experiment was repeated several months later to assess for model drift, then again after the performance of fine-tuning to assess for changes in GPT's performance. RESULTS GPT-4 achieved 58.5% overall accuracy, lower than the PGY-3 average (61.9%) but higher than the PGY-2 average (52.8%). Adjusted accuracy was 52.8%. GPT-4 showed significantly higher (p = 0.012) confidence for correct answers (87.1%) compared to incorrect (84.0%). Performance on image-based questions was significantly poorer (p < 0.001) at 45.4% compared to text-only questions (80.0%), with adjusted accuracy for image-based questions of 36.4%. When the questions were repeated, GPT-4 chose a different answer 25.5% of the time and there was no change in accuracy. Fine-tuning did not improve accuracy. CONCLUSION GPT-4 performed between PGY-2 and PGY-3 levels on the 2022 DXIT, significantly poorer on image-based questions, and with large variability in answer choices across time points. Exploratory experiments in fine-tuning did not improve performance. This study underscores the potential and risks of using minimally-prompted general AI models in interpreting radiologic images as a diagnostic tool. Implementers of general AI radiology systems should exercise caution given the possibility of spurious yet confident responses.
Affiliation(s)
- David L Payne
- Stony Brook University Hospital Department of Radiology, 101 Nicolls Road, Stony Brook, New York 11794, USA (D.L.P., K.P., W.M.B., K.C., M.H., M.M., M.J.); Stony Brook University Department of Biomedical Informatics, 1 Lauterbur Drive, Stony Brook, New York 11794, USA (D.L.P., P.P.).
- Kush Purohit
- Stony Brook University Hospital Department of Radiology, 101 Nicolls Road, Stony Brook, New York 11794, USA (D.L.P., K.P., W.M.B., K.C., M.H., M.M., M.J.)
- Walter Morales Borrero
- Stony Brook University Hospital Department of Radiology, 101 Nicolls Road, Stony Brook, New York 11794, USA (D.L.P., K.P., W.M.B., K.C., M.H., M.M., M.J.)
- Katherine Chung
- Stony Brook University Hospital Department of Radiology, 101 Nicolls Road, Stony Brook, New York 11794, USA (D.L.P., K.P., W.M.B., K.C., M.H., M.M., M.J.)
- Max Hao
- Stony Brook University Hospital Department of Radiology, 101 Nicolls Road, Stony Brook, New York 11794, USA (D.L.P., K.P., W.M.B., K.C., M.H., M.M., M.J.)
- Mutshipay Mpoy
- Stony Brook University Hospital Department of Radiology, 101 Nicolls Road, Stony Brook, New York 11794, USA (D.L.P., K.P., W.M.B., K.C., M.H., M.M., M.J.)
- Michael Jin
- Stony Brook University Hospital Department of Radiology, 101 Nicolls Road, Stony Brook, New York 11794, USA (D.L.P., K.P., W.M.B., K.C., M.H., M.M., M.J.)
- Prateek Prasanna
- Stony Brook University Department of Biomedical Informatics, 1 Lauterbur Drive, Stony Brook, New York 11794, USA (D.L.P., P.P.)
- Virginia Hill
- Northwestern University Feinberg School of Medicine Department of Radiology, 676 North Clair Street, Chicago, Illinois 60611, USA (V.H.)
17
Maki JH, Patel NU, Ulrich EJ, Dhaouadi J, Jones RW. Part I: prostate cancer detection, artificial intelligence for prostate cancer and how we measure diagnostic performance: a comprehensive review. Curr Probl Diagn Radiol 2024:S0363-0188(24)00072-0. [PMID: 38658286] [DOI: 10.1067/j.cpradiol.2024.04.002]
Abstract
MRI has firmly established itself as a mainstay for the detection, staging, and surveillance of prostate cancer. Despite its success, prostate MRI continues to suffer from poor inter-reader variability and a low positive predictive value. The recent emergence of artificial intelligence (AI) to potentially improve diagnostic performance shows great promise. Understanding and interpreting the AI landscape and the ever-increasing research literature, however, is difficult, in part because of widely varying study designs and reporting techniques. This paper aims to address this need by first outlining the different types of AI used for the detection and diagnosis of prostate cancer, then deciphering how data collection methods, statistical analysis metrics (such as ROC and FROC analysis), and end points/outcomes (lesion detection vs. case diagnosis) affect performance and limit the ability to compare between studies. Finally, this work explores the need for appropriately enriched investigational datasets and proper ground truth, and provides guidance on how best to conduct AI prostate MRI studies. In a clinical study published in parallel, this suggested design was applied to a multiple-reader multiple-case evaluation of 150 bi-parametric prostate MRI studies across nine readers, measuring physician performance both with and without the use of a recently FDA-cleared AI software [1].
Affiliation(s)
- Jeffrey H Maki
- University of Colorado Anschutz Medical Center, Department of Radiology, 12401 E 17th Ave (MS L954), Aurora, Colorado, USA.
- Nayana U Patel
- University of New Mexico Department of Radiology, Albuquerque, NM, USA
18
Lee TK, Park EH, Lee MH. Medical ethics and artificial intelligence in neurosurgery - How should we prepare? World Neurosurg 2024:S1878-8750(24)00629-6. [PMID: 38641244] [DOI: 10.1016/j.wneu.2024.04.067]
Abstract
The development of artificial intelligence (AI) raises ethical concerns about its side effects on the attitudes and behaviors of clinicians and medical practitioners. The authors aim to understand the medical ethics of AI-based chatbots and to suggest coping strategies for an emerging landscape of increased access and potential ambiguity using AI. This study examines the medical ethics of AI-based chatbots (ChatGPT, Bing Chat, and Google's Bard) using multiple-choice questions. ChatGPT and Bard correctly answered all questions (5/5), while Bing Chat correctly answered only three of five questions. ChatGPT explained answers simply. Bing Chat explained answers with references, and Bard provided additional explanations with details. AI has the potential to revolutionize medical fields by improving diagnosis accuracy, surgical planning, and treatment outcomes. By analyzing large amounts of data, AI can identify patterns and make predictions, aiding neurosurgeons in making informed decisions for increased patient wellbeing. As AI usage increases, the number of cases involving AI-entrusted judgments will rise, leading to the gradual emergence of ethical issues across interdisciplinary fields. The medical field will be no exception. This study suggests the need for safety measures to regulate medical ethics in the context of advancing AI. A system should be developed to verify and predict pertinent issues.
Affiliation(s)
- Tae-Kyu Lee
- Department of Neurosurgery, Uijeongbu St. Mary's Hospital, School of Medicine, The Catholic University of Korea, Seoul, Korea
- Eun Ho Park
- Nicholas Cardinal Cheong Graduate School for Life, The Catholic University of Korea, Seoul, Korea
- Min Ho Lee
- Department of Neurosurgery, Uijeongbu St. Mary's Hospital, School of Medicine, The Catholic University of Korea, Seoul, Korea.
19
Fransen SJ, Roest C, Van Lohuizen QY, Bosma JS, Simonis FFJ, Kwee TC, Yakar D, Huisman H. Using deep learning to optimize the prostate MRI protocol by assessing the diagnostic efficacy of MRI sequences. Eur J Radiol 2024; 175:111470. [PMID: 38640822] [DOI: 10.1016/j.ejrad.2024.111470]
Abstract
PURPOSE To explore diagnostic deep learning for optimizing the prostate MRI protocol by assessing the diagnostic efficacy of MRI sequences. METHOD This retrospective study included 840 patients with a biparametric prostate MRI scan. The MRI protocol included a T2-weighted image, three DWI sequences (b50, b400, and b800 s/mm2), a calculated ADC map, and a calculated b1400 sequence. Two accelerated MRI protocols were simulated, using only two acquired b-values to calculate the ADC and b1400. Deep learning models were trained to detect prostate cancer lesions on the accelerated and full protocols. The diagnostic performances of the protocols were compared at the patient level with the area under the receiver operating characteristic curve (AUROC), using DeLong's test, and at the lesion level with the partial area under the free-response receiver operating characteristic curve (pAUFROC), using a permutation test. The results were validated by expert radiologists. RESULTS No significant differences in diagnostic performance were found between the accelerated protocols and the full bpMRI baseline. Omitting b800 reduced DWI scan time by 53%, with a performance difference of +0.01 AUROC (p = 0.20) and -0.03 pAUFROC (p = 0.45). Omitting b400 reduced DWI scan time by 32%, with a performance difference of -0.01 AUROC (p = 0.65) and +0.01 pAUFROC (p = 0.73). Expert radiologists corroborated the findings. CONCLUSIONS This study shows that deep learning can assess the diagnostic efficacy of MRI sequences by comparing prostate MRI protocols on diagnostic accuracy. Omitting either the b400 or the b800 DWI sequence can optimize the prostate MRI protocol by reducing scan time without compromising diagnostic quality.
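The patient-level comparison above rests on the AUROC, which can be computed directly from case scores via the rank-sum (Mann-Whitney) formulation: the probability that a random positive case outranks a random negative one, with ties counted one half. The sketch below uses illustrative scores and labels, not the study's data, and omits DeLong's significance test itself.

```python
def auroc(scores: list[float], labels: list[int]) -> float:
    """AUROC as P(random positive outranks random negative); ties count 0.5."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical case-level cancer likelihoods for a full vs accelerated protocol.
labels = [1, 1, 1, 0, 0, 0, 0]
full = [0.9, 0.8, 0.4, 0.5, 0.3, 0.2, 0.1]
fast = [0.9, 0.7, 0.5, 0.5, 0.4, 0.2, 0.1]
```

Comparing two such AUROCs on the same patients requires accounting for their correlation, which is exactly what DeLong's test does; a naive difference of the two point estimates says nothing about significance.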
Affiliation(s)
- Stefan J Fransen
- University Medical Centre Groningen, Department of Radiology, Hanzeplein 1, 9713 GZ, Groningen, the Netherlands.
- Christian Roest
- University Medical Centre Groningen, Department of Radiology, Hanzeplein 1, 9713 GZ, Groningen, the Netherlands
- Quintin Y Van Lohuizen
- University Medical Centre Groningen, Department of Radiology, Hanzeplein 1, 9713 GZ, Groningen, the Netherlands
- Joeran S Bosma
- University Medical Centre Nijmegen, DIAG, Geert Grooteplein Zuid 10, 6525 GA, Nijmegen, the Netherlands
- Frank F J Simonis
- Technical University Twente, TechMed Centre, Hallenweg 5, 7522 NH, Enschede, the Netherlands
- Thomas C Kwee
- University Medical Centre Groningen, Department of Radiology, Hanzeplein 1, 9713 GZ, Groningen, the Netherlands
- Derya Yakar
- University Medical Centre Groningen, Department of Radiology, Hanzeplein 1, 9713 GZ, Groningen, the Netherlands
- Henkjan Huisman
- University Medical Centre Nijmegen, DIAG, Geert Grooteplein Zuid 10, 6525 GA, Nijmegen, the Netherlands
20
Aljamaan F, Malki KH, Alhasan K, Jamal A, Altamimi I, Khayat A, Alhaboob A, Abdulmajeed N, Alshahrani FS, Saad K, Al-Eyadhy A, Al-Tawfiq JA, Temsah MH. ChatGPT-3.5 System Usability Scale early assessment among Healthcare Workers: Horizons of adoption in medical practice. Heliyon 2024; 10:e28962. [PMID: 38623218] [PMCID: PMC11016609] [DOI: 10.1016/j.heliyon.2024.e28962]
Abstract
Artificial intelligence (AI) chatbots, such as ChatGPT, have rapidly entered all domains of human life and hold the potential to transform the future of healthcare. However, their effective implementation hinges on healthcare workers' (HCWs) adoption and perceptions. This study aimed to evaluate HCWs' perceived usability of ChatGPT three months post-launch in Saudi Arabia using the System Usability Scale (SUS). A total of 194 HCWs participated in the survey. Forty-seven percent were satisfied with their usage, and 57% expressed moderate to high trust in its ability to generate medical decisions. Fifty-eight percent expected ChatGPT to improve patient outcomes, and 84% were optimistic about its potential to improve the future of healthcare practice. Participants also expressed concerns, such as the risk of harmful medical recommendations and medicolegal implications. The overall mean SUS score was 64.52, equivalent to a 50th percentile rank, indicating marginal acceptability of the system. The strongest positive predictors of high SUS scores were participants' belief in AI chatbots' benefits in medical research, self-rated familiarity with ChatGPT, and self-rated computer skills proficiency. Participants' learnability and ease-of-use scores correlated positively but weakly. Medical students and interns had significantly higher learnability scores than others, while ease-of-use scores correlated very strongly with participants' perception of ChatGPT's positive impact on the future of healthcare practice. Our findings highlight HCWs' marginal acceptance of ChatGPT at this early stage and their optimism about its potential to support future practice, especially in the research domain, together with more modest expectations of improved patient outcomes, particularly with regard to medical decisions. They also underscore the need for ongoing efforts to build trust and to address the ethical and legal concerns of AI in healthcare.
The study contributes to the growing body of literature on AI chatbots in healthcare, addresses strategies for their future improvement, and provides insights for policymakers and healthcare providers about the potential benefits and challenges of implementing them in practice.
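The SUS figures above follow the scale's standard scoring procedure, which the abstract does not detail: ten 1-5 Likert items, odd (positively worded) items scored as response minus 1, even (negatively worded) items as 5 minus response, with the sum scaled by 2.5 to a 0-100 range. A minimal sketch of that conventional formula (an illustration of the standard SUS, not code from the study):

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items (positive wording) contribute (response - 1);
    even-numbered items (negative wording) contribute (5 - response).
    The summed contributions are scaled by 2.5 onto a 0-100 range.
    """
    assert len(responses) == 10, "SUS uses exactly ten items"
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# All-neutral answers (3 on every item) land at the scale midpoint of 50.
print(sus_score([3] * 10))  # 50.0
```

On this 0-100 scale, the study's mean of 64.52 sits just above the range commonly described as marginally acceptable.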
Affiliation(s)
- Fadi Aljamaan
- College of Medicine, King Saud University, Riyadh 11362, Saudi Arabia
- Critical Care Department, College of Medicine, King Saud University, Riyadh 11362, Saudi Arabia
- Khalid H. Malki
- Research Chair of Voice, Swallowing, and Communication Disorders, Department of Otolaryngology, College of Medicine, King Saud University, Riyadh 11362, Saudi Arabia
- Khalid Alhasan
- College of Medicine, King Saud University, Riyadh 11362, Saudi Arabia
- Pediatric Department, College of Medicine, King Saud University Medical City, Riyadh 11362, Saudi Arabia
- Department of Kidney and Pancreas Transplant, Organ Transplant Center of Excellence, King Faisal Specialist Hospital and Research Center, Riyadh 11211, Saudi Arabia
- Amr Jamal
- College of Medicine, King Saud University, Riyadh 11362, Saudi Arabia
- Department of Family and Community Medicine, King Saud University Medical City, Riyadh 11362, Saudi Arabia
- Evidence-Based Health Care & Knowledge Translation Research Chair, Family & Community Medicine Department, College of Medicine, King Saud University, Riyadh 11362, Saudi Arabia
- Ibraheem Altamimi
- College of Medicine, King Saud University, Riyadh 11362, Saudi Arabia
- Afnan Khayat
- Health Information Management Department, Prince Sultan Military College of Health Sciences, Al Dhahran 34313, Saudi Arabia
- Ali Alhaboob
- College of Medicine, King Saud University, Riyadh 11362, Saudi Arabia
- Pediatric Department, College of Medicine, King Saud University Medical City, Riyadh 11362, Saudi Arabia
- Naif Abdulmajeed
- Pediatric Department, College of Medicine, King Saud University Medical City, Riyadh 11362, Saudi Arabia
- Pediatric Nephrology Department, Prince Sultan Military Medical City, Riyadh 11159, Saudi Arabia
- Fatimah S. Alshahrani
- College of Medicine, King Saud University, Riyadh 11362, Saudi Arabia
- Infectious Disease Division, Department of Medicine, King Saud University Medical City, Riyadh 11362, Saudi Arabia
- Khaled Saad
- Pediatric Department, Faculty of Medicine, Assiut University, Assiut 71516, Egypt
- Ayman Al-Eyadhy
- College of Medicine, King Saud University, Riyadh 11362, Saudi Arabia
- Pediatric Department, College of Medicine, King Saud University Medical City, Riyadh 11362, Saudi Arabia
- Jaffar A. Al-Tawfiq
- Specialty Internal Medicine and Quality Department, Johns Hopkins Aramco Healthcare, Dhahran 34465, Saudi Arabia
- Infectious Disease Division, Department of Medicine, Indiana University School of Medicine, Indianapolis, IN 46202, USA
- Infectious Disease Division, Department of Medicine, Johns Hopkins University School of Medicine, Baltimore, MD 21218, USA
- Mohamad-Hani Temsah
- College of Medicine, King Saud University, Riyadh 11362, Saudi Arabia
- Pediatric Department, College of Medicine, King Saud University Medical City, Riyadh 11362, Saudi Arabia
- Evidence-Based Health Care & Knowledge Translation Research Chair, Family & Community Medicine Department, College of Medicine, King Saud University, Riyadh 11362, Saudi Arabia
21
Cohen I, Sorin V, Lekach R, Raskin D, Segev M, Klang E, Eshed I, Barash Y. Artificial intelligence for detection of effusion and lipo-hemarthrosis in X-rays and CT of the knee. Eur J Radiol 2024; 175:111460. [PMID: 38608501 DOI: 10.1016/j.ejrad.2024.111460] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/04/2024] [Revised: 03/29/2024] [Accepted: 04/08/2024] [Indexed: 04/14/2024]
Abstract
BACKGROUND Traumatic knee injuries are challenging to diagnose accurately through radiography and, to a lesser extent, through CT, with fractures sometimes overlooked. Ancillary signs such as joint effusion or lipo-hemarthrosis are indicative of fractures, suggesting the need for further imaging. Artificial intelligence (AI) can automate image analysis, improving diagnostic accuracy and helping to prioritize clinically important X-ray or CT studies. OBJECTIVE To develop and evaluate an AI algorithm for detecting effusion of any kind in knee X-rays and selected CT images and for distinguishing between simple effusion and lipo-hemarthrosis indicative of intra-articular fractures. METHODS This retrospective study analyzed post-traumatic knee imaging from January 2016 to February 2023, categorizing images as lipo-hemarthrosis, simple effusion, or normal. It utilized the FishNet-150 algorithm for image classification, with class activation maps highlighting decision-influential regions. The AI's diagnostic accuracy was validated against a gold standard based on the evaluations of a radiologist with at least four years of experience. RESULTS The analysis included CT images from 515 patients and X-rays from 637 post-traumatic patients, identifying lipo-hemarthrosis, simple effusion, and normal findings. The AI showed an AUC of 0.81 for detecting any effusion, 0.78 for simple effusion, and 0.83 for lipo-hemarthrosis in X-rays, and 0.89, 0.89, and 0.91, respectively, in CTs. CONCLUSION The AI algorithm effectively detects knee effusion and differentiates between simple effusion and lipo-hemarthrosis in post-traumatic patients in both X-rays and selected CT images. Further studies are needed to validate these results.
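The AUC values reported above can be read as the probability that a randomly chosen positive case receives a higher classifier score than a randomly chosen negative one. A self-contained sketch of that rank-based interpretation (illustrative only, with hypothetical scores; this is not the authors' evaluation code):

```python
def auc(pos_scores, neg_scores):
    """Rank-based AUC: the probability that a random positive case
    scores higher than a random negative case (ties count as half)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical classifier scores for effusion-positive vs. normal studies:
print(auc([0.9, 0.8], [0.85, 0.4]))  # 0.75
```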
Affiliation(s)
- Israel Cohen
- Department of Diagnostic Imaging, Sheba Medical Center, Tel Hashomer, Israel; Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel.
- Vera Sorin
- Department of Diagnostic Imaging, Sheba Medical Center, Tel Hashomer, Israel; Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel.
- Ruth Lekach
- Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel; Department of Nuclear Medicine, Sourasky Medical Center, Tel-Aviv, Israel.
- Daniel Raskin
- Department of Diagnostic Imaging, Sheba Medical Center, Tel Hashomer, Israel; Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel.
- Maria Segev
- Department of Diagnostic Imaging, Sheba Medical Center, Tel Hashomer, Israel; Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel.
- Eyal Klang
- Department of Diagnostic Imaging, Sheba Medical Center, Tel Hashomer, Israel; Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel.
- Iris Eshed
- Department of Diagnostic Imaging, Sheba Medical Center, Tel Hashomer, Israel; Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel.
- Yiftach Barash
- Department of Diagnostic Imaging, Sheba Medical Center, Tel Hashomer, Israel; Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel.
22
Treat RM, Hsiao SK, Ismail A, Javan R. The US Government's Latest Presidential Executive Order on AI: Potential Implications in Radiology. J Am Coll Radiol 2024:S1546-1440(24)00355-7. [PMID: 38599359 DOI: 10.1016/j.jacr.2024.04.002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2023] [Revised: 03/28/2024] [Accepted: 04/04/2024] [Indexed: 04/12/2024]
Affiliation(s)
- Rachel Michelle Treat
- George Washington University of Health Sciences and Medicine, Washington, DC 20037, USA.
- Sabrina Kelly Hsiao
- George Washington University of Health Sciences and Medicine, Washington, DC 20037, USA.
- Ahmed Ismail
- George Washington University of Health Sciences and Medicine, Washington, DC 20037, USA.
- Ramin Javan
- Department of Radiology, George Washington University Hospital, Washington, DC 20037, USA.
23
Uddin MJ, Sherrell J, Emami A, Khaleghian M. Application of Artificial Intelligence and Sensor Fusion for Soil Organic Matter Prediction. Sensors (Basel) 2024; 24:2357. [PMID: 38610568 PMCID: PMC11014143 DOI: 10.3390/s24072357] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/16/2024] [Revised: 03/11/2024] [Accepted: 04/04/2024] [Indexed: 04/14/2024]
Abstract
Soil organic matter (SOM) is one of the best indicators for assessing soil health and understanding soil productivity and fertility. Therefore, measuring SOM content is a fundamental practice in soil science and agricultural research. The traditional (oven-drying) approach to measuring SOM is a costly, arduous, and time-consuming process. However, the integration of cutting-edge technology can significantly aid the prediction of SOM, presenting a promising alternative to traditional methods. In this study, we tested the hypothesis that an accurate estimate of SOM might be obtained by combining ground-based sensor-captured soil parameters and soil analysis data with drone images of the farm. The data were gathered using three different methods: ground-based sensors detected soil parameters such as temperature, pH, humidity, nitrogen, phosphorus, and potassium; aerial photos taken by UAVs provided the normalized difference vegetation index (NDVI); and Haney-test soil analysis reports were measured in a lab from collected samples. Our datasets combined the soil parameters collected using the ground-based sensors, the soil analysis reports, and the NDVI content of the farms, and we predicted SOM using different machine learning algorithms. We incorporated regression and ANOVA for analyzing the dataset and explored seven machine learning algorithms, namely linear regression, Ridge regression, Lasso regression, random forest regression, Elastic Net regression, support vector machine, and Stochastic Gradient Descent regression, to predict the soil organic matter content using the other parameters as predictors.
Affiliation(s)
- Anahita Emami
- College of Science and Engineering, Texas State University, San Marcos, TX 78666, USA; (M.J.U.); (J.S.)
- Meysam Khaleghian
- College of Science and Engineering, Texas State University, San Marcos, TX 78666, USA; (M.J.U.); (J.S.)
24
Filer CN. Artificial intelligence and natural product research. Nat Prod Res 2024:1-3. [PMID: 38588438 DOI: 10.1080/14786419.2024.2333048] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2024] [Accepted: 03/11/2024] [Indexed: 04/10/2024]
25
Bhardwaj N, Sood M, Gill SS. 3D-Bioprinting and AI-empowered Anatomical Structure Designing: A Review. Curr Med Imaging 2024; 20:CMIR-EPUB-139656. [PMID: 38591214 DOI: 10.2174/0115734056259274231019061329] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2023] [Revised: 09/05/2023] [Accepted: 09/23/2023] [Indexed: 04/10/2024]
Abstract
BACKGROUND Recent advancements and detailed studies in the field of 3D bioprinting have made it a promising avenue for addressing the organ shortage, in which many patients die awaiting transplantation. The main challenges bioprinting faces are precision during printing, vascularization, and cell proliferation. Overcoming these shortcomings requires experts from engineering, medicine, physics, and other fields, and, if accomplished, will significantly benefit humankind. OBJECTIVE This paper covers the general roadmap of the bioprinting process, the different kinds of bioinks, and the available bioprinters. It also covers the design of the anatomical structure, which is the first phase of the bioprinting process, and how AI has facilitated 3D printing in healthcare and associated applications such as medical modelling and disease modelling. METHODS The process of 3D bioprinting begins with the meticulous design of the anatomical structure under study, which forms the base of the entire bioprinting process. A significant application of 3D printing in healthcare is medical and disease modelling, which requires detecting disease in the anatomy and delineating it from the surrounding anatomy, creating a region of interest (ROI) with sophisticated segmentation software, and constructing 3D models of the diseased anatomy and its healthy anatomical surroundings. CONCLUSION The study concluded that bioprinting is the future answer to the worldwide organ transplantation crisis. Anatomical accuracy is an important aspect that must be considered when producing 3D models. The reproduction of patient-specific 3D models requires human-rights and ethics approval under the four principles of healthcare ethics: autonomy, non-maleficence, beneficence, and justice.
Affiliation(s)
- Neha Bhardwaj
- Department of Electronics & Communication Engineering, National Institute of Technical Teachers Training and Research, India
- Meenakshi Sood
- CDC Department, National Institute of Technical Teachers Training and Research, India
- Sandeep Singh Gill
- IMEE Department, National Institute of Technical Teachers Training and Research, India
26
Chow BJW, Fayyazifar N, Balamane S, Saha N, Clarkin O, Green M, Maiorana A, Golian M, Dwivedi G. Interpreting Wide-Complex Tachycardia using Artificial Intelligence. Can J Cardiol 2024:S0828-282X(24)00296-4. [PMID: 38588794 DOI: 10.1016/j.cjca.2024.03.027] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2023] [Revised: 03/19/2024] [Accepted: 03/31/2024] [Indexed: 04/10/2024] Open
Abstract
BACKGROUND Adopting artificial intelligence (AI) in medicine may improve the speed and accuracy of patient diagnosis. We sought to develop an AI algorithm to interpret wide-complex tachycardia (WCT) electrocardiograms (ECGs) and to compare its diagnostic accuracy with that of cardiologists. METHODS Using 3330 WCT ECGs (2906 SVT and 424 VT), we created a training/validation set (3131 ECGs) and a test set (199 ECGs). A convolutional neural network (CNN) using a modification of differentiable architecture search (DARTS), ZeroLess-DARTS, was developed to differentiate between SVT and VT. RESULTS The mean accuracy of electrophysiology (EP) cardiologists was 92.5%, with a sensitivity of 91.7%, specificity of 93.4%, positive predictive value of 93.7%, and negative predictive value of 91.7%. Non-EP cardiologists had an accuracy of 73.2 ± 14.4%, with a sensitivity, specificity, positive predictive value, and negative predictive value of 59.8 ± 18.2%, 93.8 ± 3.7%, 93.6 ± 2.3%, and 73.2 ± 14.4%, respectively. The AI had sensitivity and accuracy (91.9% and 93.0%, respectively) superior to those of non-EP cardiologists and performance similar to that of EP cardiologists. The mean time to interpret each ECG varied between 10.1 and 13.8 seconds for EP cardiologists and between 3.1 and 16.6 seconds for non-EP cardiologists, whereas the AI required a mean of 0.0092 ± 0.0035 seconds per ECG. CONCLUSIONS The AI appears to diagnose WCT with accuracy superior to that of non-EP cardiologists and similar to that of electrophysiologists. Using AI to assist with ECG interpretation may improve patient care.
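The sensitivity, specificity, and predictive values quoted above all derive from a 2x2 confusion matrix of true/false positives and negatives. A minimal sketch of those standard definitions, using hypothetical counts rather than the study's data:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic-accuracy metrics from a 2x2 confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Hypothetical counts for a 200-ECG test set:
m = diagnostic_metrics(tp=90, fp=10, tn=90, fn=10)
print(m["sensitivity"], m["specificity"])  # 0.9 0.9
```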
Affiliation(s)
- Benjamin J W Chow
- University of Ottawa Heart Institute, Canada, Department of Medicine (Cardiology); University of Ottawa, Canada, Department of Radiology.
- Najmeh Fayyazifar
- Harry Perkins Institute of Medical Research, The University of Western Australia, Australia; Fiona Stanley Hospital, Department of Cardiology, Australia
- Saad Balamane
- University of Ottawa Heart Institute, Canada, Department of Medicine (Cardiology)
- Nishita Saha
- University of Ottawa Heart Institute, Canada, Department of Medicine (Cardiology)
- Owen Clarkin
- University of Ottawa Heart Institute, Canada, Department of Medicine (Cardiology)
- Martin Green
- University of Ottawa Heart Institute, Canada, Department of Medicine (Cardiology)
- Andrew Maiorana
- Fiona Stanley Hospital, Murdoch, WA, Australia; School of Allied Health, Faculty of Health Sciences, Curtin University, Australia
- Mehrdad Golian
- University of Ottawa Heart Institute, Canada, Department of Medicine (Cardiology)
- Girish Dwivedi
- Harry Perkins Institute of Medical Research, The University of Western Australia, Australia; Fiona Stanley Hospital, Department of Cardiology, Australia
27
Paule-Mercado MC, Rabaneda-Bueno R, Porcal P, Kopacek M, Huneau F, Vystavna Y. Climate and land use shape the water balance and water quality in selected European lakes. Sci Rep 2024; 14:8049. [PMID: 38580788 PMCID: PMC10997787 DOI: 10.1038/s41598-024-58401-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2024] [Accepted: 03/28/2024] [Indexed: 04/07/2024] Open
Abstract
This study provides insights into factors that influence the water balance of selected European lakes, mainly in Central Europe, and their implications for water quality. An analysis of isotopic, chemical and land use data using statistical and artificial intelligence models showed that climate, particularly air temperature and precipitation, played a key role in intensifying evaporation losses from the lakes. Water balance was also affected by catchment factors, notably groundwater table depth. The study shows that lakes at lower altitudes with shallow depths and catchments dominated by urban or crop cover were more sensitive to water balance changes. These lakes had higher evaporation-to-inflow ratios and increased concentrations of total nitrogen in the water. On the other hand, lakes at higher elevations with deeper depths and prevailing forest cover in the catchment were less sensitive to water balance changes. These lakes, which are often of glacial origin, were characterized by lower evaporation losses and thus better water quality in terms of total nitrogen concentrations. Understanding connections between water balance and water quality is crucial for effective lake management and the preservation of freshwater ecosystems.
Affiliation(s)
- Ma Cristina Paule-Mercado
- Biology Centre, Institute of Hydrobiology, Academy of Sciences of the Czech Republic, Na Sádkách 7, 37005, České Budějovice, Czech Republic
- Rubén Rabaneda-Bueno
- Biology Centre, Institute of Hydrobiology, Academy of Sciences of the Czech Republic, Na Sádkách 7, 37005, České Budějovice, Czech Republic
- Petr Porcal
- Biology Centre, Institute of Hydrobiology, Academy of Sciences of the Czech Republic, Na Sádkách 7, 37005, České Budějovice, Czech Republic
- Marek Kopacek
- Biology Centre, Institute of Hydrobiology, Academy of Sciences of the Czech Republic, Na Sádkách 7, 37005, České Budějovice, Czech Republic
- Faculty of Science, University of South Bohemia in České Budějovice, Branišovská 1760, 370 05, České Budějovice, Czech Republic
- Frederic Huneau
- Département d'Hydrogéologie, Université de Corse Pascal Paoli, BP52, 20250, Corte, France
- Centre National de la Recherche Scientifique (CNRS), UMR 6134 SPE, 20250, Corte, France
- Yuliya Vystavna
- Biology Centre, Institute of Hydrobiology, Academy of Sciences of the Czech Republic, Na Sádkách 7, 37005, České Budějovice, Czech Republic.
28
Tavares J. Application of Artificial Intelligence in Healthcare: The Need for More Interpretable Artificial Intelligence. ACTA MEDICA PORT 2024. [PMID: 38577873 DOI: 10.20344/amp.20469] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2023] [Accepted: 12/27/2023] [Indexed: 04/06/2024]
Affiliation(s)
- Jorge Tavares
- NOVA Information Management School (NOVA IMS). Universidade NOVA de Lisboa. Lisbon. Portugal
29
Cold KM, Vamadevan A, Vilmann AS, Svendsen MBS, Konge L, Bjerrum F. Computer-aided quality assessment of endoscopist competence during colonoscopy: A systematic review. Gastrointest Endosc 2024:S0016-5107(24)00219-0. [PMID: 38580134 DOI: 10.1016/j.gie.2024.04.004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/16/2024] [Revised: 03/28/2024] [Accepted: 04/01/2024] [Indexed: 04/07/2024]
Abstract
BACKGROUND AND AIMS Endoscopists' competence can vary widely, as shown by the variation in adenoma detection rate (ADR). Computer-aided quality assessment (CAQ) can automatically assess performance during individual procedures. This review aims to identify and describe different CAQ systems for colonoscopy. METHODS A systematic review of the literature was conducted using MEDLINE, EMBASE, and Scopus, based on three blocks of terms reflecting the inclusion criteria: colonoscopy, competence assessment, and automatic evaluation. Articles were systematically reviewed by two reviewers, first by abstract and then in full text. Methodological quality was assessed using the Medical Education Research Study Quality Instrument (MERSQI). RESULTS 12,575 studies were identified; 6,831 remained after removal of duplicates, and 6,806 did not meet the eligibility criteria and were excluded, leaving thirteen studies for the final analysis. Five categories of CAQ systems were identified: withdrawal speedometer (seven studies), scope movement analysis (three studies), effective withdrawal time (one study), fold examination quality (one study), and visual gaze pattern (one study). The withdrawal speedometer was the only CAQ system whose feedback was tested by examining changes in ADR: three studies observed an improvement in ADR, and two did not. The methodological quality of the studies was high (mean MERSQI 15.2 points, maximum 18 points). CONCLUSIONS Thirteen studies developed or tested CAQ systems, most frequently by correlating them with ADR. Only five studies tested feedback by implementing the CAQ system. A meta-analysis was not possible because of the heterogeneous study designs, and more studies are warranted.
Affiliation(s)
- Kristoffer Mazanti Cold
- Copenhagen Academy for Medical Education and Simulation (CAMES), Center for HR & Education, the Capital Region of Denmark; Department of Clinical Medicine, Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark.
- Anishan Vamadevan
- Copenhagen Academy for Medical Education and Simulation (CAMES), Center for HR & Education, the Capital Region of Denmark
- Andreas Slot Vilmann
- Copenhagen Academy for Medical Education and Simulation (CAMES), Center for HR & Education, the Capital Region of Denmark; Department of Clinical Medicine, Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark; Gastrounit, Surgical section, Copenhagen University Hospital - Herlev and Gentofte, Herlev, Denmark
- Morten Bo Søndergaard Svendsen
- Copenhagen Academy for Medical Education and Simulation (CAMES), Center for HR & Education, the Capital Region of Denmark; Department of Computer Science, Faculty of Science, University of Copenhagen, Copenhagen, Denmark
- Lars Konge
- Copenhagen Academy for Medical Education and Simulation (CAMES), Center for HR & Education, the Capital Region of Denmark; Department of Clinical Medicine, Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
- Flemming Bjerrum
- Copenhagen Academy for Medical Education and Simulation (CAMES), Center for HR & Education, the Capital Region of Denmark; Department of Clinical Medicine, Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark; Gastrounit, Surgical section, Copenhagen University Hospital - Amager and Hvidovre, Hvidovre, Denmark
30
Ilkic J, Milovanovic M, Marinkovic V. Prospective systematic risk analysis of the digital technology use within pharmaceutical care. J Am Pharm Assoc (2003) 2024:102081. [PMID: 38579967 DOI: 10.1016/j.japh.2024.102081] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2024] [Revised: 03/16/2024] [Accepted: 03/28/2024] [Indexed: 04/07/2024]
Abstract
BACKGROUND Digital technologies are present in every phase of the drug lifecycle, from drug design and development to dispensing and use. However, given the rapid development and implementation of digital solutions, their monitoring, evaluation, and risk assessment are limited and lacking. OBJECTIVE This research aims to identify potential errors, to quantify and prioritize the associated risks in the context of certain technologies used in pharmaceutical care, and to define corrective measures to improve patient safety and the quality of pharmaceutical care. METHODS A ten-member multidisciplinary team conducted a Failure Mode & Effect Analysis (FMEA) to identify critical risks, their causes, and their effects, and to develop corrective measures within the selected digital health components: telepharmacy, mHealth, artificial intelligence (AI), and software infrastructure and systems. Critical risks were determined by calculating risk priority numbers (RPNs) from severity, occurrence, and detectability scores. RESULTS This study identified 42 risks across the 4 components. After calculating the RPNs against the threshold RPN (RPN = 30), 8 critical risks were identified. Corrective measures were proposed for these failure modes, after which the risks were re-evaluated (the RPN sum was reduced from 414 to 156). The risk with the highest RPN value was Internet/identity fraud; the rest included inadequate and incomplete data entry and management, flawed implementation, human and technology errors, and lack of transparency, personalization, and infrastructure. For the critical risks, 42 different causes were recognized at the system, technological, and individual levels, and their effects were discussed in terms of patient safety and business management in pharmacies.
CONCLUSION Digitalization of pharmaceutical practice promises greater effectiveness of pharmaceutical care, but to achieve this, efforts, resources, and initiatives must be directed toward timely identification of problems, appropriate monitoring, and building adequate infrastructure that can support the safe implementation of digital tools and services despite the swift development of innovations.
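The FMEA triage described above rests on a simple product: RPN = severity x occurrence x detectability, with risks at or above the threshold (RPN = 30 in this study) treated as critical. A minimal sketch of that calculation, using hypothetical failure modes and scores (the study's actual scoring is not reproduced here), and assuming scores at or above the threshold count as critical:

```python
def risk_priority_number(severity, occurrence, detectability):
    """RPN = severity x occurrence x detectability (each commonly 1-10)."""
    return severity * occurrence * detectability

def critical_risks(risks, threshold=30):
    """Return the names of risks whose RPN meets or exceeds the threshold."""
    return [name for name, (s, o, d) in risks.items()
            if risk_priority_number(s, o, d) >= threshold]

# Hypothetical failure modes illustrating the triage:
risks = {
    "internet/identity fraud": (8, 3, 2),  # RPN 48 -> critical
    "incomplete data entry":   (5, 3, 2),  # RPN 30 -> critical
    "minor UI glitch":         (3, 2, 2),  # RPN 12 -> below threshold
}
print(critical_risks(risks))  # ['internet/identity fraud', 'incomplete data entry']
```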
Affiliation(s)
- Jovana Ilkic
- PhD student, Department of Social Pharmacy and Pharmaceutical Legislation, Faculty of Pharmacy, University of Belgrade, Belgrade, Serbia.
- Milos Milovanovic
- Professor, Department of Information Technology, Faculty of Organizational Sciences, University of Belgrade, Belgrade, Serbia
- Valentina Marinkovic
- Professor, Department of Social Pharmacy and Pharmaceutical Legislation, Faculty of Pharmacy, University of Belgrade, Belgrade, Serbia
31
Zarra F, Gandhi DN, Karki A, Chaurasia B. Letter: Chat-GPT on brain tumors: An examination of Artificial Intelligence/Machine Learning's ability to provide diagnoses and treatment plans for example neuro-oncology cases. Clin Neurol Neurosurg 2024; 240:108270. [PMID: 38604084 DOI: 10.1016/j.clineuro.2024.108270] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2024] [Accepted: 03/29/2024] [Indexed: 04/13/2024]
Affiliation(s)
- Francisco Zarra
- Department of Neurosurgery, University of Buenos Aires School of Medicine, Buenos Aires, Argentina.
- Aakriti Karki
- Department of Psychiatry, Jalalabad Ragib Rabeya Medical College Hospital, Bangladesh.
- Bipin Chaurasia
- Department of Neurosurgery, Neurosurgery Clinic, Birgunj, Nepal.
32
Solomonov A, Kozell A, Shimanovich U. Designing Multifunctional Biomaterials via Protein Self-Assembly. Angew Chem Int Ed Engl 2024; 63:e202318365. [PMID: 38206201 DOI: 10.1002/anie.202318365] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2023] [Revised: 12/27/2023] [Accepted: 01/05/2024] [Indexed: 01/12/2024]
Abstract
Protein self-assembly is a fundamental biological process in which proteins spontaneously organize into complex and functional structures without external direction. This process is crucial for the formation of various biological functionalities. However, when protein self-assembly fails, it can trigger the development of multiple disorders, making an understanding of this phenomenon extremely important. Until recently, protein self-assembly was linked solely to biological function or malfunction; over the past decade or two, however, it has also been found to hold promising potential as an alternative route for fabricating materials for biomedical applications. It is therefore necessary and timely to summarize the key aspects of protein self-assembly: how the protein structure and the self-assembly conditions (chemical environments, kinetics, and the physicochemical characteristics of protein complexes) can be utilized to design biomaterials. This minireview focuses on the basic concepts of forming supramolecular structures and the existing routes for modification. We then compare the applicability of different approaches, including compartmentalization and self-assembly monitoring. Finally, based on the cutting-edge progress made in recent years, we summarize the current knowledge about tailoring a final function by introducing changes in self-assembly and link it to biomaterials' performance.
Affiliation(s)
- Aleksei Solomonov
- Department of Molecular Chemistry and Materials Science, Weizmann Institute of Science, 234 Herzl st., Rehovot, 76100, Israel
- Anna Kozell
- Department of Molecular Chemistry and Materials Science, Weizmann Institute of Science, 234 Herzl st., Rehovot, 76100, Israel
- Ulyana Shimanovich
- Department of Molecular Chemistry and Materials Science, Weizmann Institute of Science, 234 Herzl st., Rehovot, 76100, Israel
33
Hollman JH, Cloud-Biebl BA, Krause DA, Calley DQ. Detecting Artificial Intelligence-Generated Personal Statements in Professional Physical Therapist Education Program Applications: A Lexical Analysis. Phys Ther 2024; 104:pzae006. [PMID: 38243411 DOI: 10.1093/ptj/pzae006] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/17/2023] [Revised: 09/21/2023] [Accepted: 12/20/2023] [Indexed: 01/21/2024]
Abstract
OBJECTIVE The objective of this study was to compare the lexical sophistication of personal statements submitted by professional physical therapist education program applicants with those generated by OpenAI's Chat Generative Pretrained Transformer (ChatGPT). METHODS Personal statements from 152 applicants and 20 generated by ChatGPT were collected, all in response to a standardized prompt. These statements were coded numerically, then analyzed with recurrence quantification analyses (RQAs). RQA indices including recurrence, determinism, max line, mean line, and entropy were compared with t-tests. A receiver operating characteristic curve analysis was used to examine the discriminative validity of RQA indices in distinguishing between ChatGPT- and human-generated personal statements. RESULTS ChatGPT-generated personal statements exhibited higher recurrence, determinism, mean line, and entropy values than did human-generated personal statements. The strongest discriminator was a 13.04% determinism rate, which differentiated ChatGPT- from human-generated writing samples with 70% sensitivity and 91.4% specificity (positive likelihood ratio = 8.14). Personal statements with determinism rates exceeding 13% were 8 times more likely to have been ChatGPT generated than human generated. CONCLUSION Although RQA can distinguish artificial intelligence (AI)-generated text from human-generated text, the distinction is not absolute. Thus, AI introduces additional challenges to the authenticity and utility of personal statements. Admissions committees, along with organizations providing guidelines on professional physical therapist education program admissions, should reevaluate the role of personal statements in applications. IMPACT As AI-driven chatbots like ChatGPT complicate the evaluation of personal statements, RQA emerges as a potential tool for admissions committees to detect AI-generated statements.
Affiliation(s)
- John H Hollman
- Program in Physical Therapy, Department of Physical Medicine and Rehabilitation, Mayo Clinic School of Health Sciences, Mayo Clinic College of Medicine and Science, Mayo Clinic, Rochester, Minnesota, United States
- Beth A Cloud-Biebl
- Program in Physical Therapy, Department of Physical Medicine and Rehabilitation, Mayo Clinic School of Health Sciences, Mayo Clinic College of Medicine and Science, Mayo Clinic, Rochester, Minnesota, United States
- David A Krause
- Program in Physical Therapy, Department of Physical Medicine and Rehabilitation, Mayo Clinic School of Health Sciences, Mayo Clinic College of Medicine and Science, Mayo Clinic, Rochester, Minnesota, United States
- Darren Q Calley
- Program in Physical Therapy, Department of Physical Medicine and Rehabilitation, Mayo Clinic School of Health Sciences, Mayo Clinic College of Medicine and Science, Mayo Clinic, Rochester, Minnesota, United States
34
Upreti G. Advancements in Skull Base Surgery: Navigating Complex Challenges with Artificial Intelligence. Indian J Otolaryngol Head Neck Surg 2024; 76:2184-2190. [PMID: 38566692 PMCID: PMC10982213 DOI: 10.1007/s12070-023-04415-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/16/2023] [Accepted: 11/28/2023] [Indexed: 04/04/2024] Open
Abstract
Purpose This narrative review examines the evolving landscape of artificial intelligence (AI) integration in skull base surgery, exploring its multifaceted applications and impact on various aspects of patient care. Methods An extensive literature review was conducted to gather insights into the role of AI in skull base surgery. Key aspects such as diagnosis, image analysis, surgical planning, navigation, predictive analytics, clinical decision-making, postoperative care, rehabilitation, and virtual simulations were explored. Studies were sourced from PubMed using a keyword search strategy for relevant headings and sub-headings, supplemented by cross-referencing. Results AI enhances early diagnosis through diagnostic algorithms that guide investigations based on clinical and radiological data. AI-driven image analysis enables accurate segmentation of intricate structures and extraction of radiomics data, optimizing preoperative planning and predicting treatment response. In surgical planning, AI aids in identifying critical structures, leading to precise interventions. Real-time AI-based navigation offers adaptive guidance, enhancing surgical accuracy and safety. Predictive analytics empower risk assessment, treatment planning, and outcome prediction. AI-driven clinical decision support systems optimize resource allocation and support shared decision-making. Postoperative care benefits from AI's monitoring capabilities and personalized rehabilitation protocols. Virtual simulations powered by AI expedite skill development and decision-making in complex procedures. Conclusion AI contributes to accurate diagnosis, surgical planning, navigation, predictive analysis, and postoperative care. Ethical considerations and data quality assurance are essential to responsible AI implementation. While AI serves as a valuable complement to clinical expertise, its potential to enhance decision-making, precision, and efficiency in skull base surgery is evident.
Affiliation(s)
- Garima Upreti
- Department of Otorhinolaryngology, All India Institute of Medical Sciences, Rajkot, Gujarat, India
35
Sneag DB, Queler SC, Campbell G, Colucci PG, Lin J, Lin Y, Wen Y, Li Q, Tan ET. Optimized 3D brachial plexus MR neurography using deep learning reconstruction. Skeletal Radiol 2024; 53:779-789. [PMID: 37914895 DOI: 10.1007/s00256-023-04484-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/27/2023] [Revised: 10/12/2023] [Accepted: 10/14/2023] [Indexed: 11/03/2023]
Abstract
OBJECTIVE To evaluate whether 'fast,' unilateral, brachial plexus, 3D magnetic resonance neurography (MRN) acquisitions with deep learning reconstruction (DLR) provide similar image quality to longer, 'standard' scans without DLR. MATERIALS AND METHODS An IRB-approved prospective cohort of 30 subjects (13F; mean age = 50.3 ± 17.8y) underwent clinical brachial plexus 3.0 T MRN with 3D oblique-coronal STIR-T2-weighted-FSE. 'Standard' and 'fast' scans (time reduction = 23-48%, mean = 33%) were reconstructed without and with DLR. Evaluation of signal-to-noise ratio (SNR) and edge sharpness was performed for 4 image stacks: 'standard non-DLR,' 'standard DLR,' 'fast non-DLR,' and 'fast DLR.' Three raters qualitatively evaluated 'standard non-DLR' and 'fast DLR' for i) bulk motion (4-point scale), ii) nerve conspicuity of proximal and distal suprascapular and axillary nerves (5-point scale), and iii) nerve signal intensity, size, architecture, and presence of a mass (binary). ANOVA or Wilcoxon signed rank test compared differences. Gwet's agreement coefficient (AC2) assessed inter-rater agreement. RESULTS Quantitative SNR and edge sharpness were superior for DLR versus non-DLR (SNR by + 4.57 to + 6.56 [p < 0.001] for 'standard' and + 4.26 to + 4.37 [p < 0.001] for 'fast;' sharpness by + 0.23 to + 0.52/pixel for 'standard' [p < 0.018] and + 0.21 to + 0.25/pixel for 'fast' [p < 0.003]) and similar between 'standard non-DLR' and 'fast DLR' (SNR: p = 0.436-1, sharpness: p = 0.067-1). Qualitatively, 'standard non-DLR' and 'fast DLR' had similar motion artifact, as well as nerve conspicuity, signal intensity, size and morphology, with high inter-rater agreement (AC2: 'standard' = 0.70-0.98, 'fast DLR' = 0.69-0.97). CONCLUSION DLR applied to faster, 3D MRN acquisitions provides similar image quality to standard scans. A faster, DL-enabled protocol may replace currently optimized non-DL protocols.
Affiliation(s)
- D B Sneag
- Department of Radiology and Imaging, Hospital for Special Surgery, 535 E. 70th St., New York, NY, 10021, USA
- Weill Medical College of Cornell, New York, NY, USA
- S C Queler
- Department of Radiology and Imaging, Hospital for Special Surgery, 535 E. 70th St., New York, NY, 10021, USA
- College of Medicine, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
- G Campbell
- Department of Radiology and Imaging, Hospital for Special Surgery, 535 E. 70th St., New York, NY, 10021, USA
- P G Colucci
- Department of Radiology and Imaging, Hospital for Special Surgery, 535 E. 70th St., New York, NY, 10021, USA
- J Lin
- Department of Radiology and Imaging, Hospital for Special Surgery, 535 E. 70th St., New York, NY, 10021, USA
- Y Lin
- Department of Radiology and Imaging, Hospital for Special Surgery, 535 E. 70th St., New York, NY, 10021, USA
- Y Wen
- GE Healthcare, Waukesha, WI, USA
- Q Li
- Department of Radiology and Imaging, Hospital for Special Surgery, 535 E. 70th St., New York, NY, 10021, USA
- E T Tan
- Department of Radiology and Imaging, Hospital for Special Surgery, 535 E. 70th St., New York, NY, 10021, USA
36
Bajaj S, Gandhi D, Nayar D. Potential Applications and Impact of ChatGPT in Radiology. Acad Radiol 2024; 31:1256-1261. [PMID: 37802673 DOI: 10.1016/j.acra.2023.08.039] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2023] [Revised: 08/15/2023] [Accepted: 08/28/2023] [Indexed: 10/08/2023]
Abstract
Radiology has always gone hand-in-hand with technology, and artificial intelligence (AI) is not new to the field. While various AI devices and algorithms have already been integrated into the daily clinical practice of radiology, with applications ranging from scheduling patient appointments to detecting and diagnosing certain clinical conditions on imaging, the use of natural language processing and large language model-based software has long been under discussion. Algorithms like ChatGPT can help improve patient outcomes, increase the efficiency of radiology interpretation, and aid the overall workflow of radiologists; here we discuss some of their potential applications.
Affiliation(s)
- Suryansh Bajaj
- Department of Radiology, University of Arkansas for Medical Sciences, Little Rock, Arkansas 72205 (S.B.)
- Darshan Gandhi
- Department of Diagnostic Radiology, University of Tennessee Health Science Center, Memphis, Tennessee 38103 (D.G.)
- Divya Nayar
- Department of Neurology, University of Arkansas for Medical Sciences, Little Rock, Arkansas 72205 (D.N.)
37
Lechien JR. Generative artificial intelligence in otolaryngology-head and neck surgery editorial: be an actor of the future or follower. Eur Arch Otorhinolaryngol 2024; 281:2051-2053. [PMID: 38407611 DOI: 10.1007/s00405-024-08579-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/27/2024]
Affiliation(s)
- Jerome R Lechien
- Division of Laryngology and Broncho-Esophagology, Department of Otolaryngology-Head Neck Surgery, EpiCURA Hospital, UMONS Research Institute for Health Sciences and Technology, University of Mons (UMons), Mons, Belgium.
- Department of Otorhinolaryngology and Head and Neck Surgery, Foch Hospital, School of Medicine, Phonetics and Phonology Laboratory (UMR 7018 CNRS, Université Sorbonne Nouvelle/Paris 3), Paris, France.
- Department of Otorhinolaryngology and Head and Neck Surgery, CHU de Bruxelles, CHU Saint-Pierre, School of Medicine, Brussels, Belgium.
- Polyclinique Elsan de Poitiers, Poitiers, France.
- Department of Human Anatomy and Experimental Oncology, Faculty of Medicine, UMONS Research Institute for Health Sciences and Technology, Avenue du Champ de Mars, 6, B7000, Mons, Belgium.
38
Partiot E, Gorda B, Lutz W, Lebrun S, Khalfi P, Mora S, Charlot B, Majzoub K, Desagher S, Ganesh G, Colomb S, Gaudin R. Organotypic culture of human brain explants as a preclinical model for AI-driven antiviral studies. EMBO Mol Med 2024; 16:1004-1026. [PMID: 38472366 PMCID: PMC11018746 DOI: 10.1038/s44321-024-00039-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/05/2023] [Revised: 02/02/2024] [Accepted: 02/05/2024] [Indexed: 03/14/2024] Open
Abstract
Viral neuroinfections represent a major health burden for which the development of antivirals is needed. Antiviral compounds that target the consequences of a brain infection (symptomatic treatment) rather than the cause (direct-acting antivirals) constitute a promising mitigation strategy that needs to be investigated in relevant models. However, physiological surrogates mimicking an adult human cortex are lacking, limiting our understanding of the mechanisms associated with virus-induced neurological disorders. Here, we optimized the Organotypic culture of Post-mortem Adult human cortical Brain explants (OPAB) as a preclinical platform for Artificial Intelligence (AI)-driven antiviral studies. OPAB shows robust viability over weeks, well-preserved 3D cytoarchitecture, viral permissiveness, and spontaneous local field potential (LFP). Using LFP as a surrogate for neurohealth, we developed a machine learning framework to predict with high confidence the infection status of OPAB. As a proof-of-concept, we showed that antiviral-treated OPAB could partially restore LFP-based electrical activity of infected OPAB in a donor-dependent manner. Together, we propose OPAB as a physiologically relevant and versatile model to study neuroinfections and beyond, providing a platform for preclinical drug discovery.
Affiliation(s)
- Emma Partiot
- CNRS, Institut de Recherche en Infectiologie de Montpellier (IRIM), 34293, Montpellier, France
- Univ Montpellier, 34090, Montpellier, France
- Barbara Gorda
- CNRS, Institut de Recherche en Infectiologie de Montpellier (IRIM), 34293, Montpellier, France
- Univ Montpellier, 34090, Montpellier, France
- Willy Lutz
- CNRS, Institut de Recherche en Infectiologie de Montpellier (IRIM), 34293, Montpellier, France
- Univ Montpellier, 34090, Montpellier, France
- Solène Lebrun
- CNRS, Institut de Recherche en Infectiologie de Montpellier (IRIM), 34293, Montpellier, France
- Univ Montpellier, 34090, Montpellier, France
- Pierre Khalfi
- Univ Montpellier, 34090, Montpellier, France
- CNRS, Institut de Génétique Moléculaire de Montpellier (IGMM), 34293, Montpellier, France
- Stéphan Mora
- Univ Montpellier, 34090, Montpellier, France
- CNRS, Institut de Génétique Moléculaire de Montpellier (IGMM), 34293, Montpellier, France
- Benoit Charlot
- Univ Montpellier, 34090, Montpellier, France
- Institut d'Electronique et des Systèmes IES, CNRS, 860 Rue de St - Priest Bâtiment 5, 34090, Montpellier, France
- Karim Majzoub
- Univ Montpellier, 34090, Montpellier, France
- CNRS, Institut de Génétique Moléculaire de Montpellier (IGMM), 34293, Montpellier, France
- Solange Desagher
- CNRS, Institut de Recherche en Infectiologie de Montpellier (IRIM), 34293, Montpellier, France
- Univ Montpellier, 34090, Montpellier, France
- CNRS, Institut de Génétique Moléculaire de Montpellier (IGMM), 34293, Montpellier, France
- Gowrishankar Ganesh
- Univ Montpellier, 34090, Montpellier, France
- UM-CNRS Laboratoire d'Informatique de Robotique et de Microelectronique de Montpellier (LIRMM), 161, Rue Ada, 34090, Montpellier, France
- Sophie Colomb
- Univ Montpellier, 34090, Montpellier, France
- Equipe de droit pénal et sciences forensiques de Montpellier (EDPFM), Univ. Montpellier, Département de médecine légale, Pôle Urgences, Centre Hospitalo-Universitaire de Montpellier, 371 Avenue du Doyen Gaston Giraud, 34285, Montpellier, France
- Raphael Gaudin
- CNRS, Institut de Recherche en Infectiologie de Montpellier (IRIM), 34293, Montpellier, France
- Univ Montpellier, 34090, Montpellier, France
39
Nandi S, Bhaduri S, Das D, Ghosh P, Mandal M, Mitra P. Deciphering the Lexicon of Protein Targets: A Review on Multifaceted Drug Discovery in the Era of Artificial Intelligence. Mol Pharm 2024; 21:1563-1590. [PMID: 38466810 DOI: 10.1021/acs.molpharmaceut.3c01161] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/13/2024]
Abstract
Understanding protein sequence and structure is essential for understanding protein-protein interactions (PPIs), which are central to many biological processes and diseases. Targeting protein binding hot spots, which regulate signaling and growth, with rational drug design is promising. Rational drug design uses structural data and computational tools to study protein binding sites and protein interfaces to design inhibitors that can change these interactions, thereby potentially leading to therapeutic approaches. Artificial intelligence (AI), such as machine learning (ML) and deep learning (DL), has advanced drug discovery and design by providing computational resources and methods. Quantum chemistry is essential for drug reactivity, toxicology, drug screening, and quantitative structure-activity relationship (QSAR) properties. This review discusses the methodologies and challenges of identifying and characterizing hot spots and binding sites. It also explores the strategies and applications of artificial-intelligence-based rational drug design technologies that target proteins and PPI binding hot spots. It provides valuable insights for drug design with therapeutic implications. We have also demonstrated the pathological roles of heat shock protein 27 (HSP27) and matrix metalloproteinases (MMP2 and MMP9) and designed inhibitors of these proteins using the drug discovery paradigm in a case study on the discovery of drug molecules for cancer treatment. Additionally, the implications of benzothiazole derivatives for anticancer drug design and discovery are discussed.
Affiliation(s)
- Suvendu Nandi
- School of Medical Science and Technology, Indian Institute of Technology Kharagpur, Kharagpur, West Bengal 721302, India
- Soumyadeep Bhaduri
- Centre for Computational and Data Sciences, Indian Institute of Technology Kharagpur, Kharagpur, West Bengal 721302, India
- Debraj Das
- Centre for Computational and Data Sciences, Indian Institute of Technology Kharagpur, Kharagpur, West Bengal 721302, India
- Priya Ghosh
- School of Medical Science and Technology, Indian Institute of Technology Kharagpur, Kharagpur, West Bengal 721302, India
- Mahitosh Mandal
- School of Medical Science and Technology, Indian Institute of Technology Kharagpur, Kharagpur, West Bengal 721302, India
- Pralay Mitra
- Department of Computer Science and Engineering, Indian Institute of Technology Kharagpur, Kharagpur, West Bengal 721302, India
40
Saenger JA, Hunger J, Boss A, Richter J. Delayed diagnosis of a transient ischemic attack caused by ChatGPT. Wien Klin Wochenschr 2024; 136:236-238. [PMID: 38305909 PMCID: PMC11006786 DOI: 10.1007/s00508-024-02329-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2024] [Accepted: 01/14/2024] [Indexed: 02/03/2024]
Abstract
Techniques of artificial intelligence (AI) are increasingly used in the treatment of patients, such as providing a diagnosis in radiological imaging, improving workflow by triaging patients, or providing an expert opinion based on clinical symptoms; however, such AI techniques also hold intrinsic risks, as AI algorithms may point in the wrong direction and constitute a black box without explaining the reason for the decision-making process. This article outlines a case where an erroneous ChatGPT diagnosis, relied upon by the patient to evaluate symptoms, led to a significant treatment delay and a potentially life-threatening situation. With this case, we would like to point out the typical risks posed by the widespread application of AI tools not intended for medical decision-making.
Affiliation(s)
- Jonathan A Saenger
- Diagnostic and Interventional Radiology, University Hospital Zurich, University Zurich, Zurich, Switzerland
- Institute of Radiology and Nuclear Medicine, GZO Hospital Wetzikon, Wetzikon, Switzerland
- Jonathan Hunger
- Department of Internal Medicine, GZO Hospital Wetzikon, Wetzikon, Switzerland
- Andreas Boss
- Diagnostic and Interventional Radiology, University Hospital Zurich, University Zurich, Zurich, Switzerland
- Institute of Radiology and Nuclear Medicine, GZO Hospital Wetzikon, Wetzikon, Switzerland
- Johannes Richter
- Institute of Radiology and Nuclear Medicine, GZO Hospital Wetzikon, Wetzikon, Switzerland
- Neurology and Stroke Unit, GZO Hospital Wetzikon, Wetzikon, Switzerland
41
Lopes S, Rocha G, Guimarães-Pereira L. Artificial intelligence and its clinical application in Anesthesiology: a systematic review. J Clin Monit Comput 2024; 38:247-259. [PMID: 37864754 PMCID: PMC10995017 DOI: 10.1007/s10877-023-01088-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2023] [Accepted: 10/04/2023] [Indexed: 10/23/2023]
Abstract
PURPOSE The application of artificial intelligence (AI) in medicine is quickly expanding. Despite the amount of evidence and promising results, a thorough overview of the current state of AI in the clinical practice of anesthesiology is needed. Therefore, our study aims to systematically review the application of AI in this context. METHODS A systematic review was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. We searched Medline and Web of Science for articles published up to November 2022 using terms related to AI and the clinical practice of anesthesiology. Animal studies, editorials, reviews, and studies with sample sizes below 10 patients were excluded. Characteristics and accuracy measures from each study were extracted. RESULTS A total of 46 articles were included in this review. We grouped them into 4 categories with regard to their clinical applicability: (1) Depth of Anesthesia Monitoring; (2) Image-guided techniques related to Anesthesia; (3) Prediction of events/risks related to Anesthesia; (4) Drug administration control. Each group was analyzed, and the main findings were summarized. Across all fields, the majority of AI methods tested showed performance superior to that of traditional methods. CONCLUSION AI systems are being integrated into anesthesiology clinical practice, enhancing medical professionals' decision-making, diagnostic accuracy, and therapeutic response.
Affiliation(s)
- Sara Lopes
- Department of Anesthesiology, Centro Hospitalar Universitário São João, Porto, Portugal
- Gonçalo Rocha
- Surgery and Physiology Department, Faculty of Medicine, University of Porto, Porto, Portugal
- Luís Guimarães-Pereira
- Department of Anesthesiology, Centro Hospitalar Universitário São João, Porto, Portugal
- Surgery and Physiology Department, Faculty of Medicine, University of Porto, Porto, Portugal
42
Chaves ET, Vinayahalingam S, van Nistelrooij N, Xi T, Romero VHD, Flügge T, Saker H, Kim A, Lima GDS, Loomans B, Huysmans MC, Mendes FM, Cenci MS. Detection of caries around restorations on bitewings using deep learning. J Dent 2024; 143:104886. [PMID: 38342368 DOI: 10.1016/j.jdent.2024.104886] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2023] [Revised: 02/06/2024] [Accepted: 02/08/2024] [Indexed: 02/13/2024] Open
Abstract
OBJECTIVE Secondary caries lesions adjacent to restorations, a leading cause of restoration failure, require accurate diagnostic methods to ensure an optimal treatment outcome. Traditional diagnostic strategies rely on visual inspection complemented by radiographs. Recent advancements in artificial intelligence (AI), particularly deep learning, provide potential improvements in caries detection. This study aimed to develop a convolutional neural network (CNN)-based algorithm for detecting primary caries and secondary caries around restorations on bitewings. METHODS Clinical data from 7 general dental practices in the Netherlands, comprising 425 bitewings of 383 patients, were utilized. The study used the Mask R-CNN architecture for instance segmentation, supported by a Swin Transformer backbone. After data augmentation, model training was performed with ten-fold cross-validation. The diagnostic accuracy of the algorithm was evaluated by calculating the area under the Free-Response Receiver Operating Characteristic (FROC) curve, sensitivity, precision, and F1 scores. RESULTS The model achieved areas under the FROC curve of 0.806 and 0.804, and F1 scores of 0.689 and 0.719 for primary and secondary caries detection, respectively. CONCLUSION An accurate CNN-based automated system was developed to detect primary and secondary caries lesions on bitewings, marking a significant advancement in automated caries diagnostics. CLINICAL SIGNIFICANCE An accurate algorithm that integrates the detection of both primary and secondary caries will permit the development of automated systems to aid clinicians in their daily clinical practice.
Affiliation(s)
- Eduardo Trota Chaves
- Department of Dentistry, Research Institute for Medical Innovation, Radboud University Medical Center, Philips van Leydenlaan 25, Nijmegen, EX 6525, the Netherlands; Graduate Program in Dentistry, School of Dentistry, Federal University of Pelotas, Pelotas, Brazil
- Shankeeth Vinayahalingam
- Department of Oral and Maxillofacial Surgery, Radboud University Medical Centre, Postal Number 590, P.O. Box 9101, Nijmegen, HB 6500, the Netherlands
- Niels van Nistelrooij
- Department of Oral and Maxillofacial Surgery, Radboud University Medical Centre, Postal Number 590, P.O. Box 9101, Nijmegen, HB 6500, the Netherlands; Department of Oral and Maxillofacial Surgery, Charité Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Augustenburger Platz 1, Berlin 13353, Germany
- Tong Xi
- Department of Oral and Maxillofacial Surgery, Radboud University Medical Centre, Postal Number 590, P.O. Box 9101, Nijmegen, HB 6500, the Netherlands
- Vitor Henrique Digmayer Romero
- Department of Dentistry, Research Institute for Medical Innovation, Radboud University Medical Center, Philips van Leydenlaan 25, Nijmegen, EX 6525, the Netherlands; Graduate Program in Dentistry, School of Dentistry, Federal University of Pelotas, Pelotas, Brazil
- Tabea Flügge
- Einstein Center for Digital Future, Wilhelmstraße 67, Berlin 10117, Germany
- Hadi Saker
- Department of Oral and Maxillofacial Surgery, Radboud University Medical Centre, Postal Number 590, P.O. Box 9101, Nijmegen, HB 6500, the Netherlands
- Alexander Kim
- Department of Oral and Maxillofacial Surgery, Radboud University Medical Centre, Postal Number 590, P.O. Box 9101, Nijmegen, HB 6500, the Netherlands
- Giana da Silveira Lima
- Graduate Program in Dentistry, School of Dentistry, Federal University of Pelotas, Pelotas, Brazil
- Bas Loomans
- Department of Dentistry, Research Institute for Medical Innovation, Radboud University Medical Center, Philips van Leydenlaan 25, Nijmegen, EX 6525, the Netherlands
- Marie-Charlotte Huysmans
- Department of Dentistry, Research Institute for Medical Innovation, Radboud University Medical Center, Philips van Leydenlaan 25, Nijmegen, EX 6525, the Netherlands
- Fausto Medeiros Mendes
- Department of Dentistry, Research Institute for Medical Innovation, Radboud University Medical Center, Philips van Leydenlaan 25, Nijmegen, EX 6525, the Netherlands; Department of Pediatric Dentistry, School of Dentistry, University of São Paulo, São Paulo, Brazil
- Maximiliano Sergio Cenci
- Department of Dentistry, Research Institute for Medical Innovation, Radboud University Medical Center, Philips van Leydenlaan 25, Nijmegen, EX 6525, the Netherlands
43
Melissano G, Tinelli G, Soderlund T. Current Artificial Intelligence Based Chatbots May Produce Inaccurate and Potentially Harmful Information for Patients With Aortic Disease. Eur J Vasc Endovasc Surg 2024; 67:683-684. [PMID: 37952634 DOI: 10.1016/j.ejvs.2023.10.042] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2023] [Revised: 10/22/2023] [Accepted: 10/28/2023] [Indexed: 11/14/2023]
Affiliation(s)
- Germano Melissano
- Department of Vascular Surgery, Vita-Salute San Raffaele University School of Medicine, IRCCS San Raffaele Hospital, Milan, Italy
- Giovanni Tinelli
- Università Cattolica del Sacro Cuore, Rome, Italy; and Unit of Vascular Surgery, Fondazione Policlinico Universitario A. Gemelli - IRCCS, Rome, Italy
- Timo Soderlund
- Aortic Dissection Collaborative Advisory Group, Seattle, WA, USA
44
Langius-Wiffen E, Slotman DJ, Groeneveld J, Ac van Osch J, Nijholt IM, de Boer E, Nijboer-Oosterveld J, Veldhuis WB, de Jong PA, Boomsma MF. External validation of the RSNA 2020 pulmonary embolism detection challenge winning deep learning algorithm. Eur J Radiol 2024; 173:111361. [PMID: 38401407 DOI: 10.1016/j.ejrad.2024.111361] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2023] [Revised: 01/17/2024] [Accepted: 02/08/2024] [Indexed: 02/26/2024]
Abstract
PURPOSE To evaluate the diagnostic performance and generalizability of the winning DL algorithm of the RSNA 2020 PE detection challenge in a local population, using CTPA data from two hospitals. MATERIALS AND METHODS Consecutive CTPA images from patients referred for suspected PE were retrospectively analysed. The winning RSNA 2020 DL algorithm was retrained on the RSNA-STR Pulmonary Embolism CT (RSPECT) dataset. The algorithm was tested in hospital A on multidetector CT (MDCT) images of 238 patients and in hospital B on spectral detector CT (SDCT) and virtual monochromatic images (VMI) of 114 patients. The output of the DL algorithm was compared with a reference standard, which included a consensus reading by at least two experienced cardiothoracic radiologists for both hospitals. Areas under the receiver operating characteristic curve (AUCs) were calculated. Sensitivity and specificity were determined using the maximum Youden index. RESULTS According to the reference standard, PE was present in 73 patients (30.7%) in hospital A and 33 patients (29.0%) in hospital B. For the DL algorithm the AUC was 0.96 (95% CI 0.92-0.98) in hospital A, 0.89 (95% CI 0.81-0.94) for conventional reconstruction in hospital B and 0.87 (95% CI 0.80-0.93) for VMI. CONCLUSION The winning DL algorithm of the RSNA 2020 pulmonary embolism detection on CTPA challenge, retrained on the RSPECT dataset, showed high diagnostic accuracy on MDCT images. A somewhat lower performance was observed on SDCT images, which suggests that additional training on novel CT technology may improve the generalizability of this DL algorithm.
Affiliation(s)
| | - Derk J Slotman
- Department of Radiology, Isala Hospital, Zwolle, the Netherlands; Department of Radiology, University Medical Centre Utrecht, Utrecht University, Utrecht, the Netherlands
| | - Jorik Groeneveld
- Department of Radiology, Isala Hospital, Zwolle, the Netherlands
| | | | - Ingrid M Nijholt
- Department of Radiology, Isala Hospital, Zwolle, the Netherlands; Department of Radiology, University Medical Centre Utrecht, Utrecht University, Utrecht, the Netherlands
| | - Erwin de Boer
- Department of Radiology, Isala Hospital, Zwolle, the Netherlands
| | | | - Wouter B Veldhuis
- Department of Radiology, University Medical Centre Utrecht, Utrecht University, Utrecht, the Netherlands
| | - Pim A de Jong
- Department of Radiology, University Medical Centre Utrecht, Utrecht University, Utrecht, the Netherlands
| | - Martijn F Boomsma
- Department of Radiology, Isala Hospital, Zwolle, the Netherlands; Department of Radiology, University Medical Centre Utrecht, Utrecht University, Utrecht, the Netherlands
| |
|
45
|
Aguero D, Nelson SD. The Potential Application of Large Language Models in Pharmaceutical Supply Chain Management. J Pediatr Pharmacol Ther 2024; 29:200-205. [PMID: 38596417 PMCID: PMC11001215 DOI: 10.5863/1551-6776-29.2.200] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2023] [Accepted: 01/04/2024] [Indexed: 04/11/2024]
Affiliation(s)
- David Aguero
- Department of Pharmacy and Pharmaceutical Sciences (DA), St. Jude Children’s Research Hospital, TN
| | - Scott D. Nelson
- Department of Biomedical Informatics (SDN), Vanderbilt University Medical Center, Nashville, TN
| |
|
46
|
Aquino GJ, Mastrodicasa D, Alabed S, Abohashem S, Wen L, Gill RR, Bardo DME, Abbara S, Hanneman K. Radiology: Cardiothoracic Imaging Highlights 2023. Radiol Cardiothorac Imaging 2024; 6:e240020. [PMID: 38602468 DOI: 10.1148/ryct.240020] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/12/2024]
Abstract
Radiology: Cardiothoracic Imaging publishes novel research and technical developments in cardiac, thoracic, and vascular imaging. The journal published many innovative studies during 2023 and, for the first time since its inaugural issue in 2019, received an impact factor (7.0). The current review article, led by the Radiology: Cardiothoracic Imaging trainee editorial board, highlights the most impactful articles published in the journal between November 2022 and October 2023. The review encompasses various aspects of coronary CT, photon-counting detector CT, PET/MRI, cardiac MRI, congenital heart disease, vascular imaging, thoracic imaging, artificial intelligence, and health services research. Key highlights include the potential for photon-counting detector CT to reduce contrast media volumes, the utility of combined PET/MRI in the evaluation of cardiac sarcoidosis, the prognostic value of left atrial late gadolinium enhancement at MRI in predicting incident atrial fibrillation, the utility of an artificial intelligence tool to optimize detection of incidental pulmonary embolism, and standardization of medical terminology for cardiac CT. Ongoing research and future directions include evaluation of novel PET tracers for assessment of myocardial fibrosis, deployment of AI tools in clinical cardiovascular imaging workflows, and growing awareness of the need to improve environmental sustainability in imaging. Keywords: Coronary CT, Photon-counting Detector CT, PET/MRI, Cardiac MRI, Congenital Heart Disease, Vascular Imaging, Thoracic Imaging, Artificial Intelligence, Health Services Research © RSNA, 2024.
Affiliation(s)
- Gilberto J Aquino
- From the Department of Radiology, SUNY Upstate Medical University, 750 E Adams St, Syracuse, NY, 13210 (G.J.A); Department of Radiology, University of Washington School of Medicine, UW Medical Center Montlake, Seattle, Wash (D.M.); Department of Radiology, OncoRad/Tumor Imaging Metrics Core (TIMC), University of Washington School of Medicine, Seattle, Wash (D.M.); Division of Clinical Medicine, School of Medicine and Population Health, University of Sheffield, Sheffield, United Kingdom (S. Alabed); National Institute for Health and Care Research, Sheffield Biomedical Research Centre, Sheffield, United Kingdom (S. Alabed); Department of Radiology, Cardiovascular Imaging Research Center, Massachusetts General Hospital and Harvard Medical School, Boston, Mass (S. Abohashem); Department of Radiology, Key Laboratory of Birth Defects and Related Diseases of Women and Children, Ministry of Education, West China Second University Hospital, Sichuan University, Sichuan, China (L.W.); Department of Radiology, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, Mass (R.R.G.); Department of Medical Imaging, Ann & Robert H. Lurie Children's Hospital of Chicago, Chicago, Ill (D.M.E.B.); Department of Radiology, UT Southwestern Medical Center, Dallas, Tex (S. Abbara); Department of Medical Imaging, University Medical Imaging Toronto, University of Toronto, Toronto, Ontario, Canada (K.H.); and Peter Munk Cardiac Centre, Toronto General Hospital, University of Toronto, Toronto, Ontario, Canada (K.H.)
| | - Domenico Mastrodicasa
| | - Samer Alabed
| | - Shady Abohashem
| | - Lingyi Wen
| | - Ritu R Gill
| | - Dianna M E Bardo
| | - Suhny Abbara
| | - Kate Hanneman
| |
|
47
|
Mundinger A, Mundinger C. Artificial Intelligence in Senology - Where Do We Stand and What Are the Future Horizons? Eur J Breast Health 2024; 20:73-80. [PMID: 38571686 PMCID: PMC10985572 DOI: 10.4274/ejbh.galenos.2024.2023-12-13] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/26/2023] [Accepted: 01/16/2024] [Indexed: 04/05/2024]
Abstract
Artificial Intelligence (AI) is defined as the simulation of human intelligence by a digital computer or robotic system and has become a focus of current conversations. A subcategory of AI is deep learning, which is based on complex artificial neural networks that mimic the principles of human synaptic plasticity and layered brain architectures, and uses large-scale data processing. AI-based image analysis in breast screening programmes has shown non-inferior sensitivity, reduces workload by up to 70% by pre-selecting normal cases, and reduces recall by 25% compared to human double reading. Natural language programs such as ChatGPT (OpenAI) achieve 80% or higher accuracy in advising and decision making compared to the gold standard of human judgement. This does not yet meet the necessary requirements for medical products in terms of patient safety. The main advantage of AI is that it can perform routine but complex tasks much faster and with fewer errors than humans. The main concerns in healthcare are the stability of AI systems, cybersecurity, liability and transparency. More widespread use of AI could affect human jobs in healthcare and increase technological dependency. AI in senology is just beginning to evolve towards better forms with improved properties. Responsible training of AI systems with meaningful raw data, and scientific studies that analyse their performance in the real world, are necessary to keep AI on track. To mitigate significant risks, it will be necessary to balance active promotion and development of quality-assured AI systems with careful regulation. AI regulation has only recently been incorporated into transnational legal frameworks; the European Union's AI Act, published in December 2023, was the first comprehensive such framework. AI systems deemed to pose a clear threat to people's fundamental rights will be banned as unacceptable. Using AI and combining it with human wisdom, empathy and affection will be the method of choice for the further, fruitful development of tomorrow's senology.
Affiliation(s)
- Alexander Mundinger
- Breast Imaging and Interventions; Breast Centre Osnabrück; FHH Niels-Stensen-Kliniken; Franziskus-Hospital Harderberg, Georgsmarienhütte, Germany
| | - Carolin Mundinger
- Department of Behavioural Biology, Institute for Neuro- and Behavioural Biology, University of Muenster, Muenster, Germany
| |
|
48
|
Abou-Abdallah M, Dar T, Mahmudzade Y, Michaels J, Talwar R, Tornari C. The quality and readability of patient information provided by ChatGPT: can AI reliably explain common ENT operations? Eur Arch Otorhinolaryngol 2024:10.1007/s00405-024-08598-w. [PMID: 38530460 DOI: 10.1007/s00405-024-08598-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2023] [Accepted: 03/04/2024] [Indexed: 03/28/2024]
Abstract
PURPOSE Access to high-quality and comprehensible patient information is crucial. However, information provided by increasingly prevalent Artificial Intelligence tools has not been thoroughly investigated. This study assesses the quality and readability of information from ChatGPT regarding three index ENT operations: tonsillectomy, adenoidectomy, and grommets. METHODS We asked ChatGPT standard and simplified questions. Readability was calculated using the Flesch-Kincaid Reading Ease Score (FRES), Flesch-Kincaid Grade Level (FKGL), Gunning Fog Index (GFI) and Simple Measure of Gobbledygook (SMOG) scores. We assessed quality using the DISCERN instrument and compared these with ENT UK patient leaflets. RESULTS ChatGPT readability was poor, with mean FRES of 38.9 and 55.1 pre- and post-simplification, respectively. Simplified information from ChatGPT was 43.6% more readable (FRES) but scored 11.6% lower for quality. ENT UK patient information readability and quality were consistently higher. CONCLUSIONS ChatGPT can simplify information at the expense of quality, resulting in shorter answers with important omissions. Limitations in knowledge and insight curb its reliability for healthcare information. Patients should use reputable sources from professional organisations, alongside clear communication with their clinicians, for well-informed consent and decision making.
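The FRES metric used above has a standard published formula: 206.835 − 1.015 × (words/sentences) − 84.6 × (syllables/words), with higher scores meaning easier text. A minimal Python sketch, using a crude vowel-group syllable heuristic (the study would have used validated tooling, so exact scores will differ):

```python
import re

def count_syllables(word):
    # crude heuristic: each run of vowels counts as one syllable, minimum 1
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text):
    """Flesch Reading Ease Score: higher = easier to read."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))
```

Scores around 60-70 correspond to plain English; the mean pre-simplification ChatGPT score of 38.9 reported above falls in the "difficult, college-level" band.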
Affiliation(s)
- Michel Abou-Abdallah
- Ear, Nose and Throat Department, Luton and Dunstable University Hospital, Lewsey Rd, Luton, LU4 0DZ, UK.
| | - Talib Dar
- Ear, Nose and Throat Department, Luton and Dunstable University Hospital, Lewsey Rd, Luton, LU4 0DZ, UK
| | - Yasamin Mahmudzade
- Foundation Programme, East and North Hertfordshire NHS Trust, Stevenage, UK
| | - Joshua Michaels
- Ear, Nose and Throat Department, Luton and Dunstable University Hospital, Lewsey Rd, Luton, LU4 0DZ, UK
| | - Rishi Talwar
- Ear, Nose and Throat Department, Luton and Dunstable University Hospital, Lewsey Rd, Luton, LU4 0DZ, UK
| | - Chrysostomos Tornari
- Ear, Nose and Throat Department, Luton and Dunstable University Hospital, Lewsey Rd, Luton, LU4 0DZ, UK
| |
|
49
|
Nogales A, García-Tejedor ÁJ, Serrano Vara J, Ugalde-Canitrot A. eDeeplepsy: An artificial neural framework to reveal different brain states in children with epileptic spasms. Epilepsy Behav 2024; 154:109744. [PMID: 38513569 DOI: 10.1016/j.yebeh.2024.109744] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/04/2023] [Revised: 02/11/2024] [Accepted: 03/10/2024] [Indexed: 03/23/2024]
Abstract
OBJECTIVE Despite advances, analysis and interpretation of EEG still essentially rely on visual inspection by a super-specialized physician. Considering the vast amount of data that composes the EEG, much of the detail inevitably escapes ordinary human scrutiny. Significant information may not be evident and is missed, and misinterpretation remains a serious problem. Can we develop an artificial intelligence system to accurately and efficiently classify EEG and even reveal novel information? In this study, deep learning techniques and, in particular, Convolutional Neural Networks have been used to develop a model (which we have named eDeeplepsy) for distinguishing different brain states in children with epilepsy. METHODS A novel EEG database from a homogeneous pediatric population with epileptic spasms beyond infancy was constituted by epileptologists, representing a particularly intriguing seizure type and challenging EEG. The analysis was performed on samples from long-term video-EEG recordings, previously coded as images showing how different parts of the epileptic brain are distinctly activated during varying states within and around this seizure type. RESULTS Results show that eDeeplepsy could not only differentiate ictal from interictal states but also discriminate brain activity between spasms within a cluster from activity away from clusters, usually undifferentiated by visual inspection. Accuracies between 86% and 94% were obtained for the proposed use cases. SIGNIFICANCE We present a model for computer-assisted discrimination that can consistently detect subtle differences in the various brain states of children with epileptic spasms, and which can be used in other settings in epilepsy with the purpose of reducing workload and discrepancies or misinterpretations. The research also reveals previously undisclosed information that allows for a better understanding of the pathophysiology and evolving characteristics of this particular seizure type: it documents a distinct state (interspasms) indicating a potentially non-standard signal with distinctive epileptogenicity during that period.
Affiliation(s)
- Alberto Nogales
- CEIEC Research Institute, Universidad Francisco de Vitoria, Ctra. M-515 Pozuelo-Majadahonda km. 1,800, Pozuelo de Alarcón 28223, Spain.
| | - Álvaro J García-Tejedor
- CEIEC Research Institute, Universidad Francisco de Vitoria, Ctra. M-515 Pozuelo-Majadahonda km. 1,800, Pozuelo de Alarcón 28223, Spain.
| | - Juan Serrano Vara
- CEIEC Research Institute, Universidad Francisco de Vitoria, Ctra. M-515 Pozuelo-Majadahonda km. 1,800, Pozuelo de Alarcón 28223, Spain.
| | - Arturo Ugalde-Canitrot
- School of Medicine. Universidad Francisco de Vitoria, Ctra. M-515 Pozuelo-Majadahonda km. 1,800, Pozuelo de Alarcón 28223, Spain; Epilepsy Unit, Neurology and Clinical Neurophysiology Service, Hospital Universitario La Paz, Paseo de la Castellana, 261, Madrid 28046, Spain.
| |
|
50
|
Mantilla D, D Vera D, Ortiz AF, Piergallini L, Lara JJ, Nicoud F, Vargas O, Costalat V. Optimizing Patient Care: A Multicentric Study on the Clinical Impact of Sim&Size™ Simulation Software in Intracranial Aneurysm Treatment With Pipeline Embolization Devices. World Neurosurg 2024:S1878-8750(24)00437-6. [PMID: 38508386 DOI: 10.1016/j.wneu.2024.03.052] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2023] [Revised: 03/11/2024] [Accepted: 03/12/2024] [Indexed: 03/22/2024]
Abstract
BACKGROUND To determine the clinical effects (stent size and number of stents used) of the Sim&Size™ simulation software on the endovascular treatment of unruptured saccular intracranial aneurysms with Pipeline Embolization Devices (PED). METHODS This is a retrospective analytical multicenter study of patients treated with PED (Flex and Flex with SHIELD) for intracranial aneurysm at the FOSCAL clinic and the CHU de Montpellier. RESULTS The study included 253 patients, of whom 75 were treated in Colombia and 178 in France. The majority of patients were women (83.8%), with a median age of 57.48 years; most aneurysms involved a large vessel (88.1%) and were located in the ICA paraclinoid segment (56.8%). Patients in the Sim&Size™ simulation group received shorter stents than those treated without simulation (15.62 mm versus 17.36 mm, P-value = 0.001), and a lower proportion of them required more than one stent (1.4% versus 7.3%, P-value = 0.022). There were 7 complications in the group that used the Sim&Size™ simulation software, compared to 9 in the group that did not. CONCLUSIONS Using Sim&Size™ simulation software for the endovascular treatment of intracranial aneurysms with PED reduces stent length and decreases the number of devices needed per treatment.
Affiliation(s)
- Daniel Mantilla
- Interventional radiology Department, Fundación oftalmológica de Santander - Clínica Ardila Lülle, Floridablanca, Colombia; Interventional radiology Department, Universidad Autónoma de Bucaramanga, Bucaramanga, Colombia; Neuroradiology, Hôpital Gui-de-Chauliac, CHU de Montpellier, Montpellier, France
| | - Daniela D Vera
- Interventional radiology Department, Fundación oftalmológica de Santander - Clínica Ardila Lülle, Floridablanca, Colombia.
| | - Andrés Felipe Ortiz
- Interventional radiology Department, Fundación oftalmológica de Santander - Clínica Ardila Lülle, Floridablanca, Colombia; Interventional radiology Department, Universidad Autónoma de Bucaramanga, Bucaramanga, Colombia
| | | | - Juan José Lara
- Interventional radiology Department, Universidad Autónoma de Bucaramanga, Bucaramanga, Colombia
| | - Franck Nicoud
- Institut Montpelliérain Alexander Grothendieck, CNRS, Univ. Montpellier, Montpellier, France
| | - Oliverio Vargas
- Interventional radiology Department, Fundación oftalmológica de Santander - Clínica Ardila Lülle, Floridablanca, Colombia; Interventional radiology Department, Universidad Autónoma de Bucaramanga, Bucaramanga, Colombia
| | - Vincent Costalat
- Neuroradiology, Hôpital Gui-de-Chauliac, CHU de Montpellier, Montpellier, France
| |
|