1. Garajová L, Garbe S, Sprinkart AM. [Artificial intelligence in diagnostic radiology for dose management: Advances and perspectives using the example of computed tomography]. Radiologie (Heidelberg) 2024. PMID: 38877140. DOI: 10.1007/s00117-024-01330-z.
Abstract
CLINICAL-METHODOLOGICAL PROBLEM Imaging procedures employing ionizing radiation require compliance with European directives and national regulations in order to protect patients. Each exposure must be indicated, individually adapted, and documented. Unacceptable dose exceedances must be detected and reported. These tasks are time-consuming and require meticulous diligence. STANDARD RADIOLOGICAL METHODS Computed tomography (CT) is the most important contributor to medical radiation exposure. Optimizing the patient's dose is therefore mandatory. Use of modern technology and reconstruction algorithms already reduces exposure. Checking the indication, planning, and performing the examination are further important process steps with regard to radiation protection. Patient exposure is usually monitored by dose management systems (DMS). In special cases, a risk assessment is required by calculating the organ doses. METHODOLOGICAL INNOVATIONS Artificial intelligence (AI)-assisted techniques are increasingly used in various steps of the process: they support examination planning, improve patient positioning, and enable automated scan length adjustments. They also provide real-time estimates of individual organ doses. EVALUATION The integration of AI into medical imaging is proving successful in terms of dose optimization in various areas of the radiological workflow, from reconstruction to examination planning and performing exams. However, the use of AI in conjunction with DMS has not yet been considered on a large scale. PRACTICAL RECOMMENDATION AI processes offer promising tools to support dose management. However, their implementation in the clinical setting requires further research, extensive validation, and continuous monitoring.
Affiliation(s)
- Laura Garajová: Klinik für Diagnostische und Interventionelle Radiologie, Universitätsklinikum Bonn, Venusberg-Campus 1, 53127 Bonn, Germany
- Stephan Garbe: Klinik für Diagnostische und Interventionelle Radiologie, Universitätsklinikum Bonn, Venusberg-Campus 1, 53127 Bonn, Germany; Klinik für Strahlentherapie und Radioonkologie, Universitätsklinikum Bonn, Bonn, Germany
- Alois M Sprinkart: Klinik für Diagnostische und Interventionelle Radiologie, Universitätsklinikum Bonn, Venusberg-Campus 1, 53127 Bonn, Germany
2. Katwaroo AR, Adesh VS, Lowtan A, Umakanthan S. The diagnostic, therapeutic, and ethical impact of artificial intelligence in modern medicine. Postgrad Med J 2024; 100:289-296. PMID: 38159301. DOI: 10.1093/postmj/qgad135.
Abstract
In the evolution of modern medicine, artificial intelligence (AI) has proven integral to revolutionizing clinical diagnosis, drug discovery, and patient care. With the capacity to scrutinize colossal amounts of medical data, radiological and histological images, and genomic data in healthcare institutions, AI-powered systems can recognize, determine, and associate patterns and provide impactful insights that would be strenuous and challenging for clinicians to detect during their daily clinical practice. The outcome of AI-mediated analysis offers more accurate, personalized patient diagnoses, guides research into new drug therapies, and supports more effective multidisciplinary treatment plans for patients with chronic diseases. Among the many promising applications of AI in modern medicine, medical imaging stands out as an area with tremendous potential. AI-powered algorithms can now identify cancer cells and other lesions in medical images with greater accuracy and sensitivity, allowing earlier diagnosis and treatment that can significantly impact patient outcomes. This review provides a comprehensive insight into the diagnostic, therapeutic, and ethical issues raised by the advent of AI in modern medicine.
Affiliation(s)
- Arun Rabindra Katwaroo: Department of Medicine, Trinidad Institute of Medical Technology, St Augustine, Trinidad and Tobago
- Amrita Lowtan: Department of Preclinical Sciences, Faculty of Medical Sciences, The University of the West Indies, St. Augustine, Trinidad and Tobago
- Srikanth Umakanthan: Department of Paraclinical Sciences, Faculty of Medical Sciences, The University of the West Indies, St. Augustine, Trinidad and Tobago
3. Hirosawa T, Harada Y, Tokumasu K, Ito T, Suzuki T, Shimizu T. Evaluating ChatGPT-4's Diagnostic Accuracy: Impact of Visual Data Integration. JMIR Med Inform 2024; 12:e55627. PMID: 38592758. DOI: 10.2196/55627.
Abstract
BACKGROUND In the evolving field of health care, multimodal generative artificial intelligence (AI) systems, such as ChatGPT-4 with vision (ChatGPT-4V), represent a significant advancement, as they integrate visual data with text data. This integration has the potential to revolutionize clinical diagnostics by offering more comprehensive analysis capabilities. However, the impact on diagnostic accuracy of using image data to augment ChatGPT-4 remains unclear. OBJECTIVE This study aims to assess the impact of adding image data on ChatGPT-4's diagnostic accuracy and provide insights into how image data integration can enhance the accuracy of multimodal AI in medical diagnostics. Specifically, this study endeavored to compare the diagnostic accuracy between ChatGPT-4V, which processed both text and image data, and its counterpart, ChatGPT-4, which only uses text data. METHODS We identified a total of 557 case reports published in the American Journal of Case Reports from January 2022 to March 2023. After excluding cases that were nondiagnostic, pediatric, and lacking image data, we included 363 case descriptions with their final diagnoses and associated images. We compared the diagnostic accuracy of ChatGPT-4V and ChatGPT-4 without vision based on their ability to include the final diagnoses within differential diagnosis lists. Two independent physicians evaluated their accuracy, with a third resolving any discrepancies, ensuring a rigorous and objective analysis. RESULTS The integration of image data into ChatGPT-4V did not significantly enhance diagnostic accuracy, showing that final diagnoses were included in the top 10 differential diagnosis lists at a rate of 85.1% (n=309), comparable to the rate of 87.9% (n=319) for the text-only version (P=.33). Notably, ChatGPT-4V's performance in correctly identifying the top diagnosis was inferior, at 44.4% (n=161), compared with 55.9% (n=203) for the text-only version (P=.002, χ2 test). 
Additionally, ChatGPT-4's self-reports showed that image data accounted for 30% of the weight in developing the differential diagnosis lists in more than half of cases. CONCLUSIONS Our findings reveal that currently, ChatGPT-4V predominantly relies on textual data, limiting its ability to fully use the diagnostic potential of visual information. This study underscores the need for further development of multimodal generative AI systems to effectively integrate and use clinical image data. Enhancing the diagnostic performance of such AI systems through improved multimodal data integration could significantly benefit patient care by providing more accurate and comprehensive diagnostic insights. Future research should focus on overcoming these limitations, paving the way for the practical application of advanced AI in medicine.
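The head-to-head comparison of top-diagnosis rates can be sanity-checked with a standard two-proportion test, which is asymptotically equivalent to the chi-square test the authors report. A minimal sketch using only the counts given in the abstract (161/363 for ChatGPT-4V vs. 203/363 for text-only ChatGPT-4; the function name is ours):

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided pooled two-proportion z-test, a stand-in for the
    chi-square test on a 2x2 table (asymptotically equivalent)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail
    return z, p_value

# Counts from the abstract: top diagnosis identified correctly, n=363 per arm
z, p = two_proportion_z(161, 363, 203, 363)
print(f"z = {z:.2f}, p = {p:.4f}")  # p on the order of .002, matching the reported result
```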
Affiliation(s)
- Takanobu Hirosawa: Department of Diagnostic and Generalist Medicine, Dokkyo Medical University, Shimotsuga, Japan
- Yukinori Harada: Department of Diagnostic and Generalist Medicine, Dokkyo Medical University, Shimotsuga, Japan
- Kazuki Tokumasu: Department of General Medicine, Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Sciences, Okayama, Japan
- Tomoharu Suzuki: Department of Hospital Medicine, Urasoe General Hospital, Okinawa, Japan
- Taro Shimizu: Department of Diagnostic and Generalist Medicine, Dokkyo Medical University, Shimotsuga, Japan
4. Bolderston A. Message from the Editor. J Med Imaging Radiat Sci 2024; 55:1-3. PMID: 38485296. DOI: 10.1016/j.jmir.2024.01.007.
5. Edelmers E, Kazoka D, Bolocko K, Sudars K, Pilmane M. Automatization of CT Annotation: Combining AI Efficiency with Expert Precision. Diagnostics (Basel) 2024; 14:185. PMID: 38248062. PMCID: PMC10814874. DOI: 10.3390/diagnostics14020185.
Abstract
The integration of artificial intelligence (AI), particularly through machine learning (ML) and deep learning (DL) algorithms, marks a transformative progression in medical imaging diagnostics. This technical note elucidates a novel methodology for semantic segmentation of the vertebral column in CT scans, exemplified by a dataset of 250 patients from Riga East Clinical University Hospital. Our approach centers on the accurate identification and labeling of individual vertebrae, ranging from C1 to the sacrum-coccyx complex. Patient selection was meticulously conducted, ensuring demographic balance in age and sex, and excluding scans with significant vertebral abnormalities to reduce confounding variables. This strategic selection bolstered the representativeness of our sample, thereby enhancing the external validity of our findings. Our workflow streamlined the segmentation process by eliminating the need for volume stitching, aligning seamlessly with the methodology we present. By leveraging AI, we have introduced a semi-automated annotation system that enables initial data labeling even by individuals without medical expertise. This phase is complemented by thorough manual validation against established anatomical standards, significantly reducing the time traditionally required for segmentation. This dual approach not only conserves resources but also expedites project timelines. While this method significantly advances radiological data annotation, it is not devoid of challenges, such as the necessity for manual validation by anatomically skilled personnel and reliance on specialized GPU hardware. Nonetheless, our methodology represents a substantial leap forward in medical data semantic segmentation, highlighting the potential of AI-driven approaches to revolutionize clinical and research practices in radiology.
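Segmentation pipelines like the one described are typically scored per structure with the Dice similarity coefficient, 2|A ∩ B| / (|A| + |B|) for binary masks. A minimal NumPy sketch (our own illustration, not the authors' code):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    2|A intersect B| / (|A| + |B|). Two empty masks score 1.0 by convention."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy 2D "masks": 3 foreground pixels each, 2 overlapping -> DSC = 2*2/(3+3)
a = np.array([[1, 1, 1], [0, 0, 0]])
b = np.array([[0, 1, 1], [1, 0, 0]])
print(dice_coefficient(a, b))
```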
Affiliation(s)
- Edgars Edelmers: Institute of Anatomy and Anthropology, Rīga Stradiņš University, LV-1010 Riga, Latvia
- Dzintra Kazoka: Institute of Anatomy and Anthropology, Rīga Stradiņš University, LV-1010 Riga, Latvia
- Katrina Bolocko: Department of Computer Graphics and Computer Vision, Riga Technical University, LV-1048 Riga, Latvia
- Kaspars Sudars: Institute of Electronics and Computer Science, LV-1006 Riga, Latvia
- Mara Pilmane: Institute of Anatomy and Anthropology, Rīga Stradiņš University, LV-1010 Riga, Latvia
6. Li S, Liu X, Chen X, Xu H, Zhang Y, Qian W. Development and Validation of an Artificial Intelligence Preoperative Planning and Patient-Specific Instrumentation System for Total Knee Arthroplasty. Bioengineering (Basel) 2023; 10:1417. PMID: 38136008. PMCID: PMC10740483. DOI: 10.3390/bioengineering10121417.
Abstract
BACKGROUND Accurate preoperative planning for total knee arthroplasty (TKA) is crucial. Computed tomography (CT)-based preoperative planning offers more comprehensive information and can also be used to design patient-specific instrumentation (PSI), but it requires well-reconstructed and segmented images, and the process is complex and time-consuming. This study aimed to develop an artificial intelligence (AI) preoperative planning and PSI system for TKA and to validate its time savings and accuracy in clinical applications. METHODS The 3D-UNet and modified HRNet neural network structures were used to develop the AI preoperative planning and PSI system (AIJOINT). Forty-two patients who were scheduled for TKA underwent both AI and manual CT processing and planning for component sizing, 20 of whom had their PSIs designed and applied intraoperatively. The time consumed and the size and orientation of the postoperative component were recorded. RESULTS The Dice similarity coefficient (DSC) and loss function indicated excellent performance of the neural network structure in CT image segmentation. AIJOINT was faster than conventional methods for CT segmentation (3.74 ± 0.82 vs. 128.88 ± 17.31 min, p < 0.05) and PSI design (35.10 ± 3.98 vs. 159.52 ± 17.14 min, p < 0.05) without increasing the time for size planning. The accuracy of AIJOINT in planning the size of both femoral and tibial components was 92.9%, while the accuracy of the conventional method in planning the size of the femoral and tibial components was 42.9% and 47.6%, respectively (p < 0.05). In addition, AI-based PSI improved the accuracy of the hip-knee-ankle angle and reduced postoperative blood loss (p < 0.05). 
CONCLUSION AIJOINT significantly reduces the time needed for CT processing and PSI design without increasing the time for size planning, accurately predicts the component size, and improves the accuracy of lower limb alignment in TKA patients, providing a meaningful supplement to the application of AI in orthopaedics.
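The reported time savings can be checked from the summary statistics alone with a Welch-type test; a sketch using only the means, SDs, and n = 42 from the abstract (the paper's actual statistical test is not specified here, and we use a normal approximation for the p-value, which is harmless at this effect size):

```python
import math

def welch_z(m1, s1, n1, m2, s2, n2):
    """Welch test statistic from summary statistics, with a normal
    approximation for the two-sided p-value (adequate for large |t|)."""
    se = math.sqrt(s1**2 / n1 + s2**2 / n2)
    t = (m1 - m2) / se
    p = math.erfc(abs(t) / math.sqrt(2))
    return t, p

# CT segmentation time (minutes): AIJOINT vs. conventional workflow, n = 42 each
t, p = welch_z(3.74, 0.82, 42, 128.88, 17.31, 42)
print(f"t = {t:.1f}")  # roughly -47: the difference is overwhelmingly significant
```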
Affiliation(s)
- Songlin Li: Department of Orthopedic Surgery, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100010, China
- Xingyu Liu: School of Life Sciences, Tsinghua University, Beijing 100084, China; Institute of Biomedical and Health Engineering (iBHE), Tsinghua Shenzhen International Graduate School, Shenzhen 518000, China; Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
- Xi Chen: Departments of Orthopedics, West China Hospital, West China School of Medicine, Sichuan University, Chengdu 610041, China
- Hongjun Xu: Department of Orthopedic Surgery, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100010, China
- Yiling Zhang: Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
- Wenwei Qian: Department of Orthopedic Surgery, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100010, China
7. Wei S, Qiu R, Pu Y, Hu A, Niu Y, Wu Z, Zhang H, Li J. A semi-supervised learning-based quality evaluation system for digital chest radiographs. Med Phys 2023; 50:6789-6800. PMID: 37543992. DOI: 10.1002/mp.16663.
Abstract
BACKGROUND Digital radiography is the most commonly utilized medical imaging technique worldwide, and the quality of radiographs plays a crucial role in accurate disease diagnosis. Therefore, evaluating the quality of radiographs is an essential step in medical examinations. However, manual evaluation can be time-consuming, labor-intensive, and prone to interobserver differences, making it less reliable. PURPOSE To alleviate the workload of radiographic technologists and enhance the efficiency of radiograph quality evaluation, it is crucial to develop rapid and reliable quality evaluation methods and establish a set of quantitative evaluation standards. To address this, we have proposed a quality evaluation system for digital radiographs that utilizes deep learning techniques to achieve fast and precise evaluation. METHODS The evaluation of frontal chest radiograph quality involves assessing patient positioning through semantic segmentation and foreign body detection. For lung, scapula, and clavicle segmentation in digital chest radiographs, a residual connection-based convolutional neural network, π-ResUNet, was proposed. Criteria for patient positioning evaluation were established based on the segmentation and manual evaluation results. A convolutional neural network, Faster R-CNN, was utilized to detect and localize foreign bodies in digital chest radiographs. To enhance the performance of both neural networks, a semi-supervised learning (SSL) strategy was implemented by incorporating a consistency loss that leverages a large number of unlabeled digital radiographs. We also trained the network using the fully supervised learning (FSL) strategy and compared their performance on the test set. The ChestX-ray14 and object-CXR datasets were used throughout the process.
RESULTS Compared with the manual annotation, the proposed network, trained using the SSL method, achieved a high Dice similarity coefficient (DSC) of 0.96, 0.88, and 0.88 for lung, scapula, and clavicle segmentation, respectively, outperforming the network trained with the FSL method. In addition, for foreign body detection, the proposed SSL method was superior to the FSL method, achieving an AUC (area under the receiver operating characteristic curve) of 0.90 and an FROC (free-response ROC) score of 0.77 on the test dataset. CONCLUSIONS The experimental results show that our proposed system is well suited for radiograph quality evaluation, with the semi-supervised learning method further improving the network's performance. The proposed method can evaluate the quality of a chest radiograph from two aspects, patient positioning and foreign body detection, within 1 s, offering a promising tool for radiograph quality evaluation.
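The semi-supervised strategy described above combines a supervised loss on labeled images with a consistency term that pushes predictions for perturbed views of the same unlabeled image toward agreement. A minimal NumPy sketch of such a combined objective (our own illustration; the paper's exact loss formulation and weighting are not given in the abstract):

```python
import numpy as np

def mse(a, b):
    return float(np.mean((a - b) ** 2))

def semi_supervised_loss(pred_labeled, target, pred_view1, pred_view2, lam=0.1):
    """Supervised MSE on labeled data plus a consistency term: MSE between
    predictions for two perturbed views of the same unlabeled image."""
    supervised = mse(pred_labeled, target)
    consistency = mse(pred_view1, pred_view2)
    return supervised + lam * consistency

# Toy predicted mask probabilities
pred   = np.array([0.9, 0.1, 0.8])
target = np.array([1.0, 0.0, 1.0])
view1  = np.array([0.7, 0.2, 0.6])   # unlabeled image, augmentation A
view2  = np.array([0.6, 0.3, 0.7])   # same image, augmentation B
print(semi_supervised_loss(pred, target, view1, view2, lam=0.1))
```

In training, the consistency term lets the large pool of unlabeled radiographs shape the network even though they contribute nothing to the supervised term.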
Affiliation(s)
- Shuoyang Wei: Department of Engineering Physics, Tsinghua University, Beijing, China; Key Laboratory of Particle & Radiation Imaging (Tsinghua University), Ministry of Education, Beijing, China; Department of Radiotherapy, Peking Union Medical College Hospital, Beijing, China
- Rui Qiu: Department of Engineering Physics, Tsinghua University, Beijing, China; Key Laboratory of Particle & Radiation Imaging (Tsinghua University), Ministry of Education, Beijing, China
- Yanheng Pu: Department of Engineering Physics, Tsinghua University, Beijing, China; Key Laboratory of Particle & Radiation Imaging (Tsinghua University), Ministry of Education, Beijing, China
- Ankang Hu: Department of Engineering Physics, Tsinghua University, Beijing, China; Key Laboratory of Particle & Radiation Imaging (Tsinghua University), Ministry of Education, Beijing, China
- Yantao Niu: Beijing Tongren Hospital, CMU, Beijing, China
- Zhen Wu: Department of Engineering Physics, Tsinghua University, Beijing, China; Key Laboratory of Particle & Radiation Imaging (Tsinghua University), Ministry of Education, Beijing, China
- Hui Zhang: Department of Engineering Physics, Tsinghua University, Beijing, China; Key Laboratory of Particle & Radiation Imaging (Tsinghua University), Ministry of Education, Beijing, China
- Junli Li: Department of Engineering Physics, Tsinghua University, Beijing, China; Key Laboratory of Particle & Radiation Imaging (Tsinghua University), Ministry of Education, Beijing, China
8. Zanette B, Greer MLC, Moraes TJ, Ratjen F, Santyr G. The argument for utilising magnetic resonance imaging as a tool for monitoring lung structure and function in pediatric patients. Expert Rev Respir Med 2023; 17:527-538. PMID: 37491192. DOI: 10.1080/17476348.2023.2241355.
Abstract
INTRODUCTION Although historically challenging to perform in the lung, technological advancements have made Magnetic Resonance Imaging (MRI) increasingly applicable for pediatric pulmonary imaging. Furthermore, a wide array of functional imaging techniques has become available that may be leveraged alongside structural imaging for increasingly sensitive biomarkers, or as outcome measures in the evaluation of novel therapies. AREAS COVERED In this review, recent technical advancements and modern methodologies for structural and functional lung MRI are described. These include ultrashort echo time (UTE) MRI, free-breathing contrast agent-free, functional lung MRI, and hyperpolarized gas MRI, amongst other techniques. Specific examples of the application of these methods in children are provided, principally drawn from recent research in asthma, bronchopulmonary dysplasia, and cystic fibrosis. EXPERT OPINION Pediatric lung MRI is rapidly growing, and is well poised for clinical utilization, as well as continued research into early disease detection, disease processes, and novel treatments. Structure/function complementarity makes MRI especially attractive as a tool for increased adoption in the evaluation of pediatric lung disease. Looking toward the future, novel technologies, such as low-field MRI and artificial intelligence, mitigate some of the traditional drawbacks of lung MRI and will aid in improving access to MRI in general, potentially spurring increased adoption and demand for pulmonary MRI in children.
Affiliation(s)
- Brandon Zanette: Translational Medicine Program, The Hospital for Sick Children, Toronto, ON, Canada
- Mary-Louise C Greer: Department of Diagnostic Imaging, The Hospital for Sick Children, Toronto, ON, Canada; Department of Medical Imaging, University of Toronto, Toronto, ON, Canada
- Theo J Moraes: Translational Medicine Program, The Hospital for Sick Children, Toronto, ON, Canada; Department of Pediatrics, Hospital for Sick Children, Toronto, ON, Canada
- Felix Ratjen: Translational Medicine Program, The Hospital for Sick Children, Toronto, ON, Canada; Division of Respiratory Medicine, The Hospital for Sick Children, Toronto, ON, Canada
- Giles Santyr: Translational Medicine Program, The Hospital for Sick Children, Toronto, ON, Canada; Department of Medical Biophysics, University of Toronto, Toronto, ON, Canada
9. Obrecht M, Zurbruegg S, Accart N, Lambert C, Doelemeyer A, Ledermann B, Beckmann N. Magnetic resonance imaging and ultrasound elastography in the context of preclinical pharmacological research: significance for the 3R principles. Front Pharmacol 2023; 14:1177421. PMID: 37448960. PMCID: PMC10337591. DOI: 10.3389/fphar.2023.1177421.
Abstract
The 3Rs principles (reduction, refinement, replacement) are at the core of preclinical research within drug discovery, which still relies to a great extent on the availability of animal models of disease. Minimizing animal distress, reducing animal numbers, and searching for means of replacement in experimental studies are constant objectives in this area. Due to its non-invasive character, in vivo imaging supports these efforts by enabling repeated longitudinal assessments in which each animal serves as its own control, thereby considerably reducing the number of animals used in experiments. Repeated monitoring of pathology progression and the effects of therapy becomes feasible through the assessment of quantitative biomarkers. Moreover, imaging has translational prospects by facilitating the comparison of studies performed in small rodents and humans. Learnings from the clinic may also be back-translated to preclinical settings and thereby contribute to refining animal investigations. By concentrating on applications of magnetic resonance imaging (MRI) and ultrasound elastography in small rodent models of disease, we aim to illustrate how in vivo imaging contributes primarily to reduction and refinement in the context of pharmacological research.
Affiliation(s)
- Michael Obrecht: Diseases of Aging and Regenerative Medicines, Novartis Institutes for BioMedical Research, Basel, Switzerland
- Stefan Zurbruegg: Neurosciences Department, Novartis Institutes for BioMedical Research, Basel, Switzerland
- Nathalie Accart: Diseases of Aging and Regenerative Medicines, Novartis Institutes for BioMedical Research, Basel, Switzerland
- Christian Lambert: Diseases of Aging and Regenerative Medicines, Novartis Institutes for BioMedical Research, Basel, Switzerland
- Arno Doelemeyer: Diseases of Aging and Regenerative Medicines, Novartis Institutes for BioMedical Research, Basel, Switzerland
- Birgit Ledermann: 3Rs Leader, Novartis Institutes for BioMedical Research, Basel, Switzerland
- Nicolau Beckmann: Diseases of Aging and Regenerative Medicines, Novartis Institutes for BioMedical Research, Basel, Switzerland
10. Miller LE, Bhattacharyya D, Miller VM, Bhattacharyya M. Recent Trend in Artificial Intelligence-Assisted Biomedical Publishing: A Quantitative Bibliometric Analysis. Cureus 2023; 15:e39224. PMID: 37337487. PMCID: PMC10277011. DOI: 10.7759/cureus.39224.
Abstract
The rapid advancements in artificial intelligence (AI) technology in recent years have led to its integration into biomedical publishing. However, the extent to which AI has contributed to developing biomedical literature is unclear. This study aimed to identify trends in AI-generated content within peer-reviewed biomedical literature. We first tested the sensitivity and specificity of commercially available AI-detection software (Originality.AI, Collingwood, Ontario, Canada). Next, we conducted a MEDLINE (Medical Literature Analysis and Retrieval System Online) search to identify randomized controlled trials with available abstracts indexed between January 2020 and March 2023. We randomly selected 30 abstracts per quarter during this period and pasted the abstracts into the AI-detection software to determine the probability of AI-generated content. The software yielded 100% sensitivity, 95% specificity, and excellent overall discriminatory ability, with an area under the receiver operating characteristic (ROC) curve of 97.6%. Among the 390 MEDLINE-indexed abstracts included in the analysis, the prevalence of abstracts with a high probability (≥90%) of AI-generated text increased during the study period from 21.7% to 36.7% (p=0.01) based on a chi-square test for trend. The increasing prevalence of AI-generated text during the study period was also observed in various sensitivity analyses using AI probability thresholds ranging from 50% to 99% (all p≤0.01). The results of this study suggest that the prevalence of AI-assisted publishing in peer-reviewed journals has been increasing in recent years, even before the widespread adoption of ChatGPT (OpenAI, San Francisco, California, United States) and similar tools. The extent to which the authors' natural writing characteristics, the use of common AI-powered applications, and the introduction of AI elements during the post-acceptance publication phase influence AI detection scores warrants further study.
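A chi-square test for trend over ordered proportions, as used above, is typically a Cochran-Armitage-type test and is easy to reproduce for equal-sized quarterly samples. A self-contained sketch with made-up per-quarter counts drifting upward (the study's actual per-quarter data are not given in the abstract):

```python
import math

def trend_test(successes, totals):
    """Cochran-Armitage test for trend across ordered groups with scores
    0, 1, 2, ...; returns the z statistic and a two-sided p-value."""
    scores = list(range(len(totals)))
    N = sum(totals)
    R = sum(successes)
    pbar = R / N
    T = sum(t * (r - n * pbar) for t, r, n in zip(scores, successes, totals))
    var = pbar * (1 - pbar) * (
        sum(t * t * n for t, n in zip(scores, totals))
        - sum(t * n for t, n in zip(scores, totals)) ** 2 / N
    )
    z = T / math.sqrt(var)
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail
    return z, p

# Hypothetical counts of high-probability-AI abstracts per quarter (n = 30 each),
# rising roughly as in the study's 21.7% -> 36.7% trend over 13 quarters
high_ai = [6, 6, 7, 7, 8, 8, 9, 9, 10, 10, 11, 11, 11]
z, p = trend_test(high_ai, [30] * 13)
print(f"z = {z:.2f}, p = {p:.3f}")  # a significant upward trend
```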