1
Bernecker L, Mathiesen EB, Ingebrigtsen T, Isaksen J, Johnsen LH, Vangberg TR. Patch-Wise Deep Learning Method for Intracranial Stenosis and Aneurysm Detection - the Tromsø Study. Neuroinformatics 2025; 23:8. [PMID: 39812766] [PMCID: PMC11735523] [DOI: 10.1007/s12021-024-09697-z] [Accepted: 12/02/2024] [Indexed: 01/16/2025]
Abstract
Intracranial atherosclerotic stenosis (ICAS) and intracranial aneurysms are prevalent conditions of the cerebrovascular system. ICAS causes a narrowing of the arterial lumen, thereby restricting blood flow, while aneurysms involve the ballooning of blood vessels. Both conditions can lead to severe outcomes, such as stroke or vessel rupture, which can be fatal. Early detection is crucial for effective intervention. In this study, we introduced a method that combines classical computer vision techniques with deep learning to detect intracranial aneurysms and ICAS in time-of-flight magnetic resonance angiography images. The process began with skull-stripping, followed by an affine transformation to align the images to a common atlas space. We then focused on the region of interest, including the circle of Willis, by cropping the relevant area. A segmentation algorithm was used to isolate the arteries, after which a patch-wise residual neural network was applied across the image. A voting mechanism was then employed to identify the presence of pathologies. Our method achieved accuracies of 76.5% for aneurysms and 82.4% for ICAS. Notably, when occlusions were not considered, the accuracy for ICAS detection improved to 85.7%. While the algorithm performed well for localized pathological findings, it was less effective at detecting occlusions, which involve long-range dependencies in the MRIs, a limitation due to the architectural design of the patch-wise deep learning approach. This could, however, be mitigated in the future by a multi-scale patch-wise algorithm.
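The abstract describes its voting step only at a high level. As a rough illustrative sketch (not the authors' implementation; the threshold and minimum vote count are hypothetical parameters), threshold voting over per-patch classifier probabilities could look like this:

```python
def patchwise_vote(patch_probs, prob_threshold=0.5, min_votes=3):
    """Aggregate per-patch classifier probabilities into one scan-level call.

    A patch "votes" positive when its predicted probability of an abnormal
    finding exceeds `prob_threshold`; the scan is flagged when at least
    `min_votes` patches agree. Both parameters are illustrative only.
    """
    votes = sum(p >= prob_threshold for p in patch_probs)
    return votes >= min_votes
```

A rule like this only sees each patch in isolation, which is consistent with the limitation the abstract notes: findings that span many patches, such as occlusions, are hard to capture without a multi-scale aggregation step.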
Affiliation(s)
- Luca Bernecker
- Department of Clinical Medicine, UiT the Arctic University of Norway, Tromsø, Norway
- PET Imaging Center, University Hospital North Norway, Tromsø, Norway
- Ellisiv B Mathiesen
- Department of Clinical Medicine, UiT the Arctic University of Norway, Tromsø, Norway
- Department of Neurology, University Hospital North Norway, Tromsø, Norway
- Tor Ingebrigtsen
- Department of Clinical Medicine, UiT the Arctic University of Norway, Tromsø, Norway
- Department of Neurosurgery, Ophthalmology, and Otorhinolaryngology, University Hospital of North Norway, Tromsø, Norway
- Jørgen Isaksen
- Department of Clinical Medicine, UiT the Arctic University of Norway, Tromsø, Norway
- Department of Neurosurgery, Ophthalmology, and Otorhinolaryngology, University Hospital of North Norway, Tromsø, Norway
- Liv-Hege Johnsen
- Department of Radiology, University Hospital North Norway, Tromsø, Norway
- Torgil Riise Vangberg
- Department of Clinical Medicine, UiT the Arctic University of Norway, Tromsø, Norway.
- PET Imaging Center, University Hospital North Norway, Tromsø, Norway.
2
Jeon K, Park WY, Kahn CE, Nagy P, You SC, Yoon SH. Advancing Medical Imaging Research Through Standardization: The Path to Rapid Development, Rigorous Validation, and Robust Reproducibility. Invest Radiol 2025; 60:1-10. [PMID: 38985896] [DOI: 10.1097/rli.0000000000001106] [Indexed: 07/12/2024]
Abstract
Artificial intelligence (AI) has made significant advances in radiology. Nonetheless, challenges in AI development, validation, and reproducibility persist, primarily due to the lack of high-quality, large-scale, standardized data across the world. Addressing these challenges requires comprehensive standardization of medical imaging data and seamless integration with structured medical data. Developed by the Observational Health Data Sciences and Informatics community, the OMOP Common Data Model enables large-scale international collaborations with structured medical data. It ensures syntactic and semantic interoperability, while supporting the privacy-protected distribution of research across borders. The recently proposed Medical Imaging Common Data Model is designed to encompass all DICOM-formatted medical imaging data and to integrate imaging-derived features with clinical data, ensuring their provenance. The harmonization of medical imaging data and its seamless integration with structured clinical data at a global scale will pave the way for advanced AI research in radiology. This standardization will enable federated learning, ensuring privacy-preserving collaboration across institutions and promoting equitable AI through the inclusion of diverse patient populations. Moreover, it will facilitate the development of foundation models trained on large-scale, multimodal datasets, serving as powerful starting points for specialized AI applications. Objective and transparent algorithm validation on a standardized data infrastructure will enhance the reproducibility and interoperability of AI systems, driving innovation and reliability in clinical applications.
Affiliation(s)
- Kyulee Jeon
- From the Department of Biomedical Systems Informatics, Yonsei University, Seoul, South Korea (K.J., S.C.Y.); Institution for Innovation in Digital Healthcare, Yonsei University, Seoul, South Korea (K.J., S.C.Y.); Biomedical Informatics and Data Science, Johns Hopkins University, Baltimore, MD (W.Y.P., P.N.); Department of Radiology, University of Pennsylvania, Philadelphia, PA (C.E.K.); and Department of Radiology, Seoul National University College of Medicine, Seoul National University Hospital, Seoul, South Korea (S.H.Y.)
3
Burns J, Chung Y, Rula EY, Duszak R, Rosenkrantz AB. Evolving Trainee Participation in Radiologists' Workload Using a National Medicare-Focused Analysis From 2008 to 2020. J Am Coll Radiol 2025; 22:98-107. [PMID: 39453332] [DOI: 10.1016/j.jacr.2024.08.029] [Received: 05/30/2024] [Revised: 08/24/2024] [Accepted: 08/31/2024] [Indexed: 10/26/2024]
Abstract
PURPOSE Increasing volumes and productivity expectations, along with practice type consolidation, may be impacting trainees' roles in the work effort of radiologists involved in education. We assessed temporal shifts in trainee participation in radiologists' workload nationally. METHODS All US radiologists interpreting noninvasive diagnostic imaging for Medicare fee-for-service beneficiaries were identified from annual 5% Research Identifiable Files from 2008 to 2020 (n = 35,595). Teaching radiologists were defined as those billing services using Medicare's GC modifier, indicating trainee supervision. Billed work relative value units were used to determine the percentage of teaching radiologists' total workload with trainee participation. Mean trainee participation in workload was calculated for teaching radiologists overall and stratified by radiologist and practice characteristics determined using National Downloadable Files. RESULTS The percentage of radiologists involved in teaching increased from 13.6% (2008) to 20.4% (2020). Among teaching radiologists, mean total workload increased 7% from 2008 to 2019 and decreased in 2020 to 2% below 2008's level; mean teaching workload decreased 19% from 2008 to 2019 and decreased in 2020 to 31% below 2008's level. Mean trainee participation in teaching radiologists' total workload decreased from 35.3% (2008) to 26.3% (2019) and 24.5% (2020). Teaching radiologists showed decreased mean trainee participation when stratified by gender, experience, subspecialty, geography, practice type, and practice size. CONCLUSIONS The percentage of US radiologists involved in resident teaching has increased, likely reflecting academic practice expansion and academic-community practice consolidation. However, a declining percentage of teaching radiologists' total workload involves trainees; this dispersion effect could have implications for education quality.
Affiliation(s)
- Judah Burns
- Albert Einstein College of Medicine, Department of Radiology, Montefiore Medical Center, Bronx, New York; Vice Chair of Radiology Education, Montefiore Medical Center; and Chair, Subcommittee on Methodology, Committee on Imaging Appropriateness, American College of Radiology.
- YoonKyung Chung
- Principal Research Scientist, Harvey L. Neiman Health Policy Institute, Reston, Virginia
- Elizabeth Y Rula
- Executive Director, Harvey L. Neiman Health Policy Institute, Reston, Virginia. https://twitter.com/ElizabethYRula
- Richard Duszak
- Chair, Department of Radiology, University of Mississippi Medical Center, Jackson, Mississippi; Chair, ACR Commission on Leadership and Practice Development
- Andrew B Rosenkrantz
- Section Head, Abdominal Imaging, and Director of Health Policy, Department of Radiology, NYU Grossman School of Medicine, New York, New York; American College of Radiology Board of Chancellors (Chair, Commission on Body Imaging); and Editor-in-Chief, American Journal of Roentgenology
4
Wang Z, Xu C, Zhou J, Wang Y, Xu Z, Hu F, Cai Y. Accuracy of breast ultrasound image analysis software in feature analysis: a comparative study with sonographers. Sci Rep 2024; 14:30724. [PMID: 39730439] [DOI: 10.1038/s41598-024-79773-6] [Received: 07/15/2024] [Accepted: 11/12/2024] [Indexed: 12/29/2024]
Abstract
Breast ultrasound is recommended for early breast cancer detection in China, but the rapid increase in imaging data burdens sonographers. This study evaluated the agreement between artificial intelligence (AI) software and sonographers in analyzing breast nodule features. Breast ultrasound images from two hospitals in Shanghai were analyzed by both the software and the sonographers for features including echotexture, echo pattern, orientation, shape, margin, calcification, and posterior echo attenuation. Agreement between the software and sonographers was assessed using the proportion of agreement and Cohen's kappa, with analysis time also evaluated. A total of 493 images were analyzed. The proportion of agreement between the software and sonographers was 80.5% for echotexture, 84.4% for echo pattern, 93.7% for orientation, 85.8% for shape, 88.6% for margin, 80.5% for calcification, and 90.5% for posterior echo attenuation, highlighting the software's high accuracy. Cohen's kappa indicated moderate to substantial agreement for most features (0.411-0.674), with calcification showing fair agreement (0.335). The software significantly reduced analysis time compared with sonographers (P < 0.001). The software showed high accuracy and time efficiency; AI software thus presents a viable solution for reducing sonographers' workload and enhancing healthcare in underserved areas by automating feature analysis in breast ultrasound images.
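For readers unfamiliar with the agreement statistic reported in this abstract: Cohen's kappa corrects raw percent agreement for the agreement two raters would reach by chance. A minimal sketch (not the study's code) of the computation:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labelling the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e is the chance agreement implied by each rater's label
    frequencies.
    """
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n  # observed
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)
```

Because chance agreement is subtracted out, a feature can show a high proportion of agreement yet only fair kappa when one label dominates, which is why studies like this one report both measures.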
Affiliation(s)
- Zuxin Wang
- Department of Public Health, Tongren Hospital, Shanghai Jiao Tong University School of Medicine, 1111 Xianxia Road, Shanghai, 200335, China
- School of Public Health, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Chen Xu
- Department of Public Health, Tongren Hospital, Shanghai Jiao Tong University School of Medicine, 1111 Xianxia Road, Shanghai, 200335, China
- Jun Zhou
- Project Department, Tend.AI Medical Technologies Co., Ltd, Shanghai, China
- Ying Wang
- Department of Public Health, Tongren Hospital, Shanghai Jiao Tong University School of Medicine, 1111 Xianxia Road, Shanghai, 200335, China
- Zhongqing Xu
- Department of General Practice, Tongren Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Fan Hu
- Department of Public Health, Tongren Hospital, Shanghai Jiao Tong University School of Medicine, 1111 Xianxia Road, Shanghai, 200335, China
- Yong Cai
- Department of Public Health, Tongren Hospital, Shanghai Jiao Tong University School of Medicine, 1111 Xianxia Road, Shanghai, 200335, China
5
Lenfant M. The art and agony of AI in neuroradiology. J Neuroradiol 2024; 52:101237. [PMID: 39674494] [DOI: 10.1016/j.neurad.2024.101237] [Received: 12/10/2024] [Accepted: 12/10/2024] [Indexed: 12/16/2024]
Affiliation(s)
- Marc Lenfant
- University Hospital of Dijon, Department of Neuroradiology, 14 rue Paul Gaffarel, 21110 Dijon, France.
6
Arnold P, Pinto Dos Santos D, Bamberg F, Kotter E. FHIR - Overdue Standard for Radiology Data Warehouses. ROFO-FORTSCHR RONTG 2024. [PMID: 39642924] [DOI: 10.1055/a-2462-2351] [Indexed: 12/09/2024]
Abstract
In radiology, technological progress has led to an enormous increase in data volumes. To effectively use these data during diagnostics or subsequent clinical evaluations, they have to be aggregated at a central location and be meaningfully retrievable in context. Radiology data warehouses undertake this task: they integrate diverse data sources, enable patient-specific and examination-specific evaluations, and thus offer numerous benefits in patient care, education, and clinical research. The international standard Health Level 7 (HL7) Fast Healthcare Interoperability Resources (FHIR) is particularly suitable for the implementation of such a data warehouse. FHIR allows for easy and fast data access, supports modern web-based frontends, and offers high interoperability due to the integration of medical ontologies such as SNOMED-CT or RadLex. Furthermore, FHIR has a robust data security concept. Because of these properties, FHIR has been selected by the Medical Informatics Initiative (MII) as the data standard for the core data set and is intended to be promoted as an international standard in the European Health Data Space (EHDS). Implementing the FHIR standard in radiology data warehouses is therefore a logical and sensible step towards data-driven medicine. Key points:
- A data warehouse is essential for data-driven medicine, clinical care, and research purposes.
- Data warehouses enable efficient integration of AI results and structured report templates.
- Fast Healthcare Interoperability Resources (FHIR) is a suitable standard for a data warehouse.
- FHIR provides an interoperable data standard, supported by proven web technologies.
- FHIR improves semantic consistency and facilitates secure data exchange.
Citation: Arnold P, Pinto dos Santos D, Bamberg F et al. FHIR - Overdue Standard for Radiology Data Warehouses. Fortschr Röntgenstr 2024; DOI: 10.1055/a-2462-2351.
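FHIR exposes its resources through a simple REST search syntax, which is part of what makes it suitable for web-based data-warehouse frontends. As an illustrative sketch (the base URL below is a placeholder, not a real endpoint), a query for a patient's imaging studies could be assembled like this:

```python
from urllib.parse import urlencode

def fhir_search_url(base_url, resource, **params):
    """Build a FHIR REST search URL, e.g. GET [base]/ImagingStudy?subject=...

    `ImagingStudy` is a standard FHIR resource type; the search parameters
    are URL-encoded as required by the HTTP query-string syntax.
    """
    return f"{base_url}/{resource}?{urlencode(params)}"

# Hypothetical server and patient ID, for illustration only.
url = fhir_search_url("https://fhir.example.org/r4", "ImagingStudy",
                      subject="Patient/123", modality="MR")
```

The same pattern covers other resource types (e.g. `DiagnosticReport` for report text), which is how a FHIR-based warehouse can serve patient-specific and examination-specific evaluations over plain HTTP.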
Affiliation(s)
- Philipp Arnold
- Department of Radiology, Medical Center - University of Freiburg, Freiburg, Germany
- Daniel Pinto Dos Santos
- Institute of Diagnostic and Interventional Radiology, University Hospital Cologne, Cologne, Germany
- Department of Diagnostic and Interventional Radiology, University Medical Center of the Johannes Gutenberg University Mainz, Mainz, Germany
- Fabian Bamberg
- Department of Radiology, Medical Center - University of Freiburg, Freiburg, Germany
- Elmar Kotter
- Department of Radiology, Medical Center - University of Freiburg, Freiburg, Germany
7
Mazaheri P, Whitman GJ, DeSimone AK, Ros PR, Avey GD, Hadi M, Narula JP, Williamson CR, Vilanilam G, Yaghmai V. Balancing High Clinical Volumes and Non-RVU-Generating Activities in Radiology, Part II: Future Directions. Acad Radiol 2024:S1076-6332(24)00852-3. [PMID: 39643465] [DOI: 10.1016/j.acra.2024.11.007] [Received: 08/29/2024] [Revised: 10/29/2024] [Accepted: 11/02/2024] [Indexed: 12/09/2024]
Abstract
The Radiology Research Alliance (RRA) of the Association of Academic Radiology (AAR) creates task forces to study emerging trends shaping the future of radiology. This article highlights the findings of the AAR-RRA Task Force on Balancing High Clinical Volumes and non-relative value unit (Non-RVU)-Generating Activities. The Task Force's mission was to evaluate and emphasize the value of non-RVU-generating activities that academic radiologists perform. The work of this Task Force is presented in two separate manuscripts: Part I outlines the current landscape, while this manuscript, Part II, explores future directions for academic radiology departments seeking a better balance between high clinical workloads and non-RVU-generating opportunities for their faculty.
Affiliation(s)
- Parisa Mazaheri
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, Missouri, USA (P.M.).
- Gary J Whitman
- Department of Radiology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA (G.J.W.)
- Ariadne K DeSimone
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts, USA (A.K.D.)
- Pablo R Ros
- Department of Radiology, Stony Brook University, Stony Brook, New York, USA (P.R.R.)
- Gregory D Avey
- Department of Radiology, University of Wisconsin-Madison, Madison, Wisconsin, USA (G.D.A.)
- Mohiuddin Hadi
- Department of Radiology, University of Louisville, Louisville, Kentucky, USA (M.H.)
- Jay P Narula
- Georgetown University School of Medicine, Washington, DC, USA (J.P.N.)
- George Vilanilam
- Department of Radiology, University of Arkansas for Medical Sciences, Little Rock, Arkansas, USA (G.V.)
- Vahid Yaghmai
- Department of Radiological Sciences, University of California Irvine, Irvine, California, USA (V.Y.)
8
M Yogendra P, Goh AGW, Yee SY, Jawan F, Koh KKN, Tan TSE, Woon TK, Yeap PM, Tan MO. Accuracy of radiologists and radiology residents in detection of paediatric appendicular fractures with and without artificial intelligence. BMJ Health Care Inform 2024; 31:e101091. [PMID: 39638562] [PMCID: PMC11624698] [DOI: 10.1136/bmjhci-2024-101091] [Received: 04/04/2024] [Accepted: 11/14/2024] [Indexed: 12/07/2024]
Abstract
OBJECTIVES We aim to evaluate the accuracy of radiologists and radiology residents in the detection of paediatric appendicular fractures with and without the help of a commercially available fracture detection artificial intelligence (AI) solution, in the hope of showing potential clinical benefits in a general hospital setting. METHODS This was a retrospective study involving three associate consultants (AC) and three senior residents (SR) in radiology, who acted as readers. One reader from each group interpreted the radiographs with the aid of AI. Cases were categorised into concordant and discordant cases between each interpreting group. Discordant cases were further evaluated by three independent subspecialty radiology consultants to determine the final diagnosis. A total of 500 anonymised cases of paediatric patients (aged 2-15 years) who presented to the children's emergency department of a tertiary general hospital were retrospectively collected. Main outcome measures included the presence of fracture, the accuracy of readers with and without AI, and the total time taken to interpret the radiographs. RESULTS The AI solution alone showed the highest accuracy (area under the receiver operating characteristic curve 0.97; AC: 95% CI -0.055 to 0.320, p=0; SR: 95% CI 0.244 to 0.598, p=0). The two readers aided by AI had higher areas under the curve compared with readers without AI support (AC: 95% CI -0.303 to 0.465, p=0; SR: 95% CI -0.154 to 0.331, p=0). These differences were statistically significant. CONCLUSION Our study demonstrates excellent results in the detection of paediatric appendicular fractures using a commercially available AI solution. There is potential for the AI solution to function autonomously.
Affiliation(s)
- Praveen M Yogendra
- Department of Radiology, Sengkang General Hospital, Singapore, Singapore
- Sze Ying Yee
- Department of Radiology, Sengkang General Hospital, Singapore, Singapore
- Freda Jawan
- Department of Radiology, Sengkang General Hospital, Singapore, Singapore
- Timothy Shao Ern Tan
- Department of Diagnostic and Interventional Imaging, KK Women’s and Children’s Hospital, Singapore, Singapore
- Tian Kai Woon
- Department of Diagnostic and Interventional Imaging, KK Women’s and Children’s Hospital, Singapore, Singapore
- Phey Ming Yeap
- Department of Radiology, Sengkang General Hospital, Singapore, Singapore
- Min On Tan
- Department of Radiology, Sengkang General Hospital, Singapore, Singapore
9
Talib MA, Moufti MA, Nasir Q, Kabbani Y, Aljaghber D, Afadar Y. Transfer Learning-Based Classifier to Automate the Extraction of False X-Ray Images From Hospital's Database. Int Dent J 2024; 74:1471-1482. [PMID: 39232939] [PMCID: PMC11551570] [DOI: 10.1016/j.identj.2024.08.002] [Received: 04/22/2024] [Revised: 07/11/2024] [Accepted: 08/02/2024] [Indexed: 09/06/2024]
Abstract
BACKGROUND During preclinical training, dental students take radiographs of acrylic (plastic) blocks containing extracted patient teeth. With the digitisation of medical records, a central archiving system was created to store and retrieve all x-ray images, regardless of whether they were images of teeth on acrylic blocks or those from patients. In the early stage of the digitisation process, and due to the immaturity of the data management system, numerous images were mixed up and stored in random locations within the unified archiving system, including patient record files. Filtering out and expunging the undesired training images is imperative, as manually searching for such images is problematic. Hence, the aim of this study was to differentiate intraoral images from artificial images on acrylic blocks. METHODS An artificial intelligence (AI) solution to automatically differentiate between intraoral radiographs taken of patients and those taken of acrylic blocks was utilised in this study. The concept of transfer learning was applied to a dataset provided by a dental hospital. RESULTS An accuracy score, F1 score, and recall score of 98.8%, 99.2%, and 100%, respectively, were achieved using a VGG16 pre-trained model. These results were better than those obtained initially with a baseline model, which achieved an accuracy score, F1 score, and recall score of 96.5%, 97.5%, and 98.9%, respectively. CONCLUSIONS The proposed system using transfer learning was able to accurately identify "fake" radiograph images and distinguish them from the real intraoral images.
Affiliation(s)
- Manar Abu Talib
- Department of Computer Engineering, College of Computing and Informatics, University of Sharjah, Sharjah, United Arab Emirates
- Mohammad Adel Moufti
- Department of Restorative Dentistry, College of Dental Medicine, University of Sharjah, Sharjah, United Arab Emirates
- Qassim Nasir
- Department of Computer Science, College of Computing and Informatics, University of Sharjah, Sharjah, United Arab Emirates
- Yousuf Kabbani
- Department of Computer Science, College of Computing and Informatics, University of Sharjah, Sharjah, United Arab Emirates
- Dana Aljaghber
- Department of Restorative Dentistry, College of Dental Medicine, University of Sharjah, Sharjah, United Arab Emirates
- Yaman Afadar
- Department of Computer Science, College of Computing and Informatics, University of Sharjah, Sharjah, United Arab Emirates
10
Fathi M, Eshraghi R, Behzad S, Tavasol A, Bahrami A, Tafazolimoghadam A, Bhatt V, Ghadimi D, Gholamrezanezhad A. Potential strength and weakness of artificial intelligence integration in emergency radiology: a review of diagnostic utilizations and applications in patient care optimization. Emerg Radiol 2024; 31:887-901. [PMID: 39190230] [DOI: 10.1007/s10140-024-02278-2] [Received: 06/07/2024] [Accepted: 08/08/2024] [Indexed: 08/28/2024]
Abstract
Artificial intelligence (AI) and its recent, increasing integration into healthcare have created both new opportunities and challenges in the practice of radiology and medical imaging. Recent advancements in AI technology have allowed for more workplace efficiency, higher diagnostic accuracy, and overall improvements in patient care. Limitations of AI, such as data imbalances, the unclear nature of AI algorithms, and the challenges in detecting certain diseases, make its widespread adoption difficult. This review article presents cases involving the use of AI models to diagnose intracranial hemorrhage, spinal fractures, and rib fractures, while discussing how certain factors, such as type, location, size, presence of artifacts, calcification, and post-surgical changes, affect AI model performance and accuracy. While the use of artificial intelligence has the potential to improve the practice of emergency radiology, it is important to address its limitations to maximize its advantages while ensuring the overall safety of patients.
Affiliation(s)
- Mobina Fathi
- Advanced Diagnostic and Interventional Radiology Research Center (ADIR), Tehran University of Medical Sciences, Tehran, Iran
- School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Reza Eshraghi
- Student Research Committee, Kashan University of Medical Science, Kashan, Iran
- Arian Tavasol
- School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Ashkan Bahrami
- Student Research Committee, Kashan University of Medical Science, Kashan, Iran
- Vivek Bhatt
- School of Medicine, University of California, Riverside, CA, USA
- Delaram Ghadimi
- School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Ali Gholamrezanezhad
- Keck School of Medicine of University of Southern California, Los Angeles, CA, USA
- Department of Radiology, Division of Emergency Radiology, Keck School of Medicine, Cedars Sinai Hospital, University of Southern California, 1500 San Pablo Street, Los Angeles, CA, 90033, USA
11
Sato J, Sugimoto K, Suzuki Y, Wataya T, Kita K, Nishigaki D, Tomiyama M, Hiraoka Y, Hori M, Takeda T, Kido S, Tomiyama N. Annotation-free multi-organ anomaly detection in abdominal CT using free-text radiology reports: a multi-centre retrospective study. EBioMedicine 2024; 110:105463. [PMID: 39613675] [PMCID: PMC11663761] [DOI: 10.1016/j.ebiom.2024.105463] [Received: 07/23/2024] [Revised: 11/05/2024] [Accepted: 11/05/2024] [Indexed: 12/01/2024]
Abstract
BACKGROUND Artificial intelligence (AI) systems designed to detect abnormalities in abdominal computed tomography (CT) could reduce radiologists' workload and improve diagnostic processes. However, development of such models has been hampered by the shortage of large expert-annotated datasets. Here, we used information from free-text radiology reports, rather than manual annotations, to develop a deep-learning-based pipeline for comprehensive detection of abdominal CT abnormalities. METHODS In this multicentre retrospective study, we developed a deep-learning-based pipeline to detect abnormalities in the liver, gallbladder, pancreas, spleen, and kidneys. Abdominal CT exams and related free-text reports obtained during routine clinical practice at three institutions were used for training and internal testing, while data collected from six institutions were used for external testing. A multi-organ segmentation model and an information extraction schema were used to extract organ-specific images and disease information from the CT images and radiology reports, respectively; these were then used to train a multiple-instance learning model for anomaly detection. Its performance was evaluated using the area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, specificity, and F1 score against radiologists' ground-truth labels. FINDINGS We trained the model for each organ on images selected from 66,684 exams (39,255 patients) and tested it on 300 (295 patients) and 600 (596 patients) exams for internal and external validation, respectively. In the external test cohort, the overall AUC for detecting organ abnormalities was 0.886. Whereas models trained on human-annotated labels performed better with the same number of exams, those trained on larger datasets with labels auto-extracted via the information extraction schema significantly outperformed the models derived from human-annotated labels.
INTERPRETATION Using disease information from routine clinical free-text radiology reports allows development of accurate anomaly detection models without requiring manual annotations. This approach is applicable to various anatomical sites and could streamline diagnostic processes. FUNDING Japan Science and Technology Agency.
Affiliation(s)
- Junya Sato
- Department of Artificial Intelligence in Diagnostic Radiology, Osaka University Graduate School of Medicine, 2-2, Yamadaoka, Suita, Osaka, 565-0871, Japan; Department of Radiology, Osaka University Graduate School of Medicine, 2-2, Yamadaoka, Suita, Osaka, 565-0871, Japan
- Kento Sugimoto
- Department of Medical Informatics, Osaka University Graduate School of Medicine, 2-2, Yamadaoka, Suita, Osaka, 565-0871, Japan
- Yuki Suzuki
- Department of Artificial Intelligence in Diagnostic Radiology, Osaka University Graduate School of Medicine, 2-2, Yamadaoka, Suita, Osaka, 565-0871, Japan
- Tomohiro Wataya
- Department of Artificial Intelligence in Diagnostic Radiology, Osaka University Graduate School of Medicine, 2-2, Yamadaoka, Suita, Osaka, 565-0871, Japan; Department of Radiology, Osaka University Graduate School of Medicine, 2-2, Yamadaoka, Suita, Osaka, 565-0871, Japan
- Kosuke Kita
- Department of Artificial Intelligence in Diagnostic Radiology, Osaka University Graduate School of Medicine, 2-2, Yamadaoka, Suita, Osaka, 565-0871, Japan
- Daiki Nishigaki
- Department of Artificial Intelligence in Diagnostic Radiology, Osaka University Graduate School of Medicine, 2-2, Yamadaoka, Suita, Osaka, 565-0871, Japan; Department of Radiology, Osaka University Graduate School of Medicine, 2-2, Yamadaoka, Suita, Osaka, 565-0871, Japan
- Miyuki Tomiyama
- Department of Artificial Intelligence in Diagnostic Radiology, Osaka University Graduate School of Medicine, 2-2, Yamadaoka, Suita, Osaka, 565-0871, Japan; Department of Radiology, Osaka University Graduate School of Medicine, 2-2, Yamadaoka, Suita, Osaka, 565-0871, Japan
- Yu Hiraoka
- Department of Artificial Intelligence in Diagnostic Radiology, Osaka University Graduate School of Medicine, 2-2, Yamadaoka, Suita, Osaka, 565-0871, Japan; Department of Radiology, Osaka University Graduate School of Medicine, 2-2, Yamadaoka, Suita, Osaka, 565-0871, Japan
- Masatoshi Hori
- Department of Artificial Intelligence in Diagnostic Radiology, Osaka University Graduate School of Medicine, 2-2, Yamadaoka, Suita, Osaka, 565-0871, Japan
- Toshihiro Takeda
- Department of Medical Informatics, Osaka University Graduate School of Medicine, 2-2, Yamadaoka, Suita, Osaka, 565-0871, Japan
- Shoji Kido
- Department of Artificial Intelligence in Diagnostic Radiology, Osaka University Graduate School of Medicine, 2-2, Yamadaoka, Suita, Osaka, 565-0871, Japan
- Noriyuki Tomiyama
- Department of Radiology, Osaka University Graduate School of Medicine, 2-2, Yamadaoka, Suita, Osaka, 565-0871, Japan
| |
Collapse
|
12
|
DeSimone AK, Lanser EM, Mazaheri P, Agarwal V, Ismail M, Alexandre Frigini L, Baruah D, Hadi M, Williamson C, Sneider MB, Norbash A, Whitman GJ. Balancing High Clinical Volumes and Non-RVU-generating Activities in Radiology, Part I: The Current Landscape. Acad Radiol 2024:S1076-6332(24)00867-5. [PMID: 39613582 DOI: 10.1016/j.acra.2024.11.020] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2024] [Revised: 11/06/2024] [Accepted: 11/10/2024] [Indexed: 12/01/2024]
Abstract
The Radiology Research Alliance (RRA) of the Association of Academic Radiology (AAR) convenes task forces to study trends that will shape the future of radiology. This article presents the findings of the AAR-RRA task force on balancing high clinical volumes and non-RVU-generating activities, which set out to analyze and underscore the full value of academic radiologists' contributions beyond RVU-generating clinical work. The Task Force's efforts are detailed in a two-part report. This first part describes the current landscape, while the second part focuses on future directions for academic radiology departments aiming to achieve a more optimal balance between high clinical volumes and non-RVU-generating activities.
Affiliation(s)
- Ariadne K DeSimone
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts, USA (A.K.D.).
- Erica M Lanser
- Department of Radiology, Medical College of Wisconsin, Milwaukee, Wisconsin, USA (E.M.L.)
- Parisa Mazaheri
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, Missouri, USA (P.M.)
- Vikas Agarwal
- Department of Radiology, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania, USA (V.A.)
- Mohammad Ismail
- Department of Radiology, The Ohio State University Wexner Medical Center, Columbus, Ohio, USA (M.I.)
- L Alexandre Frigini
- Department of Radiology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA (L.A.F.)
- Dhiraj Baruah
- Department of Radiology, Medical University of South Carolina, Charleston, South Carolina, USA (D.B.)
- Mohiuddin Hadi
- Department of Radiology, University of Louisville School of Medicine, Louisville, Kentucky, USA (M.H.)
- Michael B Sneider
- Department of Radiology, University of Virginia, Charlottesville, Virginia, USA (M.B.S.)
- Alexander Norbash
- University of Missouri-Kansas City School of Medicine, Kansas City, Missouri, USA (A.N.)
- Gary J Whitman
- Department of Radiology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA (G.J.W.)

13
Nada A, Sayed AA, Hamouda M, Tantawi M, Khan A, Alt A, Hassanein H, Sevim BC, Altes T, Gaballah A. External validation and performance analysis of a deep learning-based model for the detection of intracranial hemorrhage. Neuroradiol J 2024:19714009241303078. [PMID: 39601611 PMCID: PMC11603421 DOI: 10.1177/19714009241303078] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2024] Open
Abstract
PURPOSE We aimed to investigate the external validation and performance of an FDA-approved deep learning (DL) model in labeling intracranial hemorrhage (ICH) cases on a real-world, heterogeneous clinical dataset. Furthermore, we evaluated how patients' risk factors influenced the model's performance and gathered satisfaction feedback from radiologists of varying ranks. METHODS This prospective, IRB-approved study included 5600 non-contrast CT scans of the head from various clinical settings, that is, emergency, inpatient, and outpatient units. The patients' risk factors were collected and tested for their impact on the performance of the DL model using univariate and multivariate regression analyses. The performance of the DL model was compared with the radiologists' interpretation in determining the presence or absence of ICH, with subsequent classification into ICH subcategories. Key metrics, including accuracy, sensitivity, specificity, positive predictive value, and negative predictive value, were calculated. The receiver operating characteristic curve and the area under the curve were determined. Additionally, a questionnaire was administered to radiologists of varying ranks to assess their experience with the model. RESULTS The model exhibited outstanding performance, achieving a high sensitivity of 89% and specificity of 96%. Additional performance metrics, including positive predictive value (82%), negative predictive value (97%), and overall accuracy (94%), underscore its robust capabilities. The area under the ROC curve further demonstrated the model's efficacy, reaching 0.954. Multivariate logistic regression revealed statistical significance for age, sex, history of trauma, operative intervention, HTN, and smoking. CONCLUSION Our study highlights the satisfactory performance of the DL model on a diverse real-world dataset, garnering positive feedback from radiology trainees.
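All of the metrics reported above derive from a single 2×2 confusion matrix. A minimal sketch (the counts are illustrative, not the study's data) shows how sensitivity, specificity, PPV, NPV, and accuracy relate, and why PPV can fall below sensitivity when positives are relatively rare:

```python
def binary_metrics(tp, fp, tn, fn):
    """Standard diagnostic-performance metrics from a 2x2 confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),               # recall on positive cases
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),                       # positive predictive value
        "npv": tn / (tn + fn),                       # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Illustrative counts only: with relatively few positives, PPV (~82%)
# lags sensitivity (89%), mirroring the pattern reported above.
m = binary_metrics(tp=89, fp=20, tn=480, fn=11)
print(m)
```

Note that PPV and NPV depend on prevalence in the tested cohort, whereas sensitivity and specificity do not, which is why external validation on a real-world case mix matters.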
Affiliation(s)
- Ayman Nada
- Department of Radiology, University of Missouri, Columbia, MO, USA
- Alaa A. Sayed
- Department of Medicine, University of Pittsburgh Medical Center, Pittsburgh, PA, USA
- Mourad Hamouda
- Department of Radiology, St Vincent Hospital, Worcester, MA, USA
- Mohamed Tantawi
- Department of Radiology, University of Texas Medical Branch, Galveston, TX, USA
- Amna Khan
- Department of Medicine, Nazareth Hospital, Philadelphia, PA, USA
- Addison Alt
- Kansas City University, Kansas City, MO, USA
- Heidi Hassanein
- Northwell Health, Staten Island University Hospital, Staten Island, NY, USA
- Burak C. Sevim
- Radiology Department, SSM Health Saint Louis University Hospital, St Louis, MO, USA
- Talissa Altes
- Department of Radiology, University of Missouri, Columbia, MO, USA
- Ayman Gaballah
- Radiology Department, MD Anderson Cancer Center, The University of Texas, Houston, TX, USA

14
Xue J, Zheng H, Lai R, Zhou Z, Zhou J, Chen L, Wang M. Comprehensive Management of Intracranial Aneurysms Using Artificial Intelligence: An Overview. World Neurosurg 2024; 193:209-221. [PMID: 39521404 DOI: 10.1016/j.wneu.2024.10.108] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2024] [Accepted: 10/25/2024] [Indexed: 11/16/2024]
Abstract
Intracranial aneurysms (IAs), often asymptomatic vascular lesions, are becoming increasingly common as imaging technology progresses. Subarachnoid hemorrhage from IA rupture entails a substantial risk of mortality or severe disability. The early detection and prompt intervention of IAs posing a high risk of rupture are paramount for optimizing clinical management and safeguarding patients' lives. Artificial intelligence (AI), with its exceptional capabilities in image-based tasks, has garnered significant scholarly interest worldwide. Its application in the management of IAs holds promise for advancing medical research and patient care. Utilizing deep learning algorithms, AI exhibits remarkable capabilities in precisely identifying and segmenting aneurysms, significantly enhancing diagnostic sensitivity and accuracy. Furthermore, AI can meticulously analyze extensive aneurysm datasets to forecast aneurysm growth, rupture hazards, and prognostic scenarios, offering clinicians invaluable assistance in decision-making. This article comprehensively examines the latest advancements in the utilization of AI in aneurysm treatment, encompassing detection and segmentation, rupture risk assessment, prediction of therapeutic outcomes, and facilitation of microcatheter shaping. The challenges and future directions of clinical AI deployment are also briefly discussed.
Affiliation(s)
- Jihao Xue
- Department of Neurosurgery, The Affiliated Hospital, Southwest Medical University, Luzhou, China
- Haowen Zheng
- Department of Neurosurgery, The Affiliated Hospital, Southwest Medical University, Luzhou, China
- Rui Lai
- Department of Neurosurgery, The Affiliated Hospital, Southwest Medical University, Luzhou, China
- Zhengjun Zhou
- Department of Neurosurgery, The Affiliated Hospital, Southwest Medical University, Luzhou, China
- Jie Zhou
- Department of Neurosurgery, The Affiliated Hospital, Southwest Medical University, Luzhou, China
- Ligang Chen
- Department of Neurosurgery, The Affiliated Hospital, Southwest Medical University, Luzhou, China
- Ming Wang
- Department of Neurosurgery, The Affiliated Hospital, Southwest Medical University, Luzhou, China.

15
Qin JJ, Gok M, Gholipour A, LoPilato J, Kirkby M, Poole C, Smith P, Grover R, Grieve SM. Four-Dimensional Flow MRI for Cardiovascular Evaluation (4DCarE): A Prospective Non-Inferiority Study of a Rapid Cardiac MRI Exam: Study Protocol and Pilot Analysis. Diagnostics (Basel) 2024; 14:2590. [PMID: 39594256 PMCID: PMC11593203 DOI: 10.3390/diagnostics14222590] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/10/2024] [Revised: 11/13/2024] [Accepted: 11/14/2024] [Indexed: 11/28/2024] Open
Abstract
BACKGROUND Accurate measurements of flow and ventricular volume and function are critical for clinical decision-making in cardiovascular medicine. Cardiac magnetic resonance (CMR) is the current gold standard for ventricular functional evaluation but is relatively expensive and time-consuming, thus limiting the scale of clinical applications. New volumetric acquisition techniques, such as four-dimensional flow (4D-flow) and three-dimensional volumetric cine (3D-cine) MRI, could potentially reduce acquisition time without loss in accuracy; however, this has not been formally tested on a large scale. METHODS 4DCarE (4D-flow MRI for cardiovascular evaluation) is a prospective, multi-centre study designed to test the non-inferiority of a compressed 20 min exam based on volumetric CMR compared with a conventional CMR exam (45-60 min). The compressed exam utilises 4D-flow together with a single breath-hold 3D-cine to provide a rapid, accurate quantitative assessment of whole heart function. Outcome measures are (i) flow and chamber volume measurements and (ii) overall functional evaluation. Secondary analyses will explore clinical applications of 4D-flow-derived parameters, including wall shear stress, flow kinetic energy quantification, and vortex analysis in large-scale cohorts. A target of 1200 participants will enter the study across three sites. The analysis will be performed at a single core laboratory site. PILOT RESULTS We present a pilot analysis of 196 participants comparing flow measurements obtained by 4D-flow and conventional 2D phase contrast (2D-PC), which demonstrated moderate-to-good consistency in ascending aorta and main pulmonary artery flow measurements between the two techniques. Four-dimensional flow underestimated flow compared with 2D-PC by approximately 3 mL/beat in both vessels. CONCLUSIONS We present the study protocol of a prospective non-inferiority study of a rapid cardiac MRI exam compared with conventional CMR.
The pilot analysis supports the continuation of the study. STUDY REGISTRATION This study is registered with the Australia and New Zealand Clinical Trials Registry (Registry number ACTRN12622000047796, Universal Trial Number: U1111-1270-6509, registered 17 January 2022-Retrospectively registered).
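The roughly 3 mL/beat underestimation quoted above is a Bland-Altman-style bias, i.e. the mean of paired differences between the two techniques. A minimal sketch with hypothetical paired stroke volumes (not the study's data):

```python
import statistics

def bias_and_loa(ref, test):
    """Bland-Altman-style agreement summary: mean difference (bias) and
    95% limits of agreement between two measurement techniques."""
    diffs = [t - r for r, t in zip(ref, test)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired per-beat flow volumes (mL/beat): 2D-PC reference vs 4D-flow
pc2d   = [80.0, 92.0, 75.0, 88.0, 101.0]
flow4d = [77.5, 88.0, 73.0, 85.5, 97.0]
bias, loa = bias_and_loa(pc2d, flow4d)
print(round(bias, 1))   # -3.0 (4D-flow reads lower than the reference)
```

The limits of agreement, not just the bias, determine whether the two techniques are interchangeable for an individual patient.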
Affiliation(s)
- Jiaxing Jason Qin
- Imaging and Phenotyping Laboratory, Charles Perkins Centre, University of Sydney, Sydney, NSW 2006, Australia
- School of Health Sciences, Faculty of Medicine and Health, University of Sydney, Sydney, NSW 2006, Australia
- Mustafa Gok
- Imaging and Phenotyping Laboratory, Charles Perkins Centre, University of Sydney, Sydney, NSW 2006, Australia
- School of Health Sciences, Faculty of Medicine and Health, University of Sydney, Sydney, NSW 2006, Australia
- Department of Radiology, Faculty of Medicine, Aydin Adnan Menderes University, Aydin 09010, Turkey
- Alireza Gholipour
- Imaging and Phenotyping Laboratory, Charles Perkins Centre, University of Sydney, Sydney, NSW 2006, Australia
- School of Health Sciences, Faculty of Medicine and Health, University of Sydney, Sydney, NSW 2006, Australia
- Jordan LoPilato
- ANU Medical School, Australian National University, Canberra, ACT 2601, Australia
- Max Kirkby
- Imaging and Phenotyping Laboratory, Charles Perkins Centre, University of Sydney, Sydney, NSW 2006, Australia
- School of Health Sciences, Faculty of Medicine and Health, University of Sydney, Sydney, NSW 2006, Australia
- Christopher Poole
- Imaging and Phenotyping Laboratory, Charles Perkins Centre, University of Sydney, Sydney, NSW 2006, Australia
- iCoreLab, North Sydney, NSW 2060, Australia
- Paul Smith
- Epworth Radiology, Waurn Ponds, VIC 3216, Australia
- Rominder Grover
- Macquarie University Hospital, Macquarie Park, NSW 2109, Australia
- Stuart M. Grieve
- Imaging and Phenotyping Laboratory, Charles Perkins Centre, University of Sydney, Sydney, NSW 2006, Australia
- School of Health Sciences, Faculty of Medicine and Health, University of Sydney, Sydney, NSW 2006, Australia
- Lumus Imaging, St George Private Hospital, Kogarah, NSW 2217, Australia

16
Gupta P, Hsu YC, Liang LL, Chu YC, Chu CS, Wu JL, Chen JA, Tseng WH, Yang YC, Lee TY, Hung CL, Wu CY. Automatic localization and deep convolutional generative adversarial network-based classification of focal liver lesions in computed tomography images: A preliminary study. J Gastroenterol Hepatol 2024. [PMID: 39542428 DOI: 10.1111/jgh.16803] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/19/2024] [Revised: 10/02/2024] [Accepted: 10/24/2024] [Indexed: 11/17/2024]
Abstract
BACKGROUND AND AIM Computed tomography of the abdomen exhibits subtle and complex features of liver lesions, subjectively interpreted by physicians. We developed a deep learning-based localization and classification (DLLC) system for focal liver lesions (FLLs) in computed tomography imaging that could assist physicians in more robust clinical decision-making. METHODS We conducted a retrospective study (approval no. EMRP-109-058) on 1589 patients, comprising 17,335 slices with 3195 FLLs, using data from January 2004 to December 2020. The training set included 1272 patients (male: 776, mean age 62 ± 10.9), and the test set included 317 patients (male: 228, mean age 57 ± 11.8). The slices were annotated by annotators with different experience levels, and the DLLC system was developed using generative adversarial networks for data augmentation. A comparative analysis was performed for the DLLC system versus physicians using external data. RESULTS Our DLLC system demonstrated a mean average precision of 0.81 for localization. The system's overall accuracy for multiclass classification was 0.97 (95% confidence interval [CI]: 0.95-0.99). For localization, the system achieved an accuracy of 0.83 (95% CI: 0.68-0.98) for FLLs ≤ 3 cm and 0.87 (95% CI: 0.77-0.97) for FLLs > 3 cm. For classification, the accuracy was 0.95 (95% CI: 0.92-0.98) for FLLs ≤ 3 cm and 0.97 (95% CI: 0.94-1.00) for FLLs > 3 cm. CONCLUSION This system can provide an accurate and non-invasive method for diagnosing liver conditions, making it a valuable tool for hepatologists and radiologists.
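Localization metrics such as mean average precision rest on an intersection-over-union (IoU) overlap criterion between predicted and ground-truth lesion boxes: a detection counts as correct only if its IoU exceeds a threshold. A minimal IoU sketch (box coordinates are illustrative):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2) --
    the overlap criterion underlying mean-average-precision (mAP)
    evaluation of a lesion detector."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)   # intersection corners
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union else 0.0

# Predicted vs ground-truth lesion box (pixel coordinates, illustrative)
print(iou((10, 10, 50, 50), (30, 30, 70, 70)))   # ≈ 0.143
```

Averaging precision over recall levels and over classes at a fixed IoU threshold (commonly 0.5) then yields the mAP figure reported.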
Affiliation(s)
- Pushpanjali Gupta
- Division of Translational Research, Taipei Veterans General Hospital, Taipei, Taiwan
- Health Innovation Center, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Institute of Biomedical Informatics, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Institute of Public Health, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Yao-Chun Hsu
- Division of Gastroenterology and Hepatology, E-DA Hospital, Kaohsiung, Taiwan
- School of Medicine, I-Shou University, Kaohsiung, Taiwan
- Li-Lin Liang
- Health Innovation Center, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Institute of Public Health, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Yuan-Chia Chu
- Information Management Office, Taipei Veterans General Hospital, Taipei, Taiwan
- Big Data Center, Taipei Veterans General Hospital, Taipei, Taiwan
- Department of Information Management, National Taipei University of Nursing and Health Sciences, Taipei, Taiwan
- Chia-Sheng Chu
- Ph.D. Program of Interdisciplinary Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Division of Gastroenterology and Hepatology, Taipei City Hospital Yang Ming Branch, Taipei, Taiwan
- Jaw-Liang Wu
- School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Jian-An Chen
- Institute of Biomedical Informatics, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Wei-Hsiu Tseng
- Institute of Biomedical Informatics, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Ya-Ching Yang
- Institute of Biomedical Informatics, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Teng-Yu Lee
- Division of Gastroenterology and Hepatology, Taichung Veterans General Hospital, Taichung, Taiwan
- School of Medicine, Chung Shan Medical University, Taichung, Taiwan
- Che-Lun Hung
- Health Innovation Center, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Institute of Biomedical Informatics, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Chun-Ying Wu
- Division of Translational Research, Taipei Veterans General Hospital, Taipei, Taiwan
- Health Innovation Center, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Institute of Biomedical Informatics, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Institute of Public Health, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Ph.D. Program of Interdisciplinary Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Department of Public Health, China Medical University, Taichung, Taiwan

17
Amasha AAH, Kasalak Ö, Glaudemans AWJM, Noordzij W, Dierckx RAJO, Koopmans KP, Kwee TC. Increased individual workload for nuclear medicine physicians over the past years: 2008-2023 data from The Netherlands. Ann Nucl Med 2024:10.1007/s12149-024-01995-5. [PMID: 39522080 DOI: 10.1007/s12149-024-01995-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2024] [Accepted: 10/25/2024] [Indexed: 11/16/2024]
Abstract
OBJECTIVE To investigate temporal trends in the individual workload of nuclear medicine physicians at a large tertiary care academic center between 2008 and 2023. METHODS This study analyzed the reporting workload of nuclear medicine physicians in a large tertiary care academic center in The Netherlands on 36 unique (randomly sampled) calendar days for each year between 2008 and 2023. The average daily departmental workload (measured with relative value units) was calculated for each year. The individual workload was calculated by dividing the average daily departmental workload in each year by the available full-time equivalent nuclear medicine physicians in that year. Mann-Kendall tests were used to assess temporal monotonic trends in individual workload and in the types of nuclear medicine procedures performed. RESULTS Individual workload increased significantly between 2008 and 2023 (Mann-Kendall tau of 0.611, P = 0.001); individual workload in 2023 was 86% higher than in 2008. The use of positron emission tomography (PET) increased significantly (Mann-Kendall tau of 0.912, P < 0.001) between 2008 and 2023, while the use of diagnostic scintigraphy decreased significantly in the same period (Mann-Kendall tau of -0.817, P < 0.001). The use of DEXA also showed a significant decrease (Mann-Kendall tau of -0.467, P = 0.013), but this decrease was negligible on a relative scale. The number of therapeutic procedures remained statistically stable in this period (Mann-Kendall tau of -0.100, P = 0.626). CONCLUSIONS Our single-center study showed that the individual workload of nuclear medicine physicians increased significantly between 2008 and 2023, driven by the rise in PET scans. The demand for both diagnostic and therapeutic nuclear medicine procedures, and the associated workload, is expected to keep increasing in the foreseeable future. This workload trend should be taken into account by policymakers involved in nuclear medicine staffing planning.
A healthy balance between the nuclear medicine workforce and workload is necessary to maintain the quality of care, to be able to perform other important (academic) tasks such as research, educating and training medical students and residents, and management, and to prevent physician burnout and dropout.
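The Mann-Kendall statistics quoted above test for a monotonic trend over time: the S statistic counts concordant minus discordant pairs, tau normalizes it, and a normal approximation gives the p-value. A self-contained sketch (no ties assumed; the yearly workload series is hypothetical, not the study's data):

```python
import math
from itertools import combinations

def mann_kendall(series):
    """Mann-Kendall trend test (no ties assumed): returns Kendall's tau and
    a two-sided p-value from the normal approximation with continuity
    correction."""
    n = len(series)
    s = sum((b > a) - (b < a) for a, b in combinations(series, 2))
    tau = s / (n * (n - 1) / 2)
    if s == 0:
        return tau, 1.0
    var_s = n * (n - 1) * (2 * n + 5) / 18
    z = (abs(s) - 1) / math.sqrt(var_s)          # continuity-corrected z-score
    p = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))
    return tau, p

# Hypothetical yearly workload index (relative value units per FTE)
workload = [100, 104, 110, 109, 118, 125, 131, 140, 152, 160]
tau, p = mann_kendall(workload)
print(round(tau, 3), p < 0.05)   # 0.956 True
```

Because the test is rank-based, it is insensitive to outliers and to the scale of the workload measure, which suits noisy year-to-year counts.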
Affiliation(s)
- Asaad A H Amasha
- Departments of Radiology and Nuclear Medicine and Molecular Imaging, Medical Imaging Center, University Medical Center Groningen, University of Groningen, Hanzeplein 1, P.O. Box 30.001, 9700 RB, Groningen, The Netherlands
- Ömer Kasalak
- Departments of Radiology and Nuclear Medicine and Molecular Imaging, Medical Imaging Center, University Medical Center Groningen, University of Groningen, Hanzeplein 1, P.O. Box 30.001, 9700 RB, Groningen, The Netherlands
- Andor W J M Glaudemans
- Departments of Radiology and Nuclear Medicine and Molecular Imaging, Medical Imaging Center, University Medical Center Groningen, University of Groningen, Hanzeplein 1, P.O. Box 30.001, 9700 RB, Groningen, The Netherlands
- Walter Noordzij
- Departments of Radiology and Nuclear Medicine and Molecular Imaging, Medical Imaging Center, University Medical Center Groningen, University of Groningen, Hanzeplein 1, P.O. Box 30.001, 9700 RB, Groningen, The Netherlands
- Rudi A J O Dierckx
- Departments of Radiology and Nuclear Medicine and Molecular Imaging, Medical Imaging Center, University Medical Center Groningen, University of Groningen, Hanzeplein 1, P.O. Box 30.001, 9700 RB, Groningen, The Netherlands
- Klaas-Pieter Koopmans
- Departments of Radiology and Nuclear Medicine and Molecular Imaging, Medical Imaging Center, University Medical Center Groningen, University of Groningen, Hanzeplein 1, P.O. Box 30.001, 9700 RB, Groningen, The Netherlands
- Thomas C Kwee
- Departments of Radiology and Nuclear Medicine and Molecular Imaging, Medical Imaging Center, University Medical Center Groningen, University of Groningen, Hanzeplein 1, P.O. Box 30.001, 9700 RB, Groningen, The Netherlands.

18
Suzuki H, Kokabu T, Yamada K, Ishikawa Y, Yabu A, Yanagihashi Y, Hyakumachi T, Tachi H, Shimizu T, Endo T, Ohnishi T, Ukeba D, Nagahama K, Takahata M, Sudo H, Iwasaki N. Deep learning-based detection of lumbar spinal canal stenosis using convolutional neural networks. Spine J 2024; 24:2086-2101. [PMID: 38909909 DOI: 10.1016/j.spinee.2024.06.009] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/28/2024] [Revised: 06/13/2024] [Accepted: 06/14/2024] [Indexed: 06/25/2024]
Abstract
BACKGROUND CONTEXT Lumbar spinal canal stenosis (LSCS) is the most common spinal degenerative disorder in elderly people and is usually first seen by primary care physicians or orthopedic surgeons who are not spine surgery specialists. Magnetic resonance imaging (MRI) is useful in the diagnosis of LSCS, but the equipment is often unavailable or the images difficult to read. LSCS patients with progressive neurologic deficits have difficulty with recovery if surgical treatment is delayed, so early diagnosis and determination of appropriate surgical indications are crucial in the treatment of LSCS. Convolutional neural networks (CNNs), a type of deep learning, offer significant advantages for image recognition and classification and work well with radiographs, which can be easily taken at any facility. PURPOSE Our purpose was to develop an algorithm to diagnose the presence or absence of LSCS requiring surgery from plain radiographs using CNNs. STUDY DESIGN Retrospective analysis of a consecutive, nonrandomized series of patients at a single institution. PATIENT SAMPLE Data of 150 patients who underwent surgery for LSCS, including degenerative spondylolisthesis, at a single institution from January 2022 to August 2022 were collected. Additionally, 25 patients who underwent surgery at 2 other hospitals were included for extra external validation. OUTCOME MEASURES In annotation 1, the area under the curve (AUC) computed from the receiver operating characteristic (ROC) curve, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), accuracy, positive likelihood ratio (PLR), and negative likelihood ratio (NLR) were calculated. In annotation 2, correlation coefficients were used. METHODS Four intervertebral levels (L1/2 to L4/5) were extracted as regions of interest from lateral plain lumbar spine radiographs, yielding a total of 600 images.
Based on the date of surgery, 500 images derived from the first 125 cases were used for internal validation, and 100 images from the subsequent 25 cases were used for external validation. Additionally, 100 images from the other hospitals were used for extra external validation. In annotation 1, binary classification of operative and nonoperative levels was used; in annotation 2, the spinal canal area measured on axial MRI was labeled as the output layer. For internal validation, the 500 images were divided into 5 datasets on a per-patient basis, and 5-fold cross-validation was performed. The five trained models were then applied to the external validation sets to assess prediction performance. Grad-CAM was used to visualize the areas with high feature activation extracted by the CNNs. RESULTS In internal validation, the AUC and accuracy for annotation 1 ranged between 0.85-0.89 and 79-83%, respectively, and the correlation coefficients for annotation 2 ranged between 0.53 and 0.64 (all p<.01). In external validation, the AUC and accuracy for annotation 1 were 0.90 and 82%, respectively, and the correlation coefficient for annotation 2 was 0.69, using the 5 trained CNN models. In the extra external validation, the AUC and accuracy for annotation 1 were 0.89 and 84%, respectively, and the correlation coefficient for annotation 2 was 0.56. Grad-CAM showed high feature density in the intervertebral joints and posterior intervertebral discs. CONCLUSIONS This technology automatically detects LSCS from plain lumbar spine radiographs, making it possible for medical facilities without MRI, or for nonspecialists, to diagnose LSCS, suggesting the possibility of eliminating delays in the diagnosis and treatment of LSCS cases that require early treatment.
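The per-patient 5-fold split described above can be sketched as a grouped partition that keeps all images from one patient inside a single fold, so the same patient never appears in both training and validation splits (the patient IDs and the 4-images-per-patient layout below are illustrative):

```python
from collections import defaultdict

def patient_wise_folds(patient_ids, n_folds=5):
    """Partition sample indices into folds on a per-patient basis, so all
    images from one patient land in the same fold and cannot leak between
    training and validation splits."""
    by_patient = defaultdict(list)
    for idx, pid in enumerate(patient_ids):
        by_patient[pid].append(idx)
    folds = [[] for _ in range(n_folds)]
    # Round-robin assignment of whole patients to folds.
    for k, pid in enumerate(sorted(by_patient)):
        folds[k % n_folds].extend(by_patient[pid])
    return folds

# 10 hypothetical patients, 4 radiograph levels each (as in the study design)
pids = [p for p in range(10) for _ in range(4)]
folds = patient_wise_folds(pids, n_folds=5)
print([len(f) for f in folds])   # [8, 8, 8, 8, 8]
```

Splitting by image rather than by patient would let near-identical levels from the same spine straddle the split, inflating the internal-validation AUC; grouping by patient avoids that.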
Affiliation(s)
- Hisataka Suzuki
- Department of Orthopaedic Surgery, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, N15W7, Sapporo, Hokkaido 060-8638, Japan; Department of Orthopaedic Surgery, Eniwa Hospital, 2-1-1 Kogane Chuo, Eniwa, Hokkaido 061-1449, Japan
- Terufumi Kokabu
- Department of Orthopaedic Surgery, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, N15W7, Sapporo, Hokkaido 060-8638, Japan; Department of Orthopaedic Surgery, Eniwa Hospital, 2-1-1 Kogane Chuo, Eniwa, Hokkaido 061-1449, Japan
- Katsuhisa Yamada
- Department of Orthopaedic Surgery, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, N15W7, Sapporo, Hokkaido 060-8638, Japan.
- Yoko Ishikawa
- Department of Orthopaedic Surgery, Eniwa Hospital, 2-1-1 Kogane Chuo, Eniwa, Hokkaido 061-1449, Japan
- Akito Yabu
- Department of Orthopaedic Surgery, Eniwa Hospital, 2-1-1 Kogane Chuo, Eniwa, Hokkaido 061-1449, Japan
- Yasushi Yanagihashi
- Department of Orthopaedic Surgery, Eniwa Hospital, 2-1-1 Kogane Chuo, Eniwa, Hokkaido 061-1449, Japan
- Takahiko Hyakumachi
- Department of Orthopaedic Surgery, Eniwa Hospital, 2-1-1 Kogane Chuo, Eniwa, Hokkaido 061-1449, Japan
- Hiroyuki Tachi
- Department of Orthopaedic Surgery, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, N15W7, Sapporo, Hokkaido 060-8638, Japan
- Tomohiro Shimizu
- Department of Orthopaedic Surgery, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, N15W7, Sapporo, Hokkaido 060-8638, Japan
- Tsutomu Endo
- Department of Orthopaedic Surgery, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, N15W7, Sapporo, Hokkaido 060-8638, Japan
- Takashi Ohnishi
- Department of Orthopaedic Surgery, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, N15W7, Sapporo, Hokkaido 060-8638, Japan
- Daisuke Ukeba
- Department of Orthopaedic Surgery, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, N15W7, Sapporo, Hokkaido 060-8638, Japan
- Ken Nagahama
- Department of Orthopaedic Surgery, Sapporo Endoscopic Spine Surgery, N16E16, Sapporo, Hokkaido 065-0016, Japan
- Masahiko Takahata
- Department of Orthopaedic Surgery, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, N15W7, Sapporo, Hokkaido 060-8638, Japan
- Hideki Sudo
- Department of Orthopaedic Surgery, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, N15W7, Sapporo, Hokkaido 060-8638, Japan
- Norimasa Iwasaki
- Department of Orthopaedic Surgery, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, N15W7, Sapporo, Hokkaido 060-8638, Japan

19
Zhao T, Wang B, Liang W, Cheng S, Wang B, Cui M, Shou J. Accuracy of 18F-FDG PET Imaging in Differentiating Parkinson's Disease from Atypical Parkinsonian Syndromes: A Systematic Review and Meta-Analysis. Acad Radiol 2024; 31:4575-4594. [PMID: 39183130 DOI: 10.1016/j.acra.2024.08.016] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2024] [Revised: 07/26/2024] [Accepted: 08/09/2024] [Indexed: 08/27/2024]
Abstract
RATIONALE AND OBJECTIVES To quantitatively assess the accuracy of 18F-FDG PET in differentiating Parkinson's Disease (PD) from Atypical Parkinsonian Syndromes (APSs). METHODS PubMed, Embase, and Web of Science databases were searched to identify studies published from the inception of the databases up to June 2024 that used 18F-FDG PET imaging for the differential diagnosis of PD and APSs. The risk of bias in the included studies was assessed using the QUADAS-2 or QUADAS-AI tool. Bivariate random-effects models were used to calculate the pooled sensitivity, specificity, and area under the summary receiver operating characteristic (SROC) curve (AUC). RESULTS Twenty-four studies met the inclusion criteria, involving a total of 1508 PD patients and 1370 APS patients. Twelve studies relied on visual interpretation by radiologists: the pooled sensitivity, specificity, and SROC-AUC for direct visual interpretation in diagnosing PD were 96% (95%CI: 91%, 98%), 90% (95%CI: 83%, 95%), and 0.98 (95%CI: 0.96, 0.99), respectively; for visual interpretation supported by univariate algorithms, they were 93% (95%CI: 90%, 95%), 90% (95%CI: 85%, 94%), and 0.96 (95%CI: 0.94, 0.97), respectively. Twelve studies relied on artificial intelligence (AI) to analyze 18F-FDG PET imaging data. The pooled sensitivity, specificity, and SROC-AUC of machine learning (ML) for diagnosing PD were 87% (95%CI: 82%, 91%), 91% (95%CI: 86%, 94%), and 0.95 (95%CI: 0.93, 0.96), respectively; those of deep learning (DL) were 97% (95%CI: 95%, 98%), 95% (95%CI: 89%, 98%), and 0.98 (95%CI: 0.96, 0.99), respectively.
CONCLUSION 18F-FDG PET has high accuracy in differentiating PD from APSs. AI-assisted automatic classification performs well, with diagnostic accuracy comparable to that of radiologists, and is expected to become an important auxiliary means of clinical diagnosis in the future.
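The meta-analysis above pools per-study sensitivities and specificities with bivariate random-effects models. As a much simpler illustration of what "pooling" a proportion across studies means, here is a fixed-effect, inverse-variance pooling on the logit scale; the per-study rates and sample sizes are hypothetical, not taken from the paper, and this sketch deliberately omits the between-study variance and sensitivity-specificity correlation that the bivariate model adds.

```python
import math

def pooled_logit(rates, ns):
    """Fixed-effect inverse-variance pooling of proportions on the logit
    scale -- a simplified stand-in for the bivariate random-effects model
    used in the meta-analysis. `rates` are per-study sensitivities (or
    specificities), `ns` the corresponding case counts."""
    logits, weights = [], []
    for p, n in zip(rates, ns):
        logits.append(math.log(p / (1 - p)))   # logit transform
        weights.append(n * p * (1 - p))        # inverse of var(logit) ~ 1/(n*p*(1-p))
    pooled = sum(w * l for w, l in zip(weights, logits)) / sum(weights)
    return 1 / (1 + math.exp(-pooled))         # back-transform to a proportion

# Hypothetical per-study sensitivities and positive-case counts
print(round(pooled_logit([0.93, 0.96, 0.90], [120, 80, 60]), 3))  # ≈ 0.929
```

Larger, better-balanced studies get more weight, so the pooled estimate sits between the individual rates rather than at their plain average.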
Affiliation(s)
- Tailiang Zhao
- Department of Neurosurgery, The Fifth Affiliated Hospital of Zhengzhou University, Zhengzhou 450052, Henan, China
- Bingbing Wang
- Department of Neurosurgery, The Fifth Affiliated Hospital of Zhengzhou University, Zhengzhou 450052, Henan, China
- Wei Liang
- Department of Neurosurgery, The Fifth Affiliated Hospital of Zhengzhou University, Zhengzhou 450052, Henan, China
- Sen Cheng
- Department of Neurosurgery, The Fifth Affiliated Hospital of Zhengzhou University, Zhengzhou 450052, Henan, China
- Bin Wang
- Department of Cardiology, Beijing Tsinghua Changgung Hospital, School of Clinical Medicine, Tsinghua University, Beijing 100000, China
- Ming Cui
- Department of Neurology, The Fifth Affiliated Hospital of Zhengzhou University, Zhengzhou 450052, Henan, China
- Jixin Shou
- Department of Neurosurgery, The Fifth Affiliated Hospital of Zhengzhou University, Zhengzhou 450052, Henan, China
20
D'Angelo T, Bucolo GM, Kamareddine T, Yel I, Koch V, Gruenewald LD, Martin S, Alizadeh LS, Mazziotti S, Blandino A, Vogl TJ, Booz C. Accuracy and time efficiency of a novel deep learning algorithm for intracranial hemorrhage detection in CT scans. Radiol Med 2024; 129:1499-1506. [PMID: 39123064 PMCID: PMC11480174 DOI: 10.1007/s11547-024-01867-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/01/2023] [Accepted: 08/01/2024] [Indexed: 08/12/2024]
Abstract
PURPOSE To evaluate a deep learning-based pipeline using a Dense-UNet architecture for the assessment of acute intracranial hemorrhage (ICH) on non-contrast computed tomography (NCCT) head scans after traumatic brain injury (TBI). MATERIALS AND METHODS This retrospective study was conducted using a prototype algorithm that evaluated 502 NCCT head scans with ICH in the context of TBI. Four board-certified radiologists evaluated the CT scans in consensus to establish the standard of reference for hemorrhage presence and type of ICH. Subsequently, all CT scans were independently analyzed by the algorithm and a board-certified radiologist to assess the presence and type of ICH. Additionally, the time to diagnosis was measured for both methods. RESULTS A total of 405/502 patients presented with ICH, classified into the following types: intraparenchymal (n = 172); intraventricular (n = 26); subarachnoid (n = 163); subdural (n = 178); and epidural (n = 15). The algorithm showed high diagnostic accuracy (91.24%) for the assessment of ICH, with a sensitivity of 90.37% and specificity of 94.85%. In distinguishing the different ICH types, the algorithm had a sensitivity of 93.47% and a specificity of 99.79%, with an accuracy of 98.54%. For midline shift, the algorithm had a sensitivity of 100%. In terms of processing time, the algorithm was significantly faster than the radiologist's time to first diagnosis (15.37 ± 1.85 vs 277 ± 14 s, p < 0.001). CONCLUSION A novel deep learning algorithm can provide high diagnostic accuracy for the identification and classification of ICH on unenhanced CT scans, combined with short processing times. This has the potential to assist and improve radiologists' ICH assessment in NCCT scans, especially in emergency scenarios, when time efficiency is needed.
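As a quick plausibility check on the figures above: overall accuracy is the prevalence-weighted average of sensitivity and specificity, and plugging in the abstract's own counts (405 ICH-positive and 97 ICH-negative scans) reproduces the reported 91.24%.

```python
# Overall accuracy as the prevalence-weighted average of sensitivity and
# specificity, using the counts reported in the abstract.
n_pos, n_neg = 405, 502 - 405      # ICH-positive / ICH-negative scans
sens, spec = 0.9037, 0.9485        # reported sensitivity and specificity

tp = sens * n_pos                  # expected true positives
tn = spec * n_neg                  # expected true negatives
accuracy = (tp + tn) / (n_pos + n_neg)
print(round(100 * accuracy, 2))    # 91.24, matching the reported accuracy
```

Because positives outnumber negatives roughly 4:1 here, the overall accuracy tracks sensitivity much more closely than specificity.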
Affiliation(s)
- Tommaso D'Angelo
- Diagnostic and Interventional Radiology Unit, BIOMORF Department, University of Messina, Messina, Italy
- Department of Radiology and Nuclear Medicine, Erasmus MC, 3015 GD, Rotterdam, The Netherlands
- Giuseppe M Bucolo
- Diagnostic and Interventional Radiology Unit, BIOMORF Department, University of Messina, Messina, Italy
- Division of Experimental Imaging, Department of Diagnostic and Interventional Radiology, University Hospital Frankfurt, Frankfurt am Main, Germany
- Tarek Kamareddine
- Division of Experimental Imaging, Department of Diagnostic and Interventional Radiology, University Hospital Frankfurt, Frankfurt am Main, Germany
- Department of Diagnostic and Interventional Radiology, University Hospital Frankfurt, Frankfurt am Main, Germany
- Ibrahim Yel
- Division of Experimental Imaging, Department of Diagnostic and Interventional Radiology, University Hospital Frankfurt, Frankfurt am Main, Germany
- Department of Diagnostic and Interventional Radiology, University Hospital Frankfurt, Frankfurt am Main, Germany
- Vitali Koch
- Division of Experimental Imaging, Department of Diagnostic and Interventional Radiology, University Hospital Frankfurt, Frankfurt am Main, Germany
- Department of Diagnostic and Interventional Radiology, University Hospital Frankfurt, Frankfurt am Main, Germany
- Leon D Gruenewald
- Division of Experimental Imaging, Department of Diagnostic and Interventional Radiology, University Hospital Frankfurt, Frankfurt am Main, Germany
- Department of Diagnostic and Interventional Radiology, University Hospital Frankfurt, Frankfurt am Main, Germany
- Simon Martin
- Division of Experimental Imaging, Department of Diagnostic and Interventional Radiology, University Hospital Frankfurt, Frankfurt am Main, Germany
- Department of Diagnostic and Interventional Radiology, University Hospital Frankfurt, Frankfurt am Main, Germany
- Leona S Alizadeh
- Division of Experimental Imaging, Department of Diagnostic and Interventional Radiology, University Hospital Frankfurt, Frankfurt am Main, Germany
- Department of Diagnostic and Interventional Radiology, University Hospital Frankfurt, Frankfurt am Main, Germany
- Department of Diagnostic and Interventional Radiology and Neuroradiology, Bundeswehr Central Hospital Koblenz, Koblenz, Germany
- Silvio Mazziotti
- Diagnostic and Interventional Radiology Unit, BIOMORF Department, University of Messina, Messina, Italy
- Alfredo Blandino
- Diagnostic and Interventional Radiology Unit, BIOMORF Department, University of Messina, Messina, Italy
- Thomas J Vogl
- Division of Experimental Imaging, Department of Diagnostic and Interventional Radiology, University Hospital Frankfurt, Frankfurt am Main, Germany
- Department of Diagnostic and Interventional Radiology, University Hospital Frankfurt, Frankfurt am Main, Germany
- Christian Booz
- Division of Experimental Imaging, Department of Diagnostic and Interventional Radiology, University Hospital Frankfurt, Frankfurt am Main, Germany
- Department of Diagnostic and Interventional Radiology, University Hospital Frankfurt, Frankfurt am Main, Germany
21
Thakore NL, Lan M, Winkel AF, Vieira DL, Kang SK. Best Practices: Burnout Is More Than Binary. AJR Am J Roentgenol 2024; 223:e2431111. [PMID: 39016454 DOI: 10.2214/ajr.24.31111] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 07/18/2024]
Abstract
Burnout among radiologists is increasingly prevalent, with potentially substantial negative effects on physician well-being, delivery of care, and health outcomes. To evaluate this phenomenon reliably and accurately, validated quantitative instruments are essential, and variation in measurement can contribute to wide-ranging findings. This article evaluates radiologist burnout rates globally and the dimensions of burnout as reported using different validated instruments; it also provides guidance on best practices to characterize burnout. Fifty-seven studies published between 1990 and 2023 were included in a systematic review, and 43 studies were included in a meta-analysis of burnout prevalence using random-effects models. The reported burnout prevalence ranged from 5% to 85%. With the Maslach Burnout Inventory (MBI), burnout prevalence varied significantly depending on the instrument version used. Among MBI subcategories, the pooled prevalence of emotional exhaustion was 54% (95% CI, 45-63%), depersonalization was 52% (95% CI, 41-63%), and low personal accomplishment was 36% (95% CI, 27-47%). Other validated burnout instruments showed less heterogeneous results: studies using the Stanford Professional Fulfillment Index yielded a burnout prevalence of 39% (95% CI, 34-45%), whereas the validated single-item instrument yielded a burnout prevalence of 34% (95% CI, 29-39%). Standardized instruments for assessing prevalence, alongside multidimensional profiles capturing radiologists' experiences, may better characterize burnout, including change over time.
Affiliation(s)
- Michael Lan
- NYU Grossman School of Medicine, New York, NY
- Dorice L Vieira
- Health Sciences Library, NYU Grossman School of Medicine, New York, NY
- Stella K Kang
- Department of Radiology, NYU Grossman School of Medicine, NYU Langone Health, 550 First Ave, New York, NY 10016
- Department of Population Health, NYU Grossman School of Medicine, NYU Langone Health, New York, NY
22
Vanderbecq Q, Gelard M, Pesquet JC, Wagner M, Arrive L, Zins M, Chouzenoux E. Deep learning for automatic bowel-obstruction identification on abdominal CT. Eur Radiol 2024; 34:5842-5853. [PMID: 38388719 DOI: 10.1007/s00330-024-10657-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/21/2023] [Revised: 01/03/2024] [Accepted: 01/23/2024] [Indexed: 02/24/2024]
Abstract
RATIONALE AND OBJECTIVES Automated evaluation of abdominal computed tomography (CT) scans should help radiologists manage their massive workloads, thereby leading to earlier diagnoses and better patient outcomes. Our objective was to develop a machine-learning model capable of reliably identifying suspected bowel obstruction (BO) on abdominal CT. MATERIALS AND METHODS The internal dataset comprised 1345 abdominal CTs obtained in 2015-2022 from 1273 patients with suspected BO; among them, 670 were annotated as BO yes/no by an experienced abdominal radiologist. The external dataset consisted of 88 radiologist-annotated CTs. We developed a full preprocessing pipeline for abdominal CT comprising a model to locate the abdominal-pelvic region and another model to crop the 3D scan around the body. We built, trained, and tested several neural-network architectures for the binary classification (BO, yes/no) of each CT. F1 and balanced accuracy scores were computed to assess model performance. RESULTS The mixed convolutional network pretrained on the Kinetics-400 dataset achieved the best results: with the internal dataset, the F1 score was 0.92, balanced accuracy 0.86, and sensitivity 0.93; with the external dataset, the corresponding values were 0.89, 0.89, and 0.89. When calibrated for sensitivity, this model produced 1.00 sensitivity, 0.84 specificity, and an F1 score of 0.88 with the internal dataset; corresponding values were 0.98, 0.76, and 0.87 with the external dataset. CONCLUSION The 3D mixed convolutional neural network developed here shows great potential for the automated binary classification (BO yes/no) of abdominal CT scans from patients with suspected BO. CLINICAL RELEVANCE STATEMENT The 3D mixed CNN automates bowel-obstruction classification, potentially streamlining patient selection and CT prioritization for an enhanced radiologist workflow. KEY POINTS
• Bowel obstruction's rising incidence strains radiologists; AI can aid urgent CT readings.
• 1345 CT scans and several neural networks were employed for bowel-obstruction detection, achieving high accuracy and sensitivity on external testing.
• The 3D mixed CNN effectively automates CT reading prioritization and speeds up bowel-obstruction diagnosis.
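The two headline metrics in this study, F1 score and balanced accuracy, are simple functions of the binary confusion matrix. A minimal sketch, with made-up counts rather than the study's data:

```python
def classification_metrics(tp, fp, fn, tn):
    """F1 score and balanced accuracy from a binary confusion matrix."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                    # sensitivity
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    balanced_accuracy = (recall + specificity) / 2
    return f1, balanced_accuracy

# Hypothetical counts; balanced accuracy averages the per-class recalls,
# so unlike plain accuracy it is not inflated by class imbalance.
f1, bacc = classification_metrics(tp=93, fp=15, fn=7, tn=55)
```

Reporting both, as the authors do, covers the positive-class view (F1) and the class-balanced view (balanced accuracy) of the same classifier.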
Affiliation(s)
- Quentin Vanderbecq
- Department of Radiology, AP-HP.Sorbonne, Saint Antoine Hospital, 184 Rue du Faubourg Saint-Antoine, 75012, Paris, France
- UMR 7371, Sorbonne Université, CNRS, Inserm U1146, 15 rue de l'École de Médecine, 75006, Paris, France
- Maxence Gelard
- Université Paris-Saclay, CentraleSupélec, Inria, CVN, Gif-sur-Yvette, France
- Mathilde Wagner
- UMR 7371, Sorbonne Université, CNRS, Inserm U1146, 15 rue de l'École de Médecine, 75006, Paris, France
- Department of Radiology, Hospital Pitié Salpêtrière, 47-83 Bd de l'Hôpital, 75013 Paris, Île-de-France, France
- Lionel Arrive
- Department of Radiology, AP-HP.Sorbonne, Saint Antoine Hospital, 184 Rue du Faubourg Saint-Antoine, 75012, Paris, France
- Marc Zins
- Department of Radiology, Hospital Paris Saint-Joseph, 185 Rue Raymond Losserand, 75014, Paris, Île-de-France, France
- Emilie Chouzenoux
- Université Paris-Saclay, CentraleSupélec, Inria, CVN, Gif-sur-Yvette, France
23
Del Gaizo AJ, Osborne TF, Shahoumian T, Sherrier R. Deep Learning to Detect Intracranial Hemorrhage in a National Teleradiology Program and the Impact on Interpretation Time. Radiol Artif Intell 2024; 6:e240067. [PMID: 39017032 PMCID: PMC11427938 DOI: 10.1148/ryai.240067] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2024] [Revised: 06/11/2024] [Accepted: 06/25/2024] [Indexed: 07/18/2024]
Abstract
The diagnostic performance of an artificial intelligence (AI) clinical decision support solution for acute intracranial hemorrhage (ICH) detection was assessed in a large teleradiology practice. The impact on radiologist read times and system efficiency was also quantified. A total of 61,704 consecutive noncontrast head CT examinations were retrospectively evaluated. System performance was calculated along with mean and median read times for CT studies obtained before (baseline, pre-AI period; August 2021 to May 2022) and after (post-AI period; January 2023 to February 2024) AI implementation. The AI solution had a sensitivity of 75.6%, specificity of 92.1%, and accuracy of 91.7%; at an ICH prevalence of 2.70%, the positive predictive value was 21.1%. Of the 56,745 post-AI CT scans with no bleed identified by a radiologist, examinations falsely flagged as suspected ICH by the AI solution (n = 4464) took an average of 9 minutes 40 seconds (median, 8 minutes 7 seconds) to interpret, compared with 8 minutes 25 seconds (median, 6 minutes 48 seconds) for unremarkable CT scans before AI (n = 49,007) (P < .001) and 8 minutes 38 seconds (median, 6 minutes 53 seconds) after AI when ICH was not suspected by the AI solution (n = 52,281) (P < .001). CT scans with no bleed identified by the AI but reported as positive for ICH by the radiologist (n = 384) took an average of 14 minutes 23 seconds (median, 13 minutes 35 seconds) to interpret, compared with 13 minutes 34 seconds (median, 12 minutes 30 seconds) for CT scans correctly reported as a bleed by the AI (n = 1192) (P = .04). With lengthened read times for falsely flagged examinations, system inefficiencies may outweigh the potential benefits of using the tool in a high-volume, low-prevalence environment. Keywords: Artificial Intelligence, Intracranial Hemorrhage, Read Time, Report Turnaround Time, System Efficiency Supplemental material is available for this article. © RSNA, 2024.
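The low positive predictive value despite decent sensitivity and specificity follows directly from Bayes' rule at 2.70% prevalence, and can be checked from the abstract's own numbers:

```python
# PPV from sensitivity, specificity, and prevalence (Bayes' rule).
# With the abstract's values, PPV is low purely because prevalence is low:
# most positive flags come from the large pool of true negatives.
sens, spec, prev = 0.756, 0.921, 0.027

ppv = (sens * prev) / (sens * prev + (1 - spec) * (1 - prev))
print(round(100 * ppv, 1))   # ~21.0, in line with the reported 21.1%
```

Roughly four out of five AI flags are false positives here, which is why the extra read time on falsely flagged scans dominates the workflow impact.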
Affiliation(s)
- Andrew James Del Gaizo
- From the VA National Teleradiology Program, 795 Willow Rd, Bldg 3342, Menlo Park, CA 94025 (A.J.D.G., R.S.); VA Palo Alto Health Care System, Palo Alto, Calif (T.F.O.); Department of Radiology, Stanford University School of Medicine, Stanford, Calif (T.F.O.); and VA Health Solutions, Patient Care Services, Washington, DC (T.S.)
- Thomas F. Osborne
- From the VA National Teleradiology Program, 795 Willow Rd, Bldg 3342, Menlo Park, CA 94025 (A.J.D.G., R.S.); VA Palo Alto Health Care System, Palo Alto, Calif (T.F.O.); Department of Radiology, Stanford University School of Medicine, Stanford, Calif (T.F.O.); and VA Health Solutions, Patient Care Services, Washington, DC (T.S.)
- Troy Shahoumian
- From the VA National Teleradiology Program, 795 Willow Rd, Bldg 3342, Menlo Park, CA 94025 (A.J.D.G., R.S.); VA Palo Alto Health Care System, Palo Alto, Calif (T.F.O.); Department of Radiology, Stanford University School of Medicine, Stanford, Calif (T.F.O.); and VA Health Solutions, Patient Care Services, Washington, DC (T.S.)
- Robert Sherrier
- From the VA National Teleradiology Program, 795 Willow Rd, Bldg 3342, Menlo Park, CA 94025 (A.J.D.G., R.S.); VA Palo Alto Health Care System, Palo Alto, Calif (T.F.O.); Department of Radiology, Stanford University School of Medicine, Stanford, Calif (T.F.O.); and VA Health Solutions, Patient Care Services, Washington, DC (T.S.)
24
Hembroff G, Klochko C, Craig J, Changarnkothapeecherikkal H, Loi RQ. Improved Automated Quality Control of Skeletal Wrist Radiographs Using Deep Multitask Learning. J Imaging Inform Med 2024:10.1007/s10278-024-01220-9. [PMID: 39187704 DOI: 10.1007/s10278-024-01220-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/06/2024] [Revised: 07/17/2024] [Accepted: 07/29/2024] [Indexed: 08/28/2024]
Abstract
Radiographic quality control is an integral component of the radiology workflow. In this study, we developed a convolutional neural network model tailored for automated quality control, specifically designed to detect and classify key attributes of wrist radiographs, including projection, laterality (based on the right/left marker), and the presence of hardware and/or casts. The model's primary objective was to ensure the congruence of results with image requisition metadata in order to pass the quality assessment. Using a dataset of 6283 wrist radiographs from 2591 patients, our multitask deep learning model based on the DenseNet-121 architecture achieved high accuracy in classifying projections (F1 score of 97.23%), detecting casts (F1 score of 97.70%), and identifying surgical hardware (F1 score of 92.27%). The model's performance in laterality marker detection was lower (F1 score of 82.52%), particularly for partially visible or cut-off markers. This paper presents a comprehensive evaluation of our model's performance, highlighting its strengths, limitations, and the challenges encountered during its development and implementation. Furthermore, we outline planned future research directions aimed at refining and expanding the model's capabilities for improved clinical utility and patient care in radiographic quality control.
Affiliation(s)
- Guy Hembroff
- Department of Applied Computing, Michigan Technological University, 1400 Townsend Drive, Houghton, MI, 49931, USA
- Chad Klochko
- Department of Radiology, Division of Musculoskeletal Radiology, Henry Ford Hospital, 2799 West Grand Boulevard, Detroit, MI, 48202, USA
- Joseph Craig
- Department of Radiology, Division of Musculoskeletal Radiology, Henry Ford Hospital, 2799 West Grand Boulevard, Detroit, MI, 48202, USA
- Richard Q Loi
- Department of Radiology, Division of Musculoskeletal Radiology, Henry Ford Hospital, 2799 West Grand Boulevard, Detroit, MI, 48202, USA
25
Plesner LL, Müller FC, Brejnebøl MW, Krag CH, Laustrup LC, Rasmussen F, Nielsen OW, Boesen M, Andersen MB. Using AI to Identify Unremarkable Chest Radiographs for Automatic Reporting. Radiology 2024; 312:e240272. [PMID: 39162628 DOI: 10.1148/radiol.240272] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 08/21/2024]
Abstract
Background Radiology practices have a high volume of unremarkable chest radiographs, and artificial intelligence (AI) could improve workflow by providing an automatic report. Purpose To estimate the proportion of unremarkable chest radiographs where AI can correctly exclude pathology (ie, specificity) without increasing diagnostic errors. Materials and Methods In this retrospective study, consecutive chest radiographs in unique adult patients (≥18 years of age) were obtained January 1-12, 2020, at four Danish hospitals. Exclusion criteria included insufficient radiology reports or AI output error. Two thoracic radiologists, who were blinded to AI output, labeled chest radiographs as "remarkable" or "unremarkable" based on predefined unremarkable findings (reference standard). Radiology reports were classified similarly. A commercial AI tool was adapted to output a chest radiograph "remarkableness" probability, which was used to calculate specificity at different AI sensitivities. Chest radiographs with findings missed by the AI and/or the radiology report were graded by one thoracic radiologist as critical, clinically significant, or clinically insignificant. Paired proportions were compared using the McNemar test. Results A total of 1961 patients were included (median age, 72 years [IQR, 58-81 years]; 993 female), with one chest radiograph per patient. The reference standard labeled 1231 of 1961 chest radiographs (62.8%) as remarkable and 730 of 1961 (37.2%) as unremarkable. At 99.9%, 99.0%, and 98.0% sensitivity, the AI had a specificity of 24.5% (179 of 730 radiographs [95% CI: 21, 28]), 47.1% (344 of 730 radiographs [95% CI: 43, 51]), and 52.7% (385 of 730 radiographs [95% CI: 49, 56]), respectively. With the AI fixed at a sensitivity similar to that of the radiology reports (87.2%), findings missed by the AI vs the reports were classified as critical in 2.2% (27 of 1231 radiographs) vs 1.1% (14 of 1231 radiographs) (P = .01), clinically significant in 4.1% (51 of 1231) vs 3.6% (44 of 1231) (P = .46), and clinically insignificant in 6.5% (80 of 1231) vs 8.1% (100 of 1231) (P = .11). At sensitivities of 95.4% or greater, the AI tool exhibited 1.1% or fewer critical misses. Conclusion A commercial AI tool used off-label could correctly exclude pathology in 24.5%-52.7% of all unremarkable chest radiographs at sensitivities of 98% or greater. The AI had equal or lower rates of critical misses than radiology reports at sensitivities of 95.4% or greater. These results should be confirmed in a prospective study. © RSNA, 2024 Supplemental material is available for this article. See also the editorial by Yoon and Hwang in this issue.
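The "specificity at 99.9/99.0/98.0% sensitivity" figures come from sweeping a decision threshold over the AI's remarkableness probability: pick the highest threshold that still meets the target sensitivity, then read off the specificity there. A minimal sketch of that operating-point selection, using synthetic scores rather than the study's outputs:

```python
def specificity_at_sensitivity(pos_scores, neg_scores, target_sens):
    """Return the highest threshold whose sensitivity still meets
    `target_sens`, plus the sensitivity and specificity achieved there.
    Scores are probabilities that a radiograph is 'remarkable'."""
    for t in sorted(set(pos_scores + neg_scores), reverse=True):
        sens = sum(s >= t for s in pos_scores) / len(pos_scores)
        if sens >= target_sens:                       # first (highest) threshold
            spec = sum(s < t for s in neg_scores) / len(neg_scores)
            return t, sens, spec
    return None

# Synthetic probabilities for remarkable (positive) and unremarkable scans
threshold, sens, spec = specificity_at_sensitivity(
    [0.9, 0.8, 0.7, 0.3], [0.6, 0.4, 0.2, 0.1], target_sens=0.75)
```

Lowering the sensitivity target raises the admissible threshold, which is exactly why specificity climbs from 24.5% to 52.7% as the target drops from 99.9% to 98.0%.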
Affiliation(s)
- Louis Lind Plesner
- From the Department of Radiology, Herlev and Gentofte Hospital, Borgmester Ib Juuls Vej 1, Herlev, Copenhagen 2730, Denmark (L.L.P., F.C.M., M.W.B., C.H.K., L.C.L., M.B.A.); Faculty of Health Sciences, University of Copenhagen, Copenhagen, Denmark (L.L.P., M.W.B., C.H.K., M.B., M.B.A.); Radiological Artificial Intelligence Testcenter, RAIT.dk, Herlev, Denmark (L.L.P., F.C.M., M.W.B., C.H.K., M.B., M.B.A.); Department of Radiology, Bispebjerg and Frederiksberg Hospital, Copenhagen, Denmark (M.W.B., M.B.); Department of Radiology, Aarhus University Hospital, Aarhus, Denmark (F.R.); and Department of Cardiology, Bispebjerg and Frederiksberg Hospital, Copenhagen, Denmark (O.W.N.)
- Felix C Müller
- From the Department of Radiology, Herlev and Gentofte Hospital, Borgmester Ib Juuls Vej 1, Herlev, Copenhagen 2730, Denmark (L.L.P., F.C.M., M.W.B., C.H.K., L.C.L., M.B.A.); Faculty of Health Sciences, University of Copenhagen, Copenhagen, Denmark (L.L.P., M.W.B., C.H.K., M.B., M.B.A.); Radiological Artificial Intelligence Testcenter, RAIT.dk, Herlev, Denmark (L.L.P., F.C.M., M.W.B., C.H.K., M.B., M.B.A.); Department of Radiology, Bispebjerg and Frederiksberg Hospital, Copenhagen, Denmark (M.W.B., M.B.); Department of Radiology, Aarhus University Hospital, Aarhus, Denmark (F.R.); and Department of Cardiology, Bispebjerg and Frederiksberg Hospital, Copenhagen, Denmark (O.W.N.)
- Mathias W Brejnebøl
- From the Department of Radiology, Herlev and Gentofte Hospital, Borgmester Ib Juuls Vej 1, Herlev, Copenhagen 2730, Denmark (L.L.P., F.C.M., M.W.B., C.H.K., L.C.L., M.B.A.); Faculty of Health Sciences, University of Copenhagen, Copenhagen, Denmark (L.L.P., M.W.B., C.H.K., M.B., M.B.A.); Radiological Artificial Intelligence Testcenter, RAIT.dk, Herlev, Denmark (L.L.P., F.C.M., M.W.B., C.H.K., M.B., M.B.A.); Department of Radiology, Bispebjerg and Frederiksberg Hospital, Copenhagen, Denmark (M.W.B., M.B.); Department of Radiology, Aarhus University Hospital, Aarhus, Denmark (F.R.); and Department of Cardiology, Bispebjerg and Frederiksberg Hospital, Copenhagen, Denmark (O.W.N.)
- Christian Hedeager Krag
- From the Department of Radiology, Herlev and Gentofte Hospital, Borgmester Ib Juuls Vej 1, Herlev, Copenhagen 2730, Denmark (L.L.P., F.C.M., M.W.B., C.H.K., L.C.L., M.B.A.); Faculty of Health Sciences, University of Copenhagen, Copenhagen, Denmark (L.L.P., M.W.B., C.H.K., M.B., M.B.A.); Radiological Artificial Intelligence Testcenter, RAIT.dk, Herlev, Denmark (L.L.P., F.C.M., M.W.B., C.H.K., M.B., M.B.A.); Department of Radiology, Bispebjerg and Frederiksberg Hospital, Copenhagen, Denmark (M.W.B., M.B.); Department of Radiology, Aarhus University Hospital, Aarhus, Denmark (F.R.); and Department of Cardiology, Bispebjerg and Frederiksberg Hospital, Copenhagen, Denmark (O.W.N.)
- Lene C Laustrup
- From the Department of Radiology, Herlev and Gentofte Hospital, Borgmester Ib Juuls Vej 1, Herlev, Copenhagen 2730, Denmark (L.L.P., F.C.M., M.W.B., C.H.K., L.C.L., M.B.A.); Faculty of Health Sciences, University of Copenhagen, Copenhagen, Denmark (L.L.P., M.W.B., C.H.K., M.B., M.B.A.); Radiological Artificial Intelligence Testcenter, RAIT.dk, Herlev, Denmark (L.L.P., F.C.M., M.W.B., C.H.K., M.B., M.B.A.); Department of Radiology, Bispebjerg and Frederiksberg Hospital, Copenhagen, Denmark (M.W.B., M.B.); Department of Radiology, Aarhus University Hospital, Aarhus, Denmark (F.R.); and Department of Cardiology, Bispebjerg and Frederiksberg Hospital, Copenhagen, Denmark (O.W.N.)
- Finn Rasmussen
- From the Department of Radiology, Herlev and Gentofte Hospital, Borgmester Ib Juuls Vej 1, Herlev, Copenhagen 2730, Denmark (L.L.P., F.C.M., M.W.B., C.H.K., L.C.L., M.B.A.); Faculty of Health Sciences, University of Copenhagen, Copenhagen, Denmark (L.L.P., M.W.B., C.H.K., M.B., M.B.A.); Radiological Artificial Intelligence Testcenter, RAIT.dk, Herlev, Denmark (L.L.P., F.C.M., M.W.B., C.H.K., M.B., M.B.A.); Department of Radiology, Bispebjerg and Frederiksberg Hospital, Copenhagen, Denmark (M.W.B., M.B.); Department of Radiology, Aarhus University Hospital, Aarhus, Denmark (F.R.); and Department of Cardiology, Bispebjerg and Frederiksberg Hospital, Copenhagen, Denmark (O.W.N.)
- Olav Wendelboe Nielsen
- From the Department of Radiology, Herlev and Gentofte Hospital, Borgmester Ib Juuls Vej 1, Herlev, Copenhagen 2730, Denmark (L.L.P., F.C.M., M.W.B., C.H.K., L.C.L., M.B.A.); Faculty of Health Sciences, University of Copenhagen, Copenhagen, Denmark (L.L.P., M.W.B., C.H.K., M.B., M.B.A.); Radiological Artificial Intelligence Testcenter, RAIT.dk, Herlev, Denmark (L.L.P., F.C.M., M.W.B., C.H.K., M.B., M.B.A.); Department of Radiology, Bispebjerg and Frederiksberg Hospital, Copenhagen, Denmark (M.W.B., M.B.); Department of Radiology, Aarhus University Hospital, Aarhus, Denmark (F.R.); and Department of Cardiology, Bispebjerg and Frederiksberg Hospital, Copenhagen, Denmark (O.W.N.)
- Mikael Boesen
- From the Department of Radiology, Herlev and Gentofte Hospital, Borgmester Ib Juuls Vej 1, Herlev, Copenhagen 2730, Denmark (L.L.P., F.C.M., M.W.B., C.H.K., L.C.L., M.B.A.); Faculty of Health Sciences, University of Copenhagen, Copenhagen, Denmark (L.L.P., M.W.B., C.H.K., M.B., M.B.A.); Radiological Artificial Intelligence Testcenter, RAIT.dk, Herlev, Denmark (L.L.P., F.C.M., M.W.B., C.H.K., M.B., M.B.A.); Department of Radiology, Bispebjerg and Frederiksberg Hospital, Copenhagen, Denmark (M.W.B., M.B.); Department of Radiology, Aarhus University Hospital, Aarhus, Denmark (F.R.); and Department of Cardiology, Bispebjerg and Frederiksberg Hospital, Copenhagen, Denmark (O.W.N.)
- Michael B Andersen
- From the Department of Radiology, Herlev and Gentofte Hospital, Borgmester Ib Juuls Vej 1, Herlev, Copenhagen 2730, Denmark (L.L.P., F.C.M., M.W.B., C.H.K., L.C.L., M.B.A.); Faculty of Health Sciences, University of Copenhagen, Copenhagen, Denmark (L.L.P., M.W.B., C.H.K., M.B., M.B.A.); Radiological Artificial Intelligence Testcenter, RAIT.dk, Herlev, Denmark (L.L.P., F.C.M., M.W.B., C.H.K., M.B., M.B.A.); Department of Radiology, Bispebjerg and Frederiksberg Hospital, Copenhagen, Denmark (M.W.B., M.B.); Department of Radiology, Aarhus University Hospital, Aarhus, Denmark (F.R.); and Department of Cardiology, Bispebjerg and Frederiksberg Hospital, Copenhagen, Denmark (O.W.N.)
26
Troupis CJ, Knight RAH, Lau KKP. What is the appropriate measure of radiology workload: Study or image numbers? J Med Imaging Radiat Oncol 2024; 68:530-539. [PMID: 38837555 DOI: 10.1111/1754-9485.13713] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2024] [Accepted: 05/15/2024] [Indexed: 06/07/2024]
Abstract
INTRODUCTION Previous studies assessing the volume of radiological studies rarely considered the corresponding number of images. We aimed to quantify the increases in study and image numbers per radiologist in a tertiary healthcare network to better understand the demands on imaging services. METHODS Using the Picture Archiving and Communication System (PACS), the number of images per study was obtained for all diagnostic studies reported by in-house radiologists at a tertiary healthcare network in Melbourne, Australia, between January 2009 and December 2022. Payroll data were used to obtain the numbers of full-time equivalent radiologists. RESULTS Across all modalities, there were 4,462,702 diagnostic studies and 1,116,311,209 images. The number of monthly studies increased from 17,235 to 35,152 (104%) over the study period. The number of monthly images increased from 1,120,832 to 13,353,056 (1091%), with computed tomography (CT) showing the greatest absolute increase of 9,395,653 images per month (1476%). There was no increase in the monthly studies per full-time equivalent radiologist; however, the number of monthly image slices per radiologist increased 399%, from 48,781 to 243,518 (Kendall tau correlation coefficient 0.830, P-value < 0.0001). CONCLUSION The number of monthly images per radiologist increased substantially from 2009 to 2022, despite a relatively constant number of monthly studies per radiologist. Our study suggests that using the number of studies as an isolated measure underestimates radiologists' true workload. We propose that the volume of images examined by individual radiologists may more appropriately reflect true work demand and may add more weight to future workforce planning.
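The trend statistic reported here is a Kendall rank correlation. A minimal pure-Python version (tau-a, which ignores ties, so it is a simplification of the tie-corrected variants statistical packages compute) with illustrative, made-up monthly counts:

```python
def kendall_tau(x, y):
    """Kendall tau-a: (concordant - discordant) pairs over all pairs.
    Tied pairs count as neither, which is fine for a strictly
    increasing time axis like consecutive months."""
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])   # same sign => concordant
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Illustrative monthly image counts (thousands) over six months, not real data
months = [1, 2, 3, 4, 5, 6]
images = [48, 60, 55, 90, 120, 150]
tau = kendall_tau(months, images)   # strong positive trend despite one dip
```

Because it only compares pair orderings, Kendall tau captures a monotonic rise in workload without assuming the growth is linear.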
Affiliation(s)
- Christopher John Troupis
- The Royal Melbourne Hospital, Parkville, Victoria, Australia
- Faculty of Medicine, Dentistry and Health Sciences, The University of Melbourne, Parkville, Victoria, Australia
- Kenneth Kwok-Pan Lau
- Monash Imaging, Monash Health, Clayton, Victoria, Australia
- School of Clinical Sciences, Faculty of Medicine, Nursing and Health Sciences, Monash University, Clayton, Victoria, Australia
- Sir Peter MacCallum Department of Oncology, Faculty of Medicine, Dentistry and Health Sciences, The University of Melbourne, 305 Grattan Street, 3050, Victoria, Australia
27
Lingam G, Shakir T, Kader R, Chand M. Role of artificial intelligence in colorectal cancer. Artif Intell Gastrointest Endosc 2024; 5:90723. [DOI: 10.37126/aige.v5.i2.90723] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/12/2023] [Revised: 04/10/2024] [Accepted: 04/19/2024] [Indexed: 05/11/2024] Open
Abstract
The sphere of artificial intelligence (AI) is ever expanding, and applications for clinical practice have been emerging in recent years. Although its uptake has been most prominent in endoscopy, this represents only one aspect of holistic patient care; there are many other avenues of gastrointestinal care in which AI may be involved. We aim to review the role of AI in colorectal cancer as a whole. We performed broad scoping and focused searches of the applications of AI in the field of colorectal cancer. All trials, including qualitative research, from the year 2000 onwards were included. Studies were grouped into pre-operative, intra-operative and post-operative aspects. Pre-operatively, the major use is in endoscopic recognition. Colonoscopy has embraced the use of human-derived classifications such as Narrow-band Imaging International Colorectal Endoscopic, Japan Narrow-band Imaging Expert Team, Paris and Kudo, and novel detection and diagnostic methods have arisen from advances in AI classification. Intra-operatively, adjuncts such as image-enhanced identification of structures and assessment of perfusion have led to improvements in clinical outcomes. Post-operatively, monitoring and surveillance have taken strides, with potential socioeconomic and environmental savings. The uses of AI within the umbrella of colorectal surgery are multiple. We have identified existing technologies which are already augmenting cancer care. The future applications are exciting and could at least match, if not surpass, human standards.
Affiliation(s)
- Gita Lingam
- Department of General Surgery, Princess Alexandra Hospital, Harlow CM20 1QX, United Kingdom
- Taner Shakir
- Department of Colorectal Surgery, University College London, London W1W 7TY, United Kingdom
- Rawen Kader
- Department of Gastroenterology, University College London, University College London Hospitals NHS Foundation Trust, London W1B, United Kingdom
- Manish Chand
- Gastroenterological Intervention Centre, University College London, London W1W 7TS, United Kingdom
28
Alami Idrissi Y, Virador GM, Singh RB, Rao D, Stone JA, Sandhu SJS. Imaging 3.0: A scoping review. Curr Probl Diagn Radiol 2024; 53:399-404. [PMID: 38242771 DOI: 10.1067/j.cpradiol.2024.01.012] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/11/2023] [Accepted: 01/16/2024] [Indexed: 01/21/2024]
Abstract
We aim to provide a comprehensive summary of the current body of literature concerning the Imaging 3.0 initiative and its implications for patient care within the field of radiology. We offer a thorough analysis of the literature pertaining to the Imaging 3.0 initiative, emphasizing the practical application of the five pillars of the program, their cost-effectiveness, and their benefits in patient management. By doing so, we hope to illustrate the impact the Imaging 3.0 Initiative can have on the future of radiology and patient care.
Affiliation(s)
- Yassine Alami Idrissi
- Hillman Cancer Center, University of Pittsburgh Medical Center, 5030 Centre avenue, Pittsburgh, PA 15213, United States.
- Gabriel M Virador
- Department of Internal Medicine, Medstar Union Memorial Hospital, Baltimore, MD, United States
- Rahul B Singh
- Department of Internal Medicine, New York City Health and Hospitals/South Brooklyn Health, Brooklyn, NY, United States
- Dinesh Rao
- Department of Radiology, Mayo Clinic, Jacksonville, FL, United States
- Jeffrey A Stone
- Department of Radiology, Mayo Clinic, Jacksonville, FL, United States
29
Ivanovic V, Broadhead K, Chang YM, Hamer JF, Beck R, Hacein-Bey L, Qi L. Shift Volume Directly Impacts Neuroradiology Error Rate at a Large Academic Medical Center: The Case for Volume Limits. AJNR Am J Neuroradiol 2024; 45:374-378. [PMID: 38238099 PMCID: PMC11288559 DOI: 10.3174/ajnr.a8119] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2023] [Accepted: 12/18/2023] [Indexed: 04/10/2024]
Abstract
BACKGROUND AND PURPOSE Unlike in Europe and Japan, guidelines or recommendations from specialized radiological societies on workflow management and adaptive intervention to reduce error rates are currently lacking in the United States. This study of neuroradiologic reads at a large US academic medical center, which may hopefully contribute to this discussion, found a direct relationship between error rate and shift volume. MATERIALS AND METHODS CT and MR imaging reports from our institution's Neuroradiology Quality Assurance database (years 2014-2020) were searched for attending physician errors. Data were collected on shift volume specific error rates per 1000 interpreted studies and RADPEER scores. Optimal cutoff points for 2, 3 and 4 groups of shift volumes were computed along with subgroups' error rates. RESULTS A total of 643 errors were found, 91.7% of which were clinically significant (RADPEER 2b, 3b). The overall error rate (errors/1000 examinations) was 2.36. The best single shift volume cutoff point generated 2 groups: ≤ 26 studies (error rate 1.59) and > 26 studies (2.58; OR: 1.63, P < .001). The best 2 shift volume cutoff points generated 3 shift volume groups: ≤ 19 (1.34), 20-28 (1.88; OR: 1.4, P = .1) and ≥ 29 (2.6; OR: 1.94, P < .001). The best 3 shift volume cutoff points generated 4 groups: ≤ 24 (1.59), 25-66 (2.44; OR: 1.54, P < .001), 67-90 (3.03; OR: 1.91, P < .001), and ≥ 91 (2.07; OR: 1.30, P = .25). The group with shift volume ≥ 91 had a limited sample size. CONCLUSIONS Lower shift volumes yielded significantly lower error rates. The lowest error rates were observed with shift volumes that were limited to 19-26 studies. Error rates at shift volumes between 67-90 studies were 226% higher, compared with the error rate at shift volumes of ≤ 19 studies.
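The error-rate metric and group comparisons in this abstract follow standard definitions: errors per 1000 interpreted studies, and odds ratios between shift-volume groups. A minimal sketch with illustrative counts (the study reports rates and ORs, not these raw tallies; the function names are ours):

```python
def error_rate_per_1000(errors: int, studies: int) -> float:
    """Errors per 1000 interpreted studies, as used in the abstract."""
    return 1000 * errors / studies

def odds_ratio(err_a: int, n_a: int, err_b: int, n_b: int) -> float:
    """Odds of an error in group A relative to group B."""
    return (err_a / (n_a - err_a)) / (err_b / (n_b - err_b))

# Illustrative counts chosen to match the reported rates:
# high-volume shifts (>26 studies): 260 errors in 100,000 studies
# low-volume shifts (<=26 studies): 80 errors in 50,000 studies
rate_high = error_rate_per_1000(260, 100_000)  # 2.6 errors/1000
rate_low = error_rate_per_1000(80, 50_000)     # 1.6 errors/1000
or_high_vs_low = odds_ratio(260, 100_000, 80, 50_000)  # ~1.63, close to the reported OR
```

Because errors are rare, the odds ratio closely tracks the simple ratio of the two rates.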
Affiliation(s)
- Vladimir Ivanovic
- From the Department of Radiology, Section of Neuroradiology (V.I., J.F.H., R.B.), Medical College of Wisconsin, Milwaukee, Wisconsin
- Kenneth Broadhead
- Department of Statistics (K.B.), Colorado State University, Fort Collins, Colorado
- Yu-Ming Chang
- Department of Radiology, Section of Neuroradiology (Y.-M.C.), Beth Israel Deaconess Medical Center, Boston, Massachusetts
- John F Hamer
- From the Department of Radiology, Section of Neuroradiology (V.I., J.F.H., R.B.), Medical College of Wisconsin, Milwaukee, Wisconsin
- Ryan Beck
- From the Department of Radiology, Section of Neuroradiology (V.I., J.F.H., R.B.), Medical College of Wisconsin, Milwaukee, Wisconsin
- Lotfi Hacein-Bey
- Department of Radiology, Section of Neuroradiology (L.H.-B.), University of California Davis Medical Center, Sacramento, California
- Lihong Qi
- Department of Public Health Sciences (L.Q.), School of Medicine, University of California Davis, Davis, California
30
Gertz RJ, Dratsch T, Bunck AC, Lennartz S, Iuga AI, Hellmich MG, Persigehl T, Pennig L, Gietzen CH, Fervers P, Maintz D, Hahnfeldt R, Kottlors J. Potential of GPT-4 for Detecting Errors in Radiology Reports: Implications for Reporting Accuracy. Radiology 2024; 311:e232714. [PMID: 38625012 DOI: 10.1148/radiol.232714] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/17/2024]
Abstract
Background Errors in radiology reports may occur because of resident-to-attending discrepancies, speech recognition inaccuracies, and large workload. Large language models, such as GPT-4 (ChatGPT; OpenAI), may assist in generating reports. Purpose To assess the effectiveness of GPT-4 in identifying common errors in radiology reports, focusing on performance, time, and cost-efficiency. Materials and Methods In this retrospective study, 200 radiology reports (radiography and cross-sectional imaging [CT and MRI]) were compiled between June 2023 and December 2023 at one institution. There were 150 errors from five common error categories (omission, insertion, spelling, side confusion, and other) intentionally inserted into 100 of the reports and used as the reference standard. Six radiologists (two senior radiologists, two attending physicians, and two residents) and GPT-4 were tasked with detecting these errors. Overall error detection performance, error detection in the five error categories, and reading time were assessed using Wald χ2 tests and paired-sample t tests. Results GPT-4 (detection rate, 82.7%; 124 of 150; 95% CI: 75.8, 87.9) matched the average detection performance of radiologists independent of their experience (senior radiologists, 89.3% [134 of 150; 95% CI: 83.4, 93.3]; attending physicians, 80.0% [120 of 150; 95% CI: 72.9, 85.6]; residents, 80.0% [120 of 150; 95% CI: 72.9, 85.6]; P value range, .522-.99). One senior radiologist outperformed GPT-4 (detection rate, 94.7%; 142 of 150; 95% CI: 89.8, 97.3; P = .006). GPT-4 required less processing time per radiology report than the fastest human reader in the study (mean reading time, 3.5 seconds ± 0.5 [SD] vs 25.1 seconds ± 20.1, respectively; P < .001; Cohen d = -1.08). The use of GPT-4 resulted in lower mean correction cost per report than the most cost-efficient radiologist ($0.03 ± 0.01 vs $0.42 ± 0.41; P < .001; Cohen d = -1.12).
Conclusion The radiology report error detection rate of GPT-4 was comparable with that of radiologists, potentially reducing work hours and cost. © RSNA, 2024 See also the editorial by Forman in this issue.
Affiliation(s)
- Roman Johannes Gertz
- From the Institute of Diagnostic and Interventional Radiology (R.J.G., T.D., A.C.B., S.L., A.I.I., T.P., L.P., C.H.G., P.F., D.M., R.H., J.K.) and Institute of Medical Statistics and Bioinformatics (M.G.H.), Faculty of Medicine, University Hospital Cologne, University of Cologne, Kerpener Strasse 62, 50937 Cologne, Germany
- Thomas Dratsch
- From the Institute of Diagnostic and Interventional Radiology (R.J.G., T.D., A.C.B., S.L., A.I.I., T.P., L.P., C.H.G., P.F., D.M., R.H., J.K.) and Institute of Medical Statistics and Bioinformatics (M.G.H.), Faculty of Medicine, University Hospital Cologne, University of Cologne, Kerpener Strasse 62, 50937 Cologne, Germany
- Alexander Christian Bunck
- From the Institute of Diagnostic and Interventional Radiology (R.J.G., T.D., A.C.B., S.L., A.I.I., T.P., L.P., C.H.G., P.F., D.M., R.H., J.K.) and Institute of Medical Statistics and Bioinformatics (M.G.H.), Faculty of Medicine, University Hospital Cologne, University of Cologne, Kerpener Strasse 62, 50937 Cologne, Germany
- Simon Lennartz
- From the Institute of Diagnostic and Interventional Radiology (R.J.G., T.D., A.C.B., S.L., A.I.I., T.P., L.P., C.H.G., P.F., D.M., R.H., J.K.) and Institute of Medical Statistics and Bioinformatics (M.G.H.), Faculty of Medicine, University Hospital Cologne, University of Cologne, Kerpener Strasse 62, 50937 Cologne, Germany
- Andra-Iza Iuga
- From the Institute of Diagnostic and Interventional Radiology (R.J.G., T.D., A.C.B., S.L., A.I.I., T.P., L.P., C.H.G., P.F., D.M., R.H., J.K.) and Institute of Medical Statistics and Bioinformatics (M.G.H.), Faculty of Medicine, University Hospital Cologne, University of Cologne, Kerpener Strasse 62, 50937 Cologne, Germany
- Martin Gunnar Hellmich
- From the Institute of Diagnostic and Interventional Radiology (R.J.G., T.D., A.C.B., S.L., A.I.I., T.P., L.P., C.H.G., P.F., D.M., R.H., J.K.) and Institute of Medical Statistics and Bioinformatics (M.G.H.), Faculty of Medicine, University Hospital Cologne, University of Cologne, Kerpener Strasse 62, 50937 Cologne, Germany
- Thorsten Persigehl
- From the Institute of Diagnostic and Interventional Radiology (R.J.G., T.D., A.C.B., S.L., A.I.I., T.P., L.P., C.H.G., P.F., D.M., R.H., J.K.) and Institute of Medical Statistics and Bioinformatics (M.G.H.), Faculty of Medicine, University Hospital Cologne, University of Cologne, Kerpener Strasse 62, 50937 Cologne, Germany
- Lenhard Pennig
- From the Institute of Diagnostic and Interventional Radiology (R.J.G., T.D., A.C.B., S.L., A.I.I., T.P., L.P., C.H.G., P.F., D.M., R.H., J.K.) and Institute of Medical Statistics and Bioinformatics (M.G.H.), Faculty of Medicine, University Hospital Cologne, University of Cologne, Kerpener Strasse 62, 50937 Cologne, Germany
- Carsten Herbert Gietzen
- From the Institute of Diagnostic and Interventional Radiology (R.J.G., T.D., A.C.B., S.L., A.I.I., T.P., L.P., C.H.G., P.F., D.M., R.H., J.K.) and Institute of Medical Statistics and Bioinformatics (M.G.H.), Faculty of Medicine, University Hospital Cologne, University of Cologne, Kerpener Strasse 62, 50937 Cologne, Germany
- Philipp Fervers
- From the Institute of Diagnostic and Interventional Radiology (R.J.G., T.D., A.C.B., S.L., A.I.I., T.P., L.P., C.H.G., P.F., D.M., R.H., J.K.) and Institute of Medical Statistics and Bioinformatics (M.G.H.), Faculty of Medicine, University Hospital Cologne, University of Cologne, Kerpener Strasse 62, 50937 Cologne, Germany
- David Maintz
- From the Institute of Diagnostic and Interventional Radiology (R.J.G., T.D., A.C.B., S.L., A.I.I., T.P., L.P., C.H.G., P.F., D.M., R.H., J.K.) and Institute of Medical Statistics and Bioinformatics (M.G.H.), Faculty of Medicine, University Hospital Cologne, University of Cologne, Kerpener Strasse 62, 50937 Cologne, Germany
- Robert Hahnfeldt
- From the Institute of Diagnostic and Interventional Radiology (R.J.G., T.D., A.C.B., S.L., A.I.I., T.P., L.P., C.H.G., P.F., D.M., R.H., J.K.) and Institute of Medical Statistics and Bioinformatics (M.G.H.), Faculty of Medicine, University Hospital Cologne, University of Cologne, Kerpener Strasse 62, 50937 Cologne, Germany
- Jonathan Kottlors
- From the Institute of Diagnostic and Interventional Radiology (R.J.G., T.D., A.C.B., S.L., A.I.I., T.P., L.P., C.H.G., P.F., D.M., R.H., J.K.) and Institute of Medical Statistics and Bioinformatics (M.G.H.), Faculty of Medicine, University Hospital Cologne, University of Cologne, Kerpener Strasse 62, 50937 Cologne, Germany
31
Chung R, Demers JP, Tiberio R, Savage CA, McNulty F, Stout M, Kambadakone A, Gilman MD, Sharma A, Alkasab TK. Implementation of an Institution-Wide Rules-Based Automated CT Protocoling System. AJR Am J Roentgenol 2024; 222:e2329806. [PMID: 38230904 DOI: 10.2214/ajr.23.29806] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/18/2024]
Abstract
BACKGROUND. Examination protocoling is a noninterpretive task that increases radiologists' workload and can cause workflow inefficiencies. OBJECTIVE. The purpose of this study was to evaluate effects of an automated CT protocoling system on examination process times and protocol error rates. METHODS. This retrospective study included 317,597 CT examinations (mean age, 61.8 ± 18.1 [SD] years; male, 161,125; female, 156,447; unspecified sex, 25) from July 2020 to June 2022. A rules-based automated protocoling system was implemented institution-wide; the system evaluated all CT orders in the EHR and assigned a protocol or directed the order for manual radiologist protocoling. The study period comprised pilot (July 2020 to December 2020), implementation (January 2021 to December 2021), and postimplementation (January 2022 to June 2022) phases. Proportions of automatically protocoled examinations were summarized. Process times were recorded. Protocol error rates were assessed by counts of quality improvement (QI) reports and examination recalls and comparison with retrospectively assigned protocols in 450 randomly selected examinations. RESULTS. Frequency of automatic protocoling was 19,366/70,780 (27.4%), 68,875/163,068 (42.2%), and 54,045/83,749 (64.5%) in pilot, implementation, and postimplementation phases, respectively (p < .001). Mean (± SD) times from order entry to protocol assignment for automatically and manually protocoled examinations for emergency department examinations were 0.2 ± 18.2 and 2.1 ± 69.7 hours, respectively; mean inpatient examination times were 0.5 ± 50.0 and 3.5 ± 105.5 hours; and mean outpatient examination times were 361.7 ± 1165.5 and 1289.9 ± 2050.9 hours (all p < .001). 
Mean (± SD) times from order entry to examination completion for automatically and manually protocoled examinations for emergency department examinations were 2.6 ± 38.6 and 4.2 ± 73.0 hours, respectively (p < .001); for inpatient examinations were 6.3 ± 74.6 and 8.7 ± 109.3 hours (p = .001); and for outpatient examinations were 1367.2 ± 1795.8 and 1471.8 ± 2118.3 hours (p < .001). In the three phases, there were three, 19, and 25 QI reports and zero, one, and three recalls, respectively, for automatically protocoled examinations, versus nine, 19, and five QI reports and one, seven, and zero recalls for manually protocoled examinations. Retrospectively assigned protocols were concordant with 212/214 (99.1%) of automatically protocoled versus 233/236 (98.7%) of manually protocoled examinations. CONCLUSION. The automated protocoling system substantially reduced radiologists' protocoling workload and decreased times from order entry to protocol assignment and examination completion; protocol errors and recalls were infrequent. CLINICAL IMPACT. The system represents a solution for reducing radiologists' time spent performing noninterpretive tasks and improving care efficiency.
Affiliation(s)
- Ryan Chung
- Department of Radiology, Division of Abdominal Imaging, Massachusetts General Hospital, 55 Fruit St, White 270, Boston, MA 02114
- John P Demers
- Department of Radiology, Massachusetts General Hospital, Boston, MA
- Roberta Tiberio
- Department of Radiology, Massachusetts General Hospital, Boston, MA
- Cristy A Savage
- Department of Radiology, CT Operations, Massachusetts General Hospital, Boston, MA
- Frederick McNulty
- Department of Radiology, CT Operations, Massachusetts General Hospital, Boston, MA
- Markus Stout
- Department of Radiology, Massachusetts General Hospital, Boston, MA
- Avinash Kambadakone
- Department of Radiology, Division of Abdominal Imaging, Massachusetts General Hospital, 55 Fruit St, White 270, Boston, MA 02114
- Matthew D Gilman
- Department of Radiology, Division of Thoracic Imaging and Intervention, Massachusetts General Hospital, Boston, MA
- Amita Sharma
- Department of Radiology, Division of Thoracic Imaging and Intervention, Massachusetts General Hospital, Boston, MA
- Tarik K Alkasab
- Department of Radiology, Division of Emergency Imaging, Massachusetts General Hospital, Boston, MA
32
Polzer C, Yilmaz E, Meyer C, Jang H, Jansen O, Lorenz C, Bürger C, Glüer CC, Sedaghat S. AI-based automated detection and stability analysis of traumatic vertebral body fractures on computed tomography. Eur J Radiol 2024; 173:111364. [PMID: 38364589 DOI: 10.1016/j.ejrad.2024.111364] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2023] [Revised: 12/29/2023] [Accepted: 02/08/2024] [Indexed: 02/18/2024]
Abstract
PURPOSE We developed and tested a neural network for automated detection and stability analysis of vertebral body fractures on computed tomography (CT). MATERIALS AND METHODS 257 patients who underwent CT were included in this Institutional Review Board (IRB) approved study. 463 fractured and 1883 non-fractured vertebral bodies were included, with 190 fractures unstable. Two readers identified vertebral body fractures and assessed their stability. A combination of a Hierarchical Convolutional Neural Network (hNet) and a fracture Classification Network (fNet) was used to build a neural network for the automated detection and stability analysis of vertebral body fractures on CT. Two final test settings were chosen: one with vertebral body levels C1/2 included and one where they were excluded. RESULTS The mean age of the patients was 68 ± 14 years. 140 patients were female. The network showed a slightly higher diagnostic performance when excluding C1/2. Accordingly, the network was able to distinguish fractured and non-fractured vertebral bodies with a sensitivity of 75.8 % and a specificity of 80.3 %. Additionally, the network determined the stability of the vertebral bodies with a sensitivity of 88.4 % and a specificity of 80.3 %. The AUC was 87 % and 91 % for fracture detection and stability analysis, respectively. The sensitivity of our network in indicating the presence of at least one fracture / one unstable fracture within the whole spine achieved values of 78.7 % and 97.2 %, respectively, when excluding C1/2. CONCLUSION The developed neural network can automatically detect vertebral body fractures and evaluate their stability concurrently with a high diagnostic performance.
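The sensitivity and specificity figures quoted above follow the usual confusion-matrix definitions. A minimal sketch (function names are ours; the counts are illustrative, chosen only to land near the reported rates, not the paper's exact tallies):

```python
def sensitivity(tp: int, fn: int) -> float:
    """True-positive rate: detected fractures / all fractured vertebrae."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True-negative rate: correctly cleared / all non-fractured vertebrae."""
    return tn / (tn + fp)

# Illustrative confusion counts (not the paper's exact numbers):
# 350 of 462 fractured vertebrae flagged; 1500 of 1868 intact vertebrae cleared.
sens = sensitivity(tp=350, fn=112)   # ~0.758, near the reported 75.8%
spec = specificity(tn=1500, fp=368)  # ~0.803, near the reported 80.3%
```

The same two functions apply unchanged to the stability analysis, only with "unstable" replacing "fractured" as the positive class.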
Affiliation(s)
- Constanze Polzer
- Department of Radiology and Neuroradiology, University Hospital Schleswig-Holstein, Campus Kiel, Kiel, Germany
- Eren Yilmaz
- Section Biomedical Imaging, Department of Radiology and Neuroradiology, University Hospital Schleswig-Holstein, Campus Kiel, Kiel, Germany; Department of Computer Science, Ostfalia University of Applied Sciences, Wolfenbüttel, Germany
- Carsten Meyer
- Department of Computer Science, Ostfalia University of Applied Sciences, Wolfenbüttel, Germany; Department of Computer Science, Faculty of Engineering, Kiel University, Kiel, Germany
- Hyungseok Jang
- Department of Radiology, University of California San Diego, San Diego, USA
- Olav Jansen
- Department of Radiology and Neuroradiology, University Hospital Schleswig-Holstein, Campus Kiel, Kiel, Germany
- Claus-Christian Glüer
- Section Biomedical Imaging, Department of Radiology and Neuroradiology, University Hospital Schleswig-Holstein, Campus Kiel, Kiel, Germany
- Sam Sedaghat
- Department of Diagnostic and Interventional Radiology, University Hospital Heidelberg, Heidelberg, Germany.
33
Velleman T, Hein S, Dierckx RAJO, Noordzij W, Kwee TC. Reading room assistants to reduce workload and interruptions of radiology residents during on-call hours: Initial evaluation. Eur J Radiol 2024; 173:111381. [PMID: 38428253 DOI: 10.1016/j.ejrad.2024.111381] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2023] [Revised: 01/23/2024] [Accepted: 02/16/2024] [Indexed: 03/03/2024]
Abstract
PURPOSE To determine how much timesaving and reduction of interruptions reading room assistants can provide by taking over non-image interpretation tasks (NITs) from radiology residents during on-call hours. METHODS Reading room assistants are medical students who were trained to take over NITs from radiology residents (e.g. answering telephone calls, administrative tasks and logistics) to reduce residents' workload during on-call hours. Reading room assistants' and residents' activities were tracked during 6 weekend dayshifts in a tertiary care academic center (with more than 2.5 million inhabitants in its catchment area) between 10 a.m. and 5 p.m. (7-hour shift, 420 min), and time spent on each activity was recorded. RESULTS Reading room assistants spent the most time on the following timesaving activities for residents: answering incoming (41 min, 19%) and outgoing telephone calls (35 min, 16%), ultrasound machine-related activities (19 min, 9%), and paramedical assistance such as supporting residents during ultrasound-guided procedures and with patients (17 min, 8%). Reading room assistants saved 132 min of residents' time by taking over NITs while spending about 31 min consulting the resident, resulting in a net timesaving of 101 min (24%) during a 7-hour shift. The reading room assistants also prevented residents from being interrupted, a mean of 18 times during the 7-hour shift. CONCLUSION This study shows that adding reading room assistants to radiology on-call hours can save residents time and reduce how often residents are interrupted during their work.
Affiliation(s)
- Ton Velleman
- Department of Radiology, Nuclear Medicine and Molecular Imaging, Medical Imaging Center, University Medical Center Groningen, University of Groningen, Groningen, the Netherlands.
- Sandra Hein
- Department of Radiology, Nuclear Medicine and Molecular Imaging, Medical Imaging Center, University Medical Center Groningen, University of Groningen, Groningen, the Netherlands
- Rudi A J O Dierckx
- Department of Radiology, Nuclear Medicine and Molecular Imaging, Medical Imaging Center, University Medical Center Groningen, University of Groningen, Groningen, the Netherlands
- Walter Noordzij
- Department of Radiology, Nuclear Medicine and Molecular Imaging, Medical Imaging Center, University Medical Center Groningen, University of Groningen, Groningen, the Netherlands
- Thomas C Kwee
- Department of Radiology, Nuclear Medicine and Molecular Imaging, Medical Imaging Center, University Medical Center Groningen, University of Groningen, Groningen, the Netherlands
34
Lysø EH, Hesjedal MB, Skolbekken JA, Solbjør M. Men's sociotechnical imaginaries of artificial intelligence for prostate cancer diagnostics - A focus group study. Soc Sci Med 2024; 347:116771. [PMID: 38537333 DOI: 10.1016/j.socscimed.2024.116771] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2023] [Revised: 03/05/2024] [Accepted: 03/08/2024] [Indexed: 04/20/2024]
Abstract
Artificial intelligence (AI) is increasingly used for diagnostic purposes in cancer care. Prostate cancer is one of the most prevalent cancers affecting men worldwide, but current diagnostic approaches have limitations in terms of specificity and sensitivity. Using AI to interpret MR images in prostate cancer diagnostics shows promising results, but raises questions about implementation, user acceptance, trust, and doctor-patient communication. Drawing on approaches from the sociology of expectations and theories about sociotechnical imaginaries, we explore men's expectations of artificial intelligence for prostate cancer diagnostics. We conducted ten focus groups with 48 men aged 54-85 in Norway with various experiences of prostate cancer diagnostics. Five groups of men had been treated for prostate cancer, one group was on active surveillance, two groups had been through prostate cancer diagnostics without having a diagnosis, and two groups of men had no experience with prostate cancer diagnostics or treatment. Data was subject to reflexive thematic analysis. Our analysis suggests that men's expectations of AI for prostate cancer diagnostics come from two perspectives: Technology-centered expectations that build on their conceptions of AI's form and agency, and human-centered expectations of AI that build on their perceptions of patient-professional relationships and decision-making processes. These two perspectives are intertwined in three imaginaries of AI: The tool imaginary, the advanced machine imaginary, and the intelligence imaginary - each carrying distinct expectations and ideas of technologies and humans' role in decision-making processes. 
These expectations are multifaceted and simultaneously optimistic and pessimistic; while AI is expected to improve the accuracy of cancer diagnoses and facilitate more personalized medicine, AI is also expected to threaten interpersonal and communicational relationships between patients and healthcare professionals, and the maintenance of trust in these relationships. This emphasizes how AI cannot be implemented without caution about maintaining human healthcare relationships.
Affiliation(s)
- Emilie Hybertsen Lysø
- Norwegian University of Science and Technology, Department of Public Health and Nursing, Håkon Jarls gate 11, 7030, Trondheim, Norway.
- Maria Bårdsen Hesjedal
- Norwegian University of Science and Technology, Department of Public Health and Nursing, Håkon Jarls gate 11, 7030, Trondheim, Norway
- John-Arne Skolbekken
- Norwegian University of Science and Technology, Department of Public Health and Nursing, Håkon Jarls gate 11, 7030, Trondheim, Norway
- Marit Solbjør
- Norwegian University of Science and Technology, Department of Public Health and Nursing, Håkon Jarls gate 11, 7030, Trondheim, Norway
| |
Collapse
|
35
|
Ko CH, Chien LN, Chiu YT, Hsu HH, Wong HF, Chan WP. Demands for medical imaging and workforce Size: A nationwide population-based Study, 2000-2020. Eur J Radiol 2024; 172:111330. [PMID: 38290203 DOI: 10.1016/j.ejrad.2024.111330] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2023] [Revised: 12/22/2023] [Accepted: 01/18/2024] [Indexed: 02/01/2024]
Abstract
PURPOSE The aim of this study was to investigate associations between workforce and workload among radiologists in Taiwan. MATERIALS AND METHODS Data for the period 2000-2020 describing the demand for imaging services and radiologists were obtained from databases and statistical reports of the Ministry of Health and Welfare. The projection of future demand for radiologists was based on the Taiwanese population aged 40 and over. RESULTS The workforce of Taiwan's radiologists increased by 6 % annually over the past 20 years (from 450 to 993), performing 2125, 3202 and 3620 monthly examinations (mainly conventional radiography and CT) in medical centers, regional hospitals and district hospitals, respectively. Between 2000 and 2020, the use of CT and MRI increased by more than 3.5 times. Demand for interventional radiology also increased, by 1.77, 2.25 and 5 times in the three hospital tiers, respectively. To maintain this volume of services in 2040, at least 1168 radiologists will be needed, about 1.18 times the number in 2020. CONCLUSION Taiwan has 2.4 to 2.9 times fewer radiologists than the United States and 3 times fewer than Europe, while the annual workload is approximately 2 to 3.4 times greater than that of the United States and 1.4 to 2.5 times greater than that of the United Kingdom. This report may serve as a reference for policy makers addressing the growing workload among radiologists in countries in similar situations.
Affiliation(s)
- Chih-Hsiang Ko
- Department of Radiology, Wan Fang Hospital, Taipei Medical University, Taipei 116, Taiwan; Department of Radiology, School of Medicine, College of Medicine, Taipei Medical University, Taipei 11031, Taiwan
- Li-Nien Chien
- Institute of Health and Welfare Policy, National Yang Ming Chiao Tung University, Taipei City 11221, Taiwan
- Yu-Ting Chiu
- School of Health Care Administration, College of Management, Taipei Medical University, New Taipei City 235, Taiwan
- Hsian-He Hsu
- Department of Radiology, Tri-Service General Hospital and National Defense Medical Center, Taipei 11490, Taiwan
- Ho-Fai Wong
- Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital, Chang Gung University, Taoyuan 333423, Taiwan
- Wing P Chan
- Department of Radiology, Wan Fang Hospital, Taipei Medical University, Taipei 116, Taiwan; Department of Radiology, School of Medicine, College of Medicine, Taipei Medical University, Taipei 11031, Taiwan.
36
Toxopeus R, Kasalak Ö, Yakar D, Noordzij W, Dierckx RAJO, Kwee TC. Is work overload associated with diagnostic errors on 18F-FDG-PET/CT? Eur J Nucl Med Mol Imaging 2024; 51:1079-1084. [PMID: 38030745 DOI: 10.1007/s00259-023-06543-3]
Abstract
PURPOSE To determine the association between workload and diagnostic errors on 18F-FDG-PET/CT. MATERIALS AND METHODS This study included 103 18F-FDG-PET/CT scans with a diagnostic error that was corrected with an addendum between March 2018 and July 2023. All scans were performed at a tertiary care center. The workload of each nuclear medicine physician or radiologist who authorized the 18F-FDG-PET/CT report was determined on the day the diagnostic error was made and normalized for his or her own average daily production (workload_normalized). A workload_normalized of more than 100% indicates that the nuclear medicine physician or radiologist had a relative work overload, while a value of less than 100% indicates a relative work underload on the day the diagnostic error was made. The time of day the diagnostic error was made was also recorded. Workload_normalized was compared to 100% using a signed rank sum test, with the hypothesis that it would significantly exceed 100%. A Mann-Kendall test was performed to test the hypothesis that diagnostic errors would increase over the course of the day. RESULTS Workload_normalized (median of 121%, interquartile range: 71 to 146%) on the days the diagnostic errors were made was significantly higher than 100% (P = 0.014). There was no significant upward trend in the frequency of diagnostic errors over the course of the day (Mann-Kendall tau = 0.05, P = 0.7294). CONCLUSION Work overload seems to be associated with diagnostic errors on 18F-FDG-PET/CT. Diagnostic errors were encountered throughout the entire working day, without any upward trend towards the end of the day.
Affiliation(s)
- Romy Toxopeus
- Medical Imaging Center, Departments of Radiology and Nuclear Medicine, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Ömer Kasalak
- Medical Imaging Center, Departments of Radiology and Nuclear Medicine, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Derya Yakar
- Medical Imaging Center, Departments of Radiology and Nuclear Medicine, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Walter Noordzij
- Medical Imaging Center, Departments of Radiology and Nuclear Medicine, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Rudi A J O Dierckx
- Medical Imaging Center, Departments of Radiology and Nuclear Medicine, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Thomas C Kwee
- Medical Imaging Center, Departments of Radiology and Nuclear Medicine, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands.
37
Achangwa NR, Nierobisch N, Ludovichetti R, Negrão de Figueiredo G, Kupka M, De Vere-Tyndall A, Frauenfelder T, Kulcsar Z, Hainc N. Sustainable reduction of phone-call interruptions by 35% in a medical imaging department using an automatic voicemail and custom call redirection system. Curr Probl Diagn Radiol 2024; 53:246-251. [PMID: 38290903 DOI: 10.1067/j.cpradiol.2024.01.004]
Abstract
BACKGROUND Have you ever been in the trenches of a complicated study only to be interrupted by a not-so-urgent phone call? We were, repeatedly, unfortunately. PURPOSE To increase the productivity of radiologists by quantifying the main source of interruptions to their workflow (phone calls), and to assess the implemented solution. MATERIALS AND METHODS To filter calls to the radiology consultant on duty, we introduced an automatic voicemail and custom call redirection system. Thus, instead of directly speaking with radiology consultants, clinicians were first to categorize their request and dial accordingly: 1. Inpatient requests, 2. Outpatient requests, 3. Directly speak with the consultant radiologist. Inpatient requests (1) and outpatient requests (2) were forwarded to MRI technologists or clerks, respectively. Calls were monitored in 15-minute increments continuously for an entire year (March 2022 until and including March 2023). Subsequently, both the frequency and category of requests were assessed. RESULTS 4803 calls were recorded in total: 3122 (65%) were forwarded to a radiologist on duty, 870 (18.11%) concerned inpatients, 274 (5.70%) concerned outpatients, 430 (8.95%) dialed the wrong number, and 107 (2.23%) made no decision. Throughout the entire year, the percentage of successfully avoided interruptions was relatively stable, fluctuating within the low-to-high 30% range (mean per month 35%, median per month 34.45%). CONCLUSIONS This is the first analysis of phone-call interruptions to consultant radiologists in an imaging department over 12 continuous months. More than 35% of requests did not require the input of a specialist-trained radiologist. Hence, installing an automated voicemail and custom call redirection system is a sustainable and simple solution to reduce phone-call interruptions by, on average, 35% in radiology departments. The solution was well accepted by referring clinicians, required a one-time setup investment of only 2 hours, and incurred no monetary cost.
Affiliation(s)
- Ngwe Rawlings Achangwa
- Department of Neuroradiology, Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, Switzerland; Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, University of Zurich, Switzerland.
- Nathalie Nierobisch
- Department of Neuroradiology, Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, Switzerland
- Riccardo Ludovichetti
- Department of Neuroradiology, Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, Switzerland
- Giovanna Negrão de Figueiredo
- Department of Neuroradiology, Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, Switzerland
- Michael Kupka
- Department of Neuroradiology, Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, Switzerland
- Anthony De Vere-Tyndall
- Department of Neuroradiology, Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, Switzerland
- Thomas Frauenfelder
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, University of Zurich, Switzerland
- Zsolt Kulcsar
- Department of Neuroradiology, Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, Switzerland
- Nicolin Hainc
- Department of Neuroradiology, Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, Switzerland
38
Kwee TC, Kasalak Ö, Yakar D. Radiologist-patient communication of musculoskeletal ultrasonography results: a choice between added value and costs. Acta Radiol 2024; 65:267-272. [PMID: 34617452 DOI: 10.1177/02841851211044974]
Abstract
BACKGROUND Literature on radiologist-patient communication of musculoskeletal ultrasonography (US) results is currently lacking. PURPOSE To investigate the patient's view on receiving the results from a radiologist after a musculoskeletal US examination, and the additional time required to provide such a service. MATERIAL AND METHODS This prospective study included 106 outpatients who underwent musculoskeletal US and were randomized equally to either receive or not receive the results from the radiologist directly after the examination. RESULTS In both randomization groups, all quality performance metrics (radiologist's friendliness, explanation, skill, concern for comfort, concern for patient questions/worries, overall rating of the examination, and likelihood of recommending the examination) received median scores of good/high to very good/very high. Patients who had received their US results from the radiologist rated the radiologist's explanation and concern for patient questions/worries significantly higher (P = 0.009 and P = 0.002) than patients who had not. In both randomization groups, there were no significant differences between anxiety levels before and after the US examination (P = 0.222 and P = 1.000). Of the 48 responding patients, 46 (95.8%) rated a radiologist-patient discussion of US findings as important. US examinations that included radiologist-patient communication of the findings (median = 11.29 min) were significantly longer (P < 0.0001) than those without (median = 8.08 min). CONCLUSION Even without communicating musculoskeletal US results directly to patients, radiologists can still achieve high ratings from patients for their communication and empathy. Nevertheless, patient experience can be further enhanced if a radiologist adds this communication to the examination. However, this increases total examination time and therefore costs.
Affiliation(s)
- Thomas C Kwee
- Medical Imaging Center, Department of Radiology, Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, the Netherlands
- Ömer Kasalak
- Medical Imaging Center, Department of Radiology, Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, the Netherlands
- Derya Yakar
- Medical Imaging Center, Department of Radiology, Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, the Netherlands
39
Guermazi A. AI is indeed helpful but it should always be monitored! Diagn Interv Imaging 2024; 105:83-84. [PMID: 38458733 DOI: 10.1016/j.diii.2024.02.013]
Affiliation(s)
- Ali Guermazi
- Department of Radiology, Boston University School of Medicine, Boston, MA 02118, USA; Department of Radiology, VA Boston Healthcare System, West Roxbury, MA 02132, USA.
40
Payne DL, Xu X, Faraji F, John K, Pradas KF, Bernard VV, Bangiyev L, Prasanna P. Automated Detection of Cervical Spinal Stenosis and Cord Compression via Vision Transformer and Rules-Based Classification. AJNR Am J Neuroradiol 2024; 45:ajnr.A8141. [PMID: 38360785 PMCID: PMC11288556 DOI: 10.3174/ajnr.a8141]
Abstract
BACKGROUND AND PURPOSE Cervical spinal cord compression, defined as spinal cord deformity and severe narrowing of the spinal canal in the cervical region, can lead to severe clinical consequences, including intractable pain, sensory disturbance, paralysis, and even death, and may require emergent intervention to prevent negative outcomes. Despite the critical nature of cord compression, no automated tool is available to alert clinical radiologists to the presence of such findings. This study aims to demonstrate the ability of a vision transformer (ViT) model to accurately detect cervical cord compression. MATERIALS AND METHODS A clinically diverse cohort of 142 cervical spine MRIs was identified, 34% of which were normal or had mild stenosis, 31% moderate stenosis, and 35% cord compression. Utilizing gradient-echo images, slices were labeled as no cord compression/mild stenosis, moderate stenosis, or severe stenosis/cord compression. Segmentation of the spinal canal was performed and confirmed by neuroradiology faculty. A pretrained ViT model was fine-tuned to predict section-level severity using a train:validation:test split of 60:20:20. Each examination was assigned an overall severity based on the highest section-level severity, with an examination labeled as positive for cord compression if ≥1 section was predicted in the severe category. Additionally, 2 convolutional neural network (CNN) models (ResNet50, DenseNet121) were tested in the same manner. RESULTS The ViT model outperformed both CNN models at the section level, achieving a section-level accuracy of 82%, compared with 72% and 78% for ResNet50 and DenseNet121, respectively. At the patient level, the ViT classification achieved an accuracy of 93%, sensitivity of 0.90, positive predictive value of 0.90, specificity of 0.95, and negative predictive value of 0.95. The receiver operating characteristic area under the curve was greater for the ViT than for either CNN. CONCLUSIONS This classification approach, using a ViT model and rules-based classification, accurately detects the presence of cervical spinal cord compression at the patient level. In this study, the ViT model outperformed both conventional CNN approaches at the section and patient levels. If implemented in the clinical setting, such a tool may streamline neuroradiology workflow, improving efficiency and consistency.
Affiliation(s)
- David L Payne
- From the Department of Radiology (D.L.P., F.F., K.J., K.F.P., V.V.B., L.B.), Stony Brook University Hospital, Stony Brook, New York
- Department of Biomedical Informatics (D.L.P., X.X., F.F., K.J., P.P.), Stony Brook University, Stony Brook, New York
- Xuan Xu
- Department of Biomedical Informatics (D.L.P., X.X., F.F., K.J., P.P.), Stony Brook University, Stony Brook, New York
- Farshid Faraji
- From the Department of Radiology (D.L.P., F.F., K.J., K.F.P., V.V.B., L.B.), Stony Brook University Hospital, Stony Brook, New York
- Department of Biomedical Informatics (D.L.P., X.X., F.F., K.J., P.P.), Stony Brook University, Stony Brook, New York
- Kevin John
- From the Department of Radiology (D.L.P., F.F., K.J., K.F.P., V.V.B., L.B.), Stony Brook University Hospital, Stony Brook, New York
- Department of Biomedical Informatics (D.L.P., X.X., F.F., K.J., P.P.), Stony Brook University, Stony Brook, New York
- Katherine Ferra Pradas
- From the Department of Radiology (D.L.P., F.F., K.J., K.F.P., V.V.B., L.B.), Stony Brook University Hospital, Stony Brook, New York
- Vahni Vishala Bernard
- From the Department of Radiology (D.L.P., F.F., K.J., K.F.P., V.V.B., L.B.), Stony Brook University Hospital, Stony Brook, New York
- Lev Bangiyev
- From the Department of Radiology (D.L.P., F.F., K.J., K.F.P., V.V.B., L.B.), Stony Brook University Hospital, Stony Brook, New York
- Prateek Prasanna
- Department of Biomedical Informatics (D.L.P., X.X., F.F., K.J., P.P.), Stony Brook University, Stony Brook, New York
41
Hanneman K, Playford D, Dey D, van Assen M, Mastrodicasa D, Cook TS, Gichoya JW, Williamson EE, Rubin GD. Value Creation Through Artificial Intelligence and Cardiovascular Imaging: A Scientific Statement From the American Heart Association. Circulation 2024; 149:e296-e311. [PMID: 38193315 DOI: 10.1161/cir.0000000000001202]
Abstract
Multiple applications for machine learning and artificial intelligence (AI) in cardiovascular imaging are being proposed and developed. However, the processes involved in implementing AI in cardiovascular imaging are highly diverse, varying by imaging modality, patient subtype, features to be extracted and analyzed, and clinical application. This article establishes a framework that defines value from an organizational perspective, followed by value chain analysis to identify the activities in which AI might produce the greatest incremental value creation. The various perspectives that should be considered are highlighted, including clinicians, imagers, hospitals, patients, and payers. Integrating the perspectives of all health care stakeholders is critical for creating value and ensuring the successful deployment of AI tools in a real-world setting. Different AI tools are summarized, along with the unique aspects of AI applications to various cardiac imaging modalities, including cardiac computed tomography, magnetic resonance imaging, and positron emission tomography. AI is applicable and has the potential to add value to cardiovascular imaging at every step along the patient journey, from selecting the more appropriate test to optimizing image acquisition and analysis, interpreting the results for classification and diagnosis, and predicting the risk for major adverse cardiac events.
42
Tang SM, Durieux JC, Faraji N, Mohamed I, Wien M, Nayate AP. "Are They Listening, and Do They Find It Useful?"-Evaluation of Mid-Rotation Formative Subjective and Objective Feedback to Radiology Trainees. Curr Probl Diagn Radiol 2024; 53:114-120. [PMID: 37690968 DOI: 10.1067/j.cpradiol.2023.08.006]
Abstract
BACKGROUND Residents commonly receive only end-of-rotation evaluations and thus are often unaware of their progress during a rotation. In 2021, our neuroradiology section instituted mid-rotation feedback in which rotating residents received formative subjective and objective feedback. The purpose of this study was to describe our feedback method and to evaluate whether residents found it helpful. METHODS Radiology residents rotate 3-4 times on the neuroradiology service for 1-month blocks. At the midpoint of the rotation (2 weeks), 7-10 neuroradiology attendings discussed the rotating residents' subjective performance. One attending was tasked with facilitating this discussion and taking notes. Objective metrics were obtained from our dictation software. Compiled feedback was relayed to residents via email. A 16-question anonymous survey was sent to 39 radiology residents (R1-R4) to evaluate the perceived value of mid-rotation feedback. Odds ratios (OR) and 95% confidence intervals (CI) were computed using logistic regression. RESULTS Sixty-nine percent (27/39) of residents responded to the survey; 92.6% (25/27) reported receiving mid-rotation feedback in ≥50% of neuroradiology rotations; 92.3% (24/26) found the subjective feedback helpful; 88.4% (23/26) reported modifying their performance as suggested (100% R1-R2 vs 70% R3-R4; OR: 15.4, CI: 1.26 to >30.0); 59.1% (13/22) found the objective metrics helpful (75% R1-R2 vs 40% R3-R4; OR: 3.92, CI: 0.74 to 24.39), and 68.2% (15/22) stated they modified their performance based on these metrics (83.3% R1-R2 vs 50.0% R3-R4; OR: 4.2, CI: 0.73 to 30.55); 84.6% (22/26) of residents stated that mid-rotation subjective feedback, and 45.5% (10/22) that mid-rotation objective feedback, should be implemented in other sections. CONCLUSIONS The majority of residents found mid-rotation feedback helpful in informing them about their progress and areas for improvement in the neuroradiology rotation, more so for subjective than for objective feedback. The majority of residents stated that all rotations should provide mid-rotation subjective feedback.
Affiliation(s)
- Stephen M Tang
- Case Western Reserve University School of Medicine, Cleveland, OH
- Jared C Durieux
- University Hospitals Cleveland Medical Center, Cleveland, OH
- Navid Faraji
- University Hospitals Cleveland Medical Center, Cleveland, OH
- Inas Mohamed
- University Hospitals Cleveland Medical Center, Cleveland, OH
- Michael Wien
- University Hospitals Cleveland Medical Center, Cleveland, OH
- Ameya P Nayate
- University Hospitals Cleveland Medical Center, Cleveland, OH.
43
Hua D, Petrina N, Young N, Cho JG, Poon SK. Understanding the factors influencing acceptability of AI in medical imaging domains among healthcare professionals: A scoping review. Artif Intell Med 2024; 147:102698. [PMID: 38184343 DOI: 10.1016/j.artmed.2023.102698]
Abstract
BACKGROUND Artificial intelligence (AI) technology has the potential to transform medical practice within the medical imaging industry and materially improve productivity and patient outcomes. However, low acceptability of AI as a digital healthcare intervention among medical professionals threatens to undermine user uptake levels, hinder meaningful and optimal value-added engagement, and ultimately prevent these promising benefits from being realised. Understanding the factors underpinning AI acceptability will be vital for medical institutions to pinpoint areas of deficiency and improvement within their AI implementation strategies. This scoping review aims to survey the literature to provide a comprehensive summary of the key factors influencing AI acceptability among healthcare professionals in medical imaging domains and the different approaches that have been taken to investigate them. METHODS A systematic literature search was performed across five academic databases (Medline, Cochrane Library, Web of Science, Compendex, and Scopus) from January 2013 to September 2023, in adherence to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR) guidelines. Overall, 31 articles were deemed appropriate for inclusion in the scoping review. RESULTS The literature has converged towards three overarching categories of factors underpinning AI acceptability: user factors involving trust, system understanding, AI literacy, and technology receptiveness; system usage factors entailing value proposition, self-efficacy, burden, and workflow integration; and socio-organisational-cultural factors encompassing social influence, organisational readiness, ethicality, and perceived threat to professional identity.
Yet, numerous studies have overlooked a meaningful subset of these factors that are integral to the use of medical AI systems such as the impact on clinical workflow practices, trust based on perceived risk and safety, and compatibility with the norms of medical professions. This is attributable to reliance on theoretical frameworks or ad-hoc approaches which do not explicitly account for healthcare-specific factors, the novelties of AI as software as a medical device (SaMD), and the nuances of human-AI interaction from the perspective of medical professionals rather than lay consumer or business end users. CONCLUSION This is the first scoping review to survey the health informatics literature around the key factors influencing the acceptability of AI as a digital healthcare intervention in medical imaging contexts. The factors identified in this review suggest that existing theoretical frameworks used to study AI acceptability need to be modified to better capture the nuances of AI deployment in healthcare contexts where the user is a healthcare professional influenced by expert knowledge and disciplinary norms. Increasing AI acceptability among medical professionals will critically require designing human-centred AI systems which go beyond high algorithmic performance to consider accessibility to users with varying degrees of AI literacy, clinical workflow practices, the institutional and deployment context, and the cultural, ethical, and safety norms of healthcare professions. As investment into AI for healthcare increases, it would be valuable to conduct a systematic review and meta-analysis of the causal contribution of these factors to achieving high levels of AI acceptability among medical professionals.
Affiliation(s)
- David Hua
- School of Computer Science, The University of Sydney, Australia; Sydney Law School, The University of Sydney, Australia
- Neysa Petrina
- School of Computer Science, The University of Sydney, Australia
- Noel Young
- Sydney Medical School, The University of Sydney, Australia; Lumus Imaging, Australia
- Jin-Gun Cho
- Sydney Medical School, The University of Sydney, Australia; Western Sydney Local Health District, Australia; Lumus Imaging, Australia
- Simon K Poon
- School of Computer Science, The University of Sydney, Australia; Western Sydney Local Health District, Australia.
44
Burnazovic E, Yee A, Levy J, Gore G, Abbasgholizadeh Rahimi S. Application of artificial intelligence in COVID-19-related geriatric care: A scoping review. Arch Gerontol Geriatr 2024; 116:105129. [PMID: 37542917 DOI: 10.1016/j.archger.2023.105129]
Abstract
BACKGROUND Older adults have been disproportionately affected by the COVID-19 pandemic. This scoping review aimed to summarize the current evidence on artificial intelligence (AI) use in the screening/monitoring, diagnosis, and/or treatment of COVID-19 among older adults. METHOD The review followed the Joanna Briggs Institute and Arksey and O'Malley frameworks. An information specialist performed a comprehensive search in six bibliographic databases, from inception until May 2021. The selected studies considered all populations and all AI interventions that had been used in COVID-19-related geriatric care. We focused on patient, healthcare provider, and healthcare system-related outcomes. The studies were restricted to peer-reviewed English publications. Two authors independently screened the titles and abstracts of the identified records, read the selected full texts, and extracted data from the included studies using a validated data extraction form. Disagreements were resolved by consensus, and if this was not possible, the opinion of a third reviewer was sought. RESULTS The search yielded 3,228 articles, of which 10 were included. The majority of articles used a single AI model to assess the association between patients' comorbidities and COVID-19 outcomes. Studies were mainly conducted in high-income countries, with limited representation of females among study participants and insufficient reporting of participants' race and ethnicity. DISCUSSION This review highlighted how the COVID-19 pandemic has accelerated the application of AI to protect older populations, with most interventions at the pilot testing stage. Further work is required to measure the effectiveness of these technologies at a larger scale, use more representative datasets for training AI models, and expand AI applications to low-income countries.
Affiliation(s)
- Emina Burnazovic
- Integrated Biomedical Engineering and Health Sciences, Department of Computing and Software, Faculty of Engineering, McMaster University, Hamilton, ON, Canada
- Amanda Yee
- Department of Family Medicine, Faculty of Medicine and Health Sciences, McGill University, Montreal, QC, Canada
- Joshua Levy
- Department of Pharmacology and Therapeutics, Faculty of Medicine and Health Sciences, McGill University, Montreal, QC, Canada
- Genevieve Gore
- Schulich Library of Physical Sciences, Life Sciences and Engineering, McGill University, Montreal, QC, Canada
- Samira Abbasgholizadeh Rahimi
- Department of Family Medicine, Faculty of Medicine and Health Sciences, McGill University, Montreal, QC, Canada; Lady Davis Institute for Medical Research, Jewish General Hospital, Montreal, QC, Canada; Mila-Quebec Artificial Intelligence Institute, Montreal, QC, Canada; Faculty of Dental Medicine and Oral Health Sciences, McGill University, Montreal, QC, Canada.
45
Clancy PW, Tulenko K, Rizvi T. An Innovative Medical Student Neuroradiology Elective Course: Active Learning Through a Case-Based Approach. Acad Radiol 2024; 31:322-328. [PMID: 37973514 DOI: 10.1016/j.acra.2023.09.039]
Abstract
RATIONALE AND OBJECTIVES Traditional radiology education in clerkships is focused on observational and passive learning from radiology faculty. The aim of this study was to validate a new case-based radiology course challenging medical students to independently scroll through picture archiving and communication system (PACS) cases, thereby actively learning and improving their understanding of radiology. MATERIALS AND METHODS This study used PowerPoint files to present and review various brain, spine, and head and neck clinical cases to simulate the real-time case review process performed by radiologists. Students were tested with an online quiz based on the cases both before and after the review. Quizzes were distributed and responses collected at both time points via a Google Form. Students had access to correct answers and feedback after the post-case quiz. A radiologist was available for an hour of individualized, committed teaching time to answer student questions after the post-case quiz. After the elective, there was an option to provide both quantitative and qualitative feedback. RESULTS All 54 students who took part in this independent case-based program indicated satisfaction and improvement in their understanding of neuroradiology. Post-case quizzes demonstrated objective improvement in understanding. CONCLUSION This program represents a viable, supplementary approach to traditional radiology education that should be considered for future use and duplication at other institutions.
Affiliation(s)
- Paul W Clancy
- School of Medicine, University of Virginia Health System, Charlottesville, Virginia, USA (P.W.C., K.T.).
- Kassandra Tulenko
- School of Medicine, University of Virginia Health System, Charlottesville, Virginia, USA (P.W.C., K.T.)
- Tanvir Rizvi
- Department of Radiology and Medical Imaging, University of Virginia Health System, Charlottesville, Virginia, USA (T.R.)
46
Chen Z, Yu Y, Liu S, Du W, Hu L, Wang C, Li J, Liu J, Zhang W, Peng X. A deep learning and radiomics fusion model based on contrast-enhanced computer tomography improves preoperative identification of cervical lymph node metastasis of oral squamous cell carcinoma. Clin Oral Investig 2023; 28:39. [PMID: 38151672 DOI: 10.1007/s00784-023-05423-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2023] [Accepted: 11/21/2023] [Indexed: 12/29/2023]
Abstract
OBJECTIVES In this study, we constructed and validated models based on deep learning and radiomics to facilitate preoperative diagnosis of cervical lymph node metastasis (LNM) using contrast-enhanced computed tomography (CECT). MATERIALS AND METHODS CECT scans of 100 patients with oral squamous cell carcinoma (OSCC) (217 metastatic and 1973 non-metastatic cervical lymph nodes: development set, 76 patients; internally independent test set, 24 patients) who received treatment at the Peking University School and Hospital of Stomatology between 2012 and 2016 were retrospectively collected. Clinical diagnoses and pathological findings were used to establish the gold standard for metastatic cervical LNs. A reader study with two clinicians was also performed to evaluate the lymph node status in the test set. The performance of the proposed models and the clinicians was evaluated and compared using the area under the receiver operating characteristic curve (AUC), accuracy (ACC), sensitivity (SEN), and specificity (SPE). RESULTS A fusion model combining deep learning with radiomics showed the best performance (ACC, 89.2%; SEN, 92.0%; SPE, 88.9%; and AUC, 0.950 [95% confidence interval: 0.908-0.993, P < 0.001]) in the test set. In comparison with the clinicians, the fusion model showed higher sensitivity (92.0 vs. 72.0% and 60.0%) but lower specificity (88.9 vs. 97.5% and 98.8%). CONCLUSION A fusion model combining radiomics and deep learning approaches outperformed other single-technique models and showed great potential to accurately predict cervical LNM in patients with OSCC. CLINICAL RELEVANCE The fusion model can complement the preoperative identification of LNM of OSCC performed by the clinicians.
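The fusion model's discrimination is summarized above by an AUC of 0.950. As a minimal sketch (using illustrative scores, not the study's data), the AUC can be computed directly as the probability that a randomly chosen positive case scores higher than a randomly chosen negative case:

```python
def auc(scores, labels):
    """AUC via the Mann-Whitney U formulation: the fraction of
    positive/negative pairs where the positive case scores higher
    (ties count as 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative scores for 5 metastatic (1) and 5 non-metastatic (0) nodes
labels = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.85, 0.6, 0.4, 0.3, 0.5, 0.2, 0.1, 0.35]
print(round(auc(scores, labels), 2))
```

A perfectly separating model yields 1.0; a random one approaches 0.5, which is why the reported 0.950 indicates strong discrimination.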
Affiliation(s)
- Zhen Chen
- Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology & National Center of Stomatology & National Clinical Research Center for Oral Diseases & National Engineering Research Center of Oral Biomaterials and Digital Medical Devices & Beijing Key Laboratory of Digital Stomatology & Research Center of Engineering and Technology for Computerized Dentistry Ministry of Health & NMPA Key Laboratory for Dental Materials, No. 22, Zhongguancun South Avenue, Haidian District, Beijing, 100081, People's Republic of China
- Yao Yu
- Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology & National Center of Stomatology & National Clinical Research Center for Oral Diseases & National Engineering Research Center of Oral Biomaterials and Digital Medical Devices & Beijing Key Laboratory of Digital Stomatology & Research Center of Engineering and Technology for Computerized Dentistry Ministry of Health & NMPA Key Laboratory for Dental Materials, No. 22, Zhongguancun South Avenue, Haidian District, Beijing, 100081, People's Republic of China
- Shuo Liu
- Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology & National Center of Stomatology & National Clinical Research Center for Oral Diseases & National Engineering Research Center of Oral Biomaterials and Digital Medical Devices & Beijing Key Laboratory of Digital Stomatology & Research Center of Engineering and Technology for Computerized Dentistry Ministry of Health & NMPA Key Laboratory for Dental Materials, No. 22, Zhongguancun South Avenue, Haidian District, Beijing, 100081, People's Republic of China
- Wen Du
- Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology & National Center of Stomatology & National Clinical Research Center for Oral Diseases & National Engineering Research Center of Oral Biomaterials and Digital Medical Devices & Beijing Key Laboratory of Digital Stomatology & Research Center of Engineering and Technology for Computerized Dentistry Ministry of Health & NMPA Key Laboratory for Dental Materials, No. 22, Zhongguancun South Avenue, Haidian District, Beijing, 100081, People's Republic of China
- Leihao Hu
- Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology & National Center of Stomatology & National Clinical Research Center for Oral Diseases & National Engineering Research Center of Oral Biomaterials and Digital Medical Devices & Beijing Key Laboratory of Digital Stomatology & Research Center of Engineering and Technology for Computerized Dentistry Ministry of Health & NMPA Key Laboratory for Dental Materials, No. 22, Zhongguancun South Avenue, Haidian District, Beijing, 100081, People's Republic of China
- Congwei Wang
- Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology & National Center of Stomatology & National Clinical Research Center for Oral Diseases & National Engineering Research Center of Oral Biomaterials and Digital Medical Devices & Beijing Key Laboratory of Digital Stomatology & Research Center of Engineering and Technology for Computerized Dentistry Ministry of Health & NMPA Key Laboratory for Dental Materials, No. 22, Zhongguancun South Avenue, Haidian District, Beijing, 100081, People's Republic of China
- Jiaqi Li
- Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology & National Center of Stomatology & National Clinical Research Center for Oral Diseases & National Engineering Research Center of Oral Biomaterials and Digital Medical Devices & Beijing Key Laboratory of Digital Stomatology & Research Center of Engineering and Technology for Computerized Dentistry Ministry of Health & NMPA Key Laboratory for Dental Materials, No. 22, Zhongguancun South Avenue, Haidian District, Beijing, 100081, People's Republic of China
- Jianbo Liu
- Huafang Hanying Medical Technology Co., Ltd, No.19, West Bridge Road, Miyun District, Beijing, 101520, People's Republic of China
- Wenbo Zhang
- Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology & National Center of Stomatology & National Clinical Research Center for Oral Diseases & National Engineering Research Center of Oral Biomaterials and Digital Medical Devices & Beijing Key Laboratory of Digital Stomatology & Research Center of Engineering and Technology for Computerized Dentistry Ministry of Health & NMPA Key Laboratory for Dental Materials, No. 22, Zhongguancun South Avenue, Haidian District, Beijing, 100081, People's Republic of China
- Xin Peng
- Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology & National Center of Stomatology & National Clinical Research Center for Oral Diseases & National Engineering Research Center of Oral Biomaterials and Digital Medical Devices & Beijing Key Laboratory of Digital Stomatology & Research Center of Engineering and Technology for Computerized Dentistry Ministry of Health & NMPA Key Laboratory for Dental Materials, No. 22, Zhongguancun South Avenue, Haidian District, Beijing, 100081, People's Republic of China.
47
Rašić M, Tropčić M, Karlović P, Gabrić D, Subašić M, Knežević P. Detection and Segmentation of Radiolucent Lesions in the Lower Jaw on Panoramic Radiographs Using Deep Neural Networks. MEDICINA (KAUNAS, LITHUANIA) 2023; 59:2138. [PMID: 38138241 PMCID: PMC10744511 DOI: 10.3390/medicina59122138] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/14/2023] [Revised: 11/29/2023] [Accepted: 12/07/2023] [Indexed: 12/24/2023]
Abstract
Background and Objectives: The purpose of this study was to develop and evaluate a deep learning model capable of autonomously detecting and segmenting radiolucent lesions in the lower jaw by utilizing You Only Look Once (YOLO) v8. Materials and Methods: This study involved the analysis of 226 lesions present in panoramic radiographs captured between 2013 and 2023 at the Clinical Hospital Dubrava and the School of Dental Medicine, University of Zagreb. Panoramic radiographs included radiolucent lesions such as radicular cysts, ameloblastomas, odontogenic keratocysts (OKC), dentigerous cysts and residual cysts. To enhance the database, we applied augmentation techniques such as translation, scaling, rotation, horizontal flipping and mosaic effects. We employed a deep neural network for both the detection and segmentation objectives and conducted five-fold cross-validation to improve the model's generalization capabilities. The model's performance was assessed using metrics such as Intersection over Union (IoU), precision, recall and mean average precision (mAP)@50 and mAP@50-95. Results: In the detection task, the precision, recall, mAP@50 and mAP@50-95 scores without augmentation were 91.8%, 57.1%, 75.8% and 47.3%, while with augmentation they were 95.2%, 94.4%, 97.5% and 68.7%, respectively. Similarly, in the segmentation task, the precision, recall, mAP@50 and mAP@50-95 values achieved without augmentation were 76%, 75.5%, 75.1% and 48.3%, respectively; augmentation improved these scores to 100%, 94.5%, 96.6% and 72.2%. Conclusions: Our study confirmed that the model developed using the advanced YOLOv8 can automatically detect and segment radiolucent lesions in the mandible. With its continual evolution and integration into various medical fields, deep learning holds the potential to revolutionize patient care.
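The detection metrics above (mAP@50, mAP@50-95) all rest on Intersection over Union between predicted and ground-truth regions. A minimal sketch of IoU for axis-aligned boxes, with illustrative coordinates rather than the study's data:

```python
def box_iou(a, b):
    """Intersection over Union for axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A predicted lesion box partially overlapping a ground-truth box
pred = (10, 10, 50, 50)   # 40x40 box
truth = (30, 10, 70, 50)  # same size, shifted right by 20
print(box_iou(pred, truth))
```

mAP@50 counts a detection as correct when IoU exceeds 0.5; mAP@50-95 averages this over IoU thresholds from 0.5 to 0.95, which is why its values are consistently lower.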
Affiliation(s)
- Mario Rašić
- Clinic for Tumors, Clinical Hospital Center “Sisters of Mercy”, Ilica 197, 10000 Zagreb, Croatia;
- Mario Tropčić
- Faculty of Electrical Engineering and Computing, University of Zagreb, Unska Ulica 3, 10000 Zagreb, Croatia;
- Pjetra Karlović
- Department of Maxillofacial and Oral Surgery, Dubrava University Hospital, Avenija Gojka Šuška 6, 10000 Zagreb, Croatia;
- Dragana Gabrić
- Department of Oral Surgery, School of Dental Medicine, University of Zagreb, Gundulićeva 5, 10000 Zagreb, Croatia;
- Marko Subašić
- Faculty of Electrical Engineering and Computing, University of Zagreb, Unska Ulica 3, 10000 Zagreb, Croatia;
- Predrag Knežević
- Department of Maxillofacial and Oral Surgery, Dubrava University Hospital, Avenija Gojka Šuška 6, 10000 Zagreb, Croatia;
48
Nicolaes J, Skjødt MK, Raeymaeckers S, Smith CD, Abrahamsen B, Fuerst T, Debois M, Vandermeulen D, Libanati C. Towards Improved Identification of Vertebral Fractures in Routine Computed Tomography (CT) Scans: Development and External Validation of a Machine Learning Algorithm. J Bone Miner Res 2023; 38:1856-1866. [PMID: 37747147 DOI: 10.1002/jbmr.4916] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/17/2023] [Revised: 09/06/2023] [Accepted: 09/17/2023] [Indexed: 09/26/2023]
Abstract
Vertebral fractures (VFs) are the hallmark of osteoporosis, being one of the most frequent types of fragility fracture and an early sign of the disease. They are associated with significant morbidity and mortality. VFs are incidentally found in one out of five imaging studies; however, more than half of VFs are neither identified nor reported in patient computed tomography (CT) scans. Our study aimed to develop a machine learning algorithm to identify VFs in abdominal/chest CT scans and evaluate its performance. We acquired two independent data sets of routine abdominal/chest CT scans of patients aged 50 years or older: a training set of 1011 scans from a non-interventional, prospective proof-of-concept study at the Universitair Ziekenhuis (UZ) Brussel and a validation set of 2000 subjects from an observational cohort study at the Hospital of Holbaek. Both data sets were externally reevaluated to identify reference standard VF readings using the Genant semiquantitative (SQ) grading. Four independent models were trained in a cross-validation experiment using the training set, and an ensemble of the four models was applied to the external validation set. The validation set contained 15.3% scans with one or more VF (SQ2-3), whereas 663 of 24,930 evaluable vertebrae (2.7%) were fractured (SQ2-3) as per reference standard readings. Comparison of the ensemble model with the reference standard readings in identifying subjects with one or more moderate or severe VF resulted in an area under the receiver operating characteristic curve (AUROC) of 0.88 (95% confidence interval [CI], 0.85-0.90), accuracy of 0.92 (95% CI, 0.91-0.93), kappa of 0.72 (95% CI, 0.67-0.76), sensitivity of 0.81 (95% CI, 0.76-0.85), and specificity of 0.95 (95% CI, 0.93-0.96). We demonstrated that a machine learning algorithm trained for VF detection achieved strong performance on an external validation set.
It has the potential to support healthcare professionals with the early identification of VFs and prevention of future fragility fractures. © 2023 UCB S.A. and The Authors. Journal of Bone and Mineral Research published by Wiley Periodicals LLC on behalf of American Society for Bone and Mineral Research (ASBMR).
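The agreement statistic reported above, Cohen's kappa of 0.72, corrects observed agreement for agreement expected by chance. As a sketch, the counts below are back-derived approximations from the reported prevalence (15.3% of 2000 subjects), sensitivity (0.81), and specificity (0.95), not the study's raw data, so the result only approximates the published value:

```python
def cohens_kappa(tp, fp, fn, tn):
    """Cohen's kappa for a 2x2 agreement table: observed agreement
    corrected for the agreement expected by chance."""
    n = tp + fp + fn + tn
    po = (tp + tn) / n                           # observed agreement
    pe = ((tp + fp) / n) * ((tp + fn) / n) \
       + ((fn + tn) / n) * ((fp + tn) / n)       # chance agreement
    return (po - pe) / (1 - pe)

# Approximate counts: 306 fractured subjects (15.3% of 2000),
# sensitivity 0.81 -> ~248 TP / 58 FN; specificity 0.95 -> ~1609 TN / 85 FP
print(round(cohens_kappa(248, 85, 58, 1609), 2))
```

With imbalanced classes, kappa is more informative than raw accuracy: the 0.92 accuracy here partly reflects the 84.7% of subjects without a VF.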
Affiliation(s)
- Joeri Nicolaes
- Department of Electrical Engineering (ESAT), Center for Processing Speech and Images, KU Leuven, Leuven, Belgium
- UCB Pharma, Brussels, Belgium
- Michael Kriegbaum Skjødt
- Department of Medicine, Hospital of Holbaek, Holbaek, Denmark
- OPEN-Open Patient Data Explorative Network, Department of Clinical Research, University of Southern Denmark and Odense University Hospital, Odense, Denmark
- Christopher Dyer Smith
- OPEN-Open Patient Data Explorative Network, Department of Clinical Research, University of Southern Denmark and Odense University Hospital, Odense, Denmark
- Bo Abrahamsen
- Department of Medicine, Hospital of Holbaek, Holbaek, Denmark
- OPEN-Open Patient Data Explorative Network, Department of Clinical Research, University of Southern Denmark and Odense University Hospital, Odense, Denmark
- NDORMS, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Oxford University Hospitals, Oxford, UK
- Dirk Vandermeulen
- Department of Electrical Engineering (ESAT), Center for Processing Speech and Images, KU Leuven, Leuven, Belgium
49
Kwee TC, Yakar D, Sluijter TE, Pennings JP, Roest C. Can we revolutionize diagnostic imaging by keeping Pandora's box closed? Br J Radiol 2023; 96:20230505. [PMID: 37906185 PMCID: PMC10646642 DOI: 10.1259/bjr.20230505] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/05/2023] [Revised: 08/15/2023] [Accepted: 09/09/2023] [Indexed: 11/02/2023] Open
Abstract
Incidental imaging findings are a considerable health problem because they generally result in low-value and potentially harmful care. Healthcare professionals struggle with how to deal with them because, once detected, they usually cannot be ignored. In this opinion article, we first reflect on current practice, and then propose and discuss a new potential strategy to pre-emptively tackle incidental findings. The core principle of this concept is to keep the proverbial Pandora's box closed, i.e., not to visualize incidental findings, which can be achieved using deep learning algorithms. This concept may have profound implications for diagnostic imaging.
Affiliation(s)
- Thomas C Kwee
- Department of Radiology, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
- Derya Yakar
- Department of Radiology, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
- Tim E Sluijter
- Department of Radiology, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
- Jan P Pennings
- Department of Radiology, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
- Christian Roest
- Department of Radiology, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
50
Kiefer J, Kopp M, Ruettinger T, Heiss R, Wuest W, Amarteifio P, Stroebel A, Uder M, May MS. Diagnostic Accuracy and Performance Analysis of a Scanner-Integrated Artificial Intelligence Model for the Detection of Intracranial Hemorrhages in a Traumatology Emergency Department. Bioengineering (Basel) 2023; 10:1362. [PMID: 38135956 PMCID: PMC10740704 DOI: 10.3390/bioengineering10121362] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2023] [Revised: 11/03/2023] [Accepted: 11/19/2023] [Indexed: 12/24/2023] Open
Abstract
Intracranial hemorrhages require an immediate diagnosis to optimize patient management and outcomes, and CT is the modality of choice in the emergency setting. We aimed to evaluate the performance of the first scanner-integrated artificial intelligence algorithm to detect brain hemorrhages in a routine clinical setting. This retrospective study included 435 consecutive non-contrast head CT scans. Automatic brain hemorrhage detection was calculated as a separate reconstruction job in all cases. The radiological report was always conducted by a radiology resident and finalized by a senior radiologist. Additionally, a team of two radiologists reviewed the datasets retrospectively, taking additional information such as the clinical record, course, and final diagnosis into account. This consensus reading served as the reference standard. Diagnostic accuracy statistics were calculated. Brain hemorrhage detection was executed successfully in 432/435 (99%) of patient cases. The AI algorithm and reference standard were consistent in 392 (90.7%) cases. One false-negative case was identified within the 52 positive cases; however, 39 positive detections turned out to be false positives. The diagnostic performance was calculated as a sensitivity of 98.1%, specificity of 89.7%, positive predictive value of 56.7%, and negative predictive value (NPV) of 99.7%. Scanner-integrated AI detection of brain hemorrhages is feasible and robust, with high specificity and very high sensitivity and negative predictive value. However, the many false-positive findings resulted in a relatively moderate positive predictive value.
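The reported accuracy figures follow directly from the 2x2 confusion matrix implied by the abstract: 52 positive cases with 1 false negative give 51 true positives, 39 false positives leave 341 true negatives among the 432 evaluated scans. A short check of that arithmetic:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 diagnostic accuracy measures."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),   # positive predictive value
        "npv": tn / (tn + fn),   # negative predictive value
    }

# Counts from the abstract: 52 positives with 1 false negative,
# 39 false positives, 432 scans evaluated in total
tp, fn, fp = 52 - 1, 1, 39
tn = 432 - tp - fn - fp  # 341
m = diagnostic_metrics(tp, fp, fn, tn)
print({k: round(v * 100, 1) for k, v in m.items()})
# Reproduces the reported 98.1 / 89.7 / 56.7 / 99.7
```

The moderate PPV despite high sensitivity illustrates the base-rate effect: with only 52 true positives among 432 scans, even a modest false-positive rate dilutes the positive predictions.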
Affiliation(s)
- Jonas Kiefer
- Department of Radiology, University Hospital Erlangen, Friedrich-Alexander-Universität (FAU) Erlangen-Nürnberg, Maximiliansplatz 3, 91054 Erlangen, Germany; (J.K.); (T.R.); (R.H.); (M.U.)
- Markus Kopp
- Department of Radiology, University Hospital Erlangen, Friedrich-Alexander-Universität (FAU) Erlangen-Nürnberg, Maximiliansplatz 3, 91054 Erlangen, Germany; (J.K.); (T.R.); (R.H.); (M.U.)
- Imaging Science Institute, Ulmenweg 18, 91054 Erlangen, Germany;
- Theresa Ruettinger
- Department of Radiology, University Hospital Erlangen, Friedrich-Alexander-Universität (FAU) Erlangen-Nürnberg, Maximiliansplatz 3, 91054 Erlangen, Germany; (J.K.); (T.R.); (R.H.); (M.U.)
- Rafael Heiss
- Department of Radiology, University Hospital Erlangen, Friedrich-Alexander-Universität (FAU) Erlangen-Nürnberg, Maximiliansplatz 3, 91054 Erlangen, Germany; (J.K.); (T.R.); (R.H.); (M.U.)
- Imaging Science Institute, Ulmenweg 18, 91054 Erlangen, Germany;
- Wolfgang Wuest
- Martha-Maria Hospital Nuernberg, Stadenstraße 58, 90491 Nuernberg, Germany;
- Patrick Amarteifio
- Imaging Science Institute, Ulmenweg 18, 91054 Erlangen, Germany;
- Siemens Healthcare GmbH, Allee am Röthelheimpark 3, 91052 Erlangen, Germany
- Armin Stroebel
- Center for Clinical Studies CCS, University Hospital Erlangen, Friedrich-Alexander-Universität (FAU) Erlangen-Nürnberg, Krankenhausstraße 12, 91054 Erlangen, Germany;
- Michael Uder
- Department of Radiology, University Hospital Erlangen, Friedrich-Alexander-Universität (FAU) Erlangen-Nürnberg, Maximiliansplatz 3, 91054 Erlangen, Germany; (J.K.); (T.R.); (R.H.); (M.U.)
- Imaging Science Institute, Ulmenweg 18, 91054 Erlangen, Germany;
- Matthias Stefan May
- Department of Radiology, University Hospital Erlangen, Friedrich-Alexander-Universität (FAU) Erlangen-Nürnberg, Maximiliansplatz 3, 91054 Erlangen, Germany; (J.K.); (T.R.); (R.H.); (M.U.)
- Imaging Science Institute, Ulmenweg 18, 91054 Erlangen, Germany;