1
Hanneman K, Playford D, Dey D, van Assen M, Mastrodicasa D, Cook TS, Gichoya JW, Williamson EE, Rubin GD. Value Creation Through Artificial Intelligence and Cardiovascular Imaging: A Scientific Statement From the American Heart Association. Circulation 2024; 149:e296-e311. [PMID: 38193315] [DOI: 10.1161/cir.0000000000001202]
Abstract
Multiple applications for machine learning and artificial intelligence (AI) in cardiovascular imaging are being proposed and developed. However, the processes involved in implementing AI in cardiovascular imaging are highly diverse, varying by imaging modality, patient subtype, features to be extracted and analyzed, and clinical application. This article establishes a framework that defines value from an organizational perspective, followed by value chain analysis to identify the activities in which AI might produce the greatest incremental value creation. The various perspectives that should be considered are highlighted, including those of clinicians, imagers, hospitals, patients, and payers. Integrating the perspectives of all health care stakeholders is critical for creating value and ensuring the successful deployment of AI tools in a real-world setting. Different AI tools are summarized, along with the unique aspects of AI applications to various cardiac imaging modalities, including cardiac computed tomography, magnetic resonance imaging, and positron emission tomography. AI is applicable and has the potential to add value to cardiovascular imaging at every step along the patient journey, from selecting the most appropriate test to optimizing image acquisition and analysis, interpreting the results for classification and diagnosis, and predicting the risk for major adverse cardiac events.
2
Cloran FJ. Artificial Intelligence in Military Radiology-Clearing the Starting Gate: A Military Radiologist's Perspective. J Am Coll Radiol 2023; 20:857-858. [PMID: 37453597] [DOI: 10.1016/j.jacr.2023.06.012]
Affiliation(s)
- Francis J Cloran
- Assistant Professor, Uniformed Services University of the Health Sciences, San Antonio Uniformed Services Health Education Consortium, Fort Sam Houston, Texas.
3
Li J, Lin Y, Zhao P, Liu W, Cai L, Sun J, Zhao L, Yang Z, Song H, Lv H, Wang Z. Automatic text classification of actionable radiology reports of tinnitus patients using bidirectional encoder representations from transformer (BERT) and in-domain pre-training (IDPT). BMC Med Inform Decis Mak 2022; 22:200. [PMID: 35907966] [PMCID: PMC9338483] [DOI: 10.1186/s12911-022-01946-y]
Abstract
Background Given the increasing number of people suffering from tinnitus, the accurate categorization of patients with actionable reports is attractive in assisting clinical decision making. However, this process requires experienced physicians and significant human labor. Natural language processing (NLP) has shown great potential in big data analytics of medical texts; yet, its application to domain-specific analysis of radiology reports is limited. Objective The aim of this study is to propose a novel approach to classifying actionable radiology reports of tinnitus patients using bidirectional encoder representations from transformers (BERT)-based models and to evaluate the benefits of in-domain pre-training (IDPT) along with a sequence adaptation strategy. Methods A total of 5864 temporal bone computed tomography (CT) reports were labeled by two experienced radiologists as follows: (1) normal findings without notable lesions; (2) notable lesions but uncorrelated with tinnitus; and (3) at least one lesion considered a potential cause of tinnitus. We then constructed a framework consisting of deep learning (DL) neural networks and self-supervised BERT models. A tinnitus domain-specific corpus was used to pre-train the BERT model to further improve its embedding weights. In addition, we conducted an experiment evaluating multiple max-sequence-length settings in BERT to reduce the excessive quantity of calculations. After a comprehensive comparison of all metrics, we determined the most promising approach through comparison of F1 scores and AUC values. Results In the first experiment, the BERT fine-tuned model achieved a more promising result (AUC 0.868, F1 0.760) than the Word2Vec-based models (AUC 0.767, F1 0.733) on validation data. In the second experiment, the BERT in-domain pre-training model (AUC 0.948, F1 0.841) performed significantly better than the BERT base model (AUC 0.868, F1 0.760). Additionally, among the variants of BERT fine-tuning models, Mengzi achieved the highest AUC of 0.878 (F1 0.764). Finally, we found that a BERT max sequence length of 128 tokens achieved an AUC of 0.866 (F1 0.736), almost equal to that of the 512-token setting (AUC 0.868, F1 0.760). Conclusion We developed a reliable BERT-based framework for tinnitus diagnosis from Chinese radiology reports, along with a sequence adaptation strategy that reduces computational resources while maintaining accuracy. The findings could provide a reference for NLP development in Chinese radiology reports. Supplementary Information The online version contains supplementary material available at 10.1186/s12911-022-01946-y.
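The model comparison above is scored with per-class F1 and AUC over the three report labels. As a minimal, illustrative sketch (not the authors' code), macro-averaged F1 over those three label classes can be computed as:

```python
# Macro-averaged F1 over the three report labels used in the study:
# 1 = normal, 2 = notable lesions unrelated to tinnitus, 3 = potential tinnitus cause.

LABELS = (1, 2, 3)

def f1_for_label(y_true, y_pred, label):
    """One-vs-rest F1 for a single label."""
    tp = sum(t == label and p == label for t, p in zip(y_true, y_pred))
    fp = sum(t != label and p == label for t, p in zip(y_true, y_pred))
    fn = sum(t == label and p != label for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0

def macro_f1(y_true, y_pred):
    """Unweighted mean of per-label F1 scores."""
    return sum(f1_for_label(y_true, y_pred, l) for l in LABELS) / len(LABELS)
```

Macro averaging weights each label equally regardless of class frequency, which matters here because actionable (label 3) reports are the minority class of interest.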
Affiliation(s)
- Jia Li
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, No. 95 YongAn Road, Beijing, 100050, People's Republic of China
- Yucong Lin
- School of Medical Technology, Beijing Institute of Technology, No. 5 Zhongguancun East Road, Beijing, 100050, People's Republic of China
- Pengfei Zhao
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, No. 95 YongAn Road, Beijing, 100050, People's Republic of China
- Wenjuan Liu
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, No. 95 YongAn Road, Beijing, 100050, People's Republic of China
- Linkun Cai
- School of Biological Science and Medical Engineering, Beihang University, No. 37 XueYuan Road, Beijing, 100191, People's Republic of China
- Jing Sun
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, No. 95 YongAn Road, Beijing, 100050, People's Republic of China
- Lei Zhao
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, No. 95 YongAn Road, Beijing, 100050, People's Republic of China
- Zhenghan Yang
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, No. 95 YongAn Road, Beijing, 100050, People's Republic of China
- Hong Song
- School of Computer Science and Technology, Beijing Institute of Technology, No. 5 South Street, Zhongguancun, Haidian District, Beijing, 100050, People's Republic of China
- Han Lv
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, No. 95 YongAn Road, Beijing, 100050, People's Republic of China
- Zhenchang Wang
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, No. 95 YongAn Road, Beijing, 100050, People's Republic of China; School of Biological Science and Medical Engineering, Beihang University, No. 37 XueYuan Road, Beijing, 100191, People's Republic of China
4
Zhang D, Neely B, Lo JY, Patel BN, Hyslop T, Gupta RT. Utility of a Rule-Based Algorithm in the Assessment of Standardized Reporting in PI-RADS. Acad Radiol 2022; 30:1141-1147. [PMID: 35909050] [DOI: 10.1016/j.acra.2022.06.024]
Abstract
RATIONALE AND OBJECTIVES Adoption of the Prostate Imaging Reporting & Data System (PI-RADS) has been shown to increase detection of clinically significant prostate cancer on prostate mpMRI. We propose that a rule-based algorithm based on Regular Expression (RegEx) matching can be used to automatically sort prostate mpMRI reports into PI-RADS categories as a means of identifying opportunities for quality improvement. MATERIALS AND METHODS All prostate mpMRIs performed in the Duke University Health System from January 2, 2015, to January 29, 2021, were analyzed. Exclusion criteria were applied, yielding a total of 5343 male patients and 6264 prostate mpMRI reports. These reports were then categorized by our RegEx algorithm as PI-RADS 1 through PI-RADS 5, Recurrent Disease, or "No Information Available." A stratified, random sample of 502 mpMRI reports was reviewed by a blinded clinical team to assess the performance of the RegEx algorithm. RESULTS Compared to manual review, the RegEx algorithm achieved an overall accuracy of 92.6%, average precision of 88.8%, average recall of 85.6%, and an F1 score of 0.871. The clinical team also reviewed 344 cases classified as "No Information Available" and found that in 150 instances, no numerical PI-RADS score for any lesion was included in the impression section of the mpMRI report. CONCLUSION Rule-based processing is an accurate method for the large-scale, automated extraction of PI-RADS scores from the text of radiology reports. These natural language processing approaches can be used for future quality improvement initiatives in prostate mpMRI reporting with PI-RADS.
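The study's actual rule set is not published in the abstract; a minimal RegEx sketch of the same idea, with hypothetical patterns and an assumed convention of keying the report to its highest-scoring lesion, might look like:

```python
import re

# Hypothetical patterns for illustration only; the study's rule set is not given.
PIRADS_RE = re.compile(
    r"PI-?RADS(?:\s*(?:v2)?\s*(?:category|score)?)?\s*[:=]?\s*([1-5])",
    re.IGNORECASE,
)
RECURRENCE_RE = re.compile(r"recurren(?:t|ce)", re.IGNORECASE)

def categorize(impression: str) -> str:
    """Map a report impression to a PI-RADS category, Recurrent Disease,
    or 'No Information Available' when no score can be extracted."""
    scores = [int(m) for m in PIRADS_RE.findall(impression)]
    if scores:
        return f"PI-RADS {max(scores)}"  # assumed: report keyed to highest-scoring lesion
    if RECURRENCE_RE.search(impression):
        return "Recurrent Disease"
    return "No Information Available"
```

Reports with no extractable score fall into the "No Information Available" bucket, which, as the abstract notes, the clinical team then reviewed manually.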
5
Donnelly LF, Grzeszczuk R, Guimaraes CV. Use of Natural Language Processing (NLP) in Evaluation of Radiology Reports: An Update on Applications and Technology Advances. Semin Ultrasound CT MR 2022; 43:176-181. [PMID: 35339258] [DOI: 10.1053/j.sult.2022.02.007]
Abstract
Natural language processing (NLP) is focused on the computer interpretation of human language. It can be used to evaluate radiology reports and has demonstrated useful applications in essentially all aspects of medical imaging delivery: interpretation of imaging data, improving image acquisition, image analysis, and increasing efficiency of imaging services. This manuscript reviews general technologic approaches to NLP at a level intended to be understandable by clinical radiologists, discusses recent advancements in NLP techniques, and discusses current and potential applications of NLP in radiology.
Affiliation(s)
- Lane F Donnelly
- University of North Carolina, School of Medicine, Department of Radiology, Chapel Hill, NC; Stanford University, School of Medicine, Department of Radiology, Palo Alto, CA; Stanford University, School of Medicine, Department of Pediatrics, Palo Alto, CA.
- Carolina V Guimaraes
- University of North Carolina, School of Medicine, Department of Radiology, Chapel Hill, NC; Stanford University, School of Medicine, Department of Radiology, Palo Alto, CA
6
Artificial Intelligence Application in Assessment of Panoramic Radiographs. Diagnostics (Basel) 2022; 12:224. [PMID: 35054390] [PMCID: PMC8774336] [DOI: 10.3390/diagnostics12010224]
Abstract
The aim of this study was to assess the reliability of artificial intelligence (AI)-based automatic evaluation of panoramic radiographs (PRs). Thirty PRs, each covering at least six teeth with the possibility of assessing the marginal and apical periodontium, were uploaded to the Diagnocat (LLC Diagnocat, Moscow, Russia) account, and the radiologic report of each was generated as the basis of automatic evaluation. The same PRs were manually evaluated by three independent evaluators with 12, 15, and 28 years of experience in dentistry, respectively. The data were collected in such a way as to allow statistical analysis with SPSS Statistics software (IBM, Armonk, NY, USA). A total of 90 reports were created for the 30 PRs. The AI protocol showed very high specificity (above 0.9) in all assessments compared with ground truth except for periodontal bone loss. Statistical analysis showed a high intraclass correlation coefficient (ICC > 0.75) for all interevaluator assessments, proving the good credibility of the ground truth and the reproducibility of the reports. Unacceptable reliability was obtained for the assessment of caries (ICC = 0.681) and periapical lesions (ICC = 0.619). The tested AI system can be helpful as an initial evaluation of screening PRs, producing reports of appropriate credibility and suggesting additional diagnostic methods for more accurate evaluation when needed.
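The per-finding specificity reported above is TN / (TN + FP) against the ground-truth evaluation. A minimal sketch, assuming a hypothetical 0/1 encoding of finding absent/present:

```python
def specificity(y_true, y_pred):
    """Specificity = TN / (TN + FP), where 1 marks a finding present
    and 0 marks it absent in the ground truth and the AI report."""
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tn / (tn + fp) if tn + fp else float("nan")
```

High specificity means the system rarely flags findings on healthy teeth, which is the relevant property for the screening use the authors suggest.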