2. Kim H, Kim K, Oh SJ, Lee S, Woo JH, Kim JH, Cha YK, Kim K, Chung MJ. AI-assisted Analysis to Facilitate Detection of Humeral Lesions on Chest Radiographs. Radiol Artif Intell 2024;6:e230094. PMID: 38446041; PMCID: PMC11140509; DOI: 10.1148/ryai.230094. Received 25 Mar 2023; revised 10 Jan 2024; accepted 15 Feb 2024; indexed 7 Mar 2024.
Abstract
Purpose To develop an artificial intelligence (AI) system for humeral tumor detection on chest radiographs (CRs) and evaluate its impact on reader performance. Materials and Methods In this retrospective study, 14 709 CRs (January 2000 to December 2021) were collected from 13 468 patients, including CT-proven normal (n = 13 116) and humeral tumor (n = 1593) cases. The data were divided into training and test groups. A novel training method called false-positive activation area reduction (FPAR) was introduced to enhance diagnostic performance by focusing on the humeral region. The AI program and 10 radiologists were assessed using holdout test set 1, wherein the radiologists were tested twice (with and without AI assistance). The performance of the AI system was further evaluated using holdout test set 2, comprising 10 497 normal images. Receiver operating characteristic analyses were conducted to evaluate model performance. Results FPAR application in the AI program improved its performance compared with a conventional model based on the area under the receiver operating characteristic curve (0.87 vs 0.82, P = .04). The proposed AI system also demonstrated improved tumor localization accuracy (80% vs 57%, P < .001). In holdout test set 2, the proposed AI system exhibited a false-positive rate of 2%. AI assistance improved the radiologists' sensitivity, specificity, and accuracy by 8.9%, 1.2%, and 3.5%, respectively (P < .05 for all). Conclusion The proposed AI tool incorporating FPAR improved humeral tumor detection on CRs and reduced false-positive results in tumor visualization. It may serve as a supportive diagnostic tool to alert radiologists to humeral abnormalities. Keywords: Artificial Intelligence, Conventional Radiography, Humerus, Machine Learning, Shoulder, Tumor. Supplemental material is available for this article. © RSNA, 2024.
Affiliation(s)
- Harim Kim
- Kyungsu Kim
- Seong Je Oh
- Sungjoo Lee
- Jung Han Woo
- Jong Hee Kim
- Yoon Ki Cha
- Kyunga Kim
- Myung Jin Chung
- From the Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-Ro, Gangnam-Gu, Seoul 06351, South Korea (H.K., J.H.W., J.H.K., Y.K.C., M.J.C.); Medical AI Research Center, Samsung Medical Center, Seoul, South Korea (Kyungsu Kim, M.J.C.); Department of Data Convergence and Future Medicine, Sungkyunkwan University School of Medicine, Seoul, South Korea (Kyungsu Kim, Kyunga Kim, M.J.C.); and Department of Health Sciences and Technology (S.J.O.) and Department of Digital Health (S.L., Kyunga Kim), Samsung Advanced Institute for Health Sciences & Technology, Sungkyunkwan University, Seoul, South Korea
3. Neves J, Hsieh C, Nobre IB, Sousa SC, Ouyang C, Maciel A, Duchowski A, Jorge J, Moreira C. Shedding Light on AI in Radiology: A Systematic Review and Taxonomy of Eye Gaze-driven Interpretability in Deep Learning. Eur J Radiol 2024;172:111341. PMID: 38340426; DOI: 10.1016/j.ejrad.2024.111341. Received 31 Oct 2023; revised 4 Jan 2024; accepted 25 Jan 2024; indexed 12 Feb 2024.
Abstract
X-ray imaging plays a crucial role in diagnostic medicine. Yet, a significant portion of the global population lacks access to this essential technology because of a shortage of trained radiologists. Eye-tracking data and deep learning models can enhance X-ray analysis by mapping expert focus areas, guiding automated anomaly detection, optimizing workflow efficiency, and bolstering training methods for novice radiologists. However, the literature shows contradictory results regarding the usefulness of eye-tracking data in deep learning architectures for abnormality detection. We argue that these discrepancies between studies are due to (a) the way eye-tracking data is (or is not) processed, (b) the types of deep learning architectures chosen, and (c) the type of application these architectures will have. We conducted a systematic literature review using PRISMA to address these contradictory results. We analyzed 60 studies that incorporated eye-tracking data in a deep learning approach for different application goals in radiology. We performed a comparative analysis to understand whether eye gaze data contains feature maps that can be useful under a deep learning approach and whether they can promote more interpretable predictions. To the best of our knowledge, this is the first survey in the area to perform a thorough investigation of eye gaze data processing techniques and their impacts on different deep learning architectures for applications such as error detection, classification, object detection, expertise level analysis, fatigue estimation, and human attention prediction in medical imaging data. Our analysis resulted in two main contributions: (1) a taxonomy that first divides the literature by task, enabling us to analyze the value eye movement can bring to each case and to build guidelines on the architectures and gaze processing techniques adequate for each application, and (2) an overall analysis of how eye gaze data can promote explainability in radiology.
Affiliation(s)
- José Neves
- Instituto Superior Técnico / INESC-ID, University of Lisbon, Portugal.
- Chihcheng Hsieh
- School of Information Systems, Queensland University of Technology, Australia.
- Chun Ouyang
- School of Information Systems, Queensland University of Technology, Australia.
- Anderson Maciel
- Instituto Superior Técnico / INESC-ID, University of Lisbon, Portugal.
- Joaquim Jorge
- Instituto Superior Técnico / INESC-ID, University of Lisbon, Portugal.
- Catarina Moreira
- Human Technology Institute, University of Technology Sydney, Australia.