1
Kwak SH, Kim KY, Choi JS, Kim MC, Seol CH, Kim SR, Lee EH. Impact of AI-assisted CXR analysis in detecting incidental lung nodules and lung cancers in non-respiratory outpatient clinics. Front Med (Lausanne) 2024; 11:1449537. [PMID: 39170040 PMCID: PMC11335519 DOI: 10.3389/fmed.2024.1449537] [Received: 06/15/2024] [Accepted: 07/29/2024] [Indexed: 08/23/2024] Open
Abstract
Purpose The use of artificial intelligence (AI) for chest X-ray (CXR) analysis is becoming increasingly prevalent in medical environments. This study aimed to determine whether AI-based CXR analysis can incidentally detect lung nodules and influence patient diagnosis and management in non-respiratory outpatient clinics. Methods In this retrospective study, patients over 18 years of age who underwent CXR at Yongin Severance Hospital outpatient clinics between March 2021 and January 2023 and were identified to have lung nodules through AI software were included. Commercially available AI-based lesion detection software (Lunit INSIGHT CXR) was used to detect lung nodules. Results Of 56,802 radiographic procedures, 40,191 were from non-respiratory departments, with AI detecting lung nodules in 1,754 cases (4.4%). Excluding 139 patients with known lung lesions, 1,615 patients were included in the final analysis. Of these, 30.7% (495/1,615) underwent respiratory consultation and 31.7% (512/1,615) underwent chest CT scans. Of the CT scans, 71.5% (366 cases) revealed true nodules. Among these, the final diagnoses included 36 lung cancers (7.0%, 36/512), 141 lung nodules requiring follow-up (27.5%, 141/512), 114 active pulmonary infections (22.3%, 114/512), and 75 old inflammatory sequelae (14.6%, 75/512). The mean AI nodule score for lung cancer was significantly higher than that for other nodules (56.72 vs. 33.44, p < 0.001). Additionally, active pulmonary infection had a higher consolidation score, and old inflammatory sequelae had the highest fibrosis score, demonstrating differences in the AI analysis among the final diagnosis groups.
Conclusion This study indicates that AI-detected incidental nodule abnormalities on CXR in non-respiratory outpatient clinics result in a substantial number of clinically significant diagnoses, emphasizing AI's role in detecting lung nodules and the need for further evaluation and specialist consultation for proper diagnosis and management.
Affiliation(s)
- Se Hyun Kwak
- Division of Pulmonology, Allergy and Critical Care Medicine, Department of Internal Medicine, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin-si, Republic of Korea
| | - Kyeong Yeon Kim
- Division of Pulmonology, Allergy and Critical Care Medicine, Department of Internal Medicine, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin-si, Republic of Korea
| | - Ji Soo Choi
- Division of Pulmonology, Allergy and Critical Care Medicine, Department of Internal Medicine, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin-si, Republic of Korea
| | - Min Chul Kim
- Division of Pulmonology, Allergy and Critical Care Medicine, Department of Internal Medicine, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin-si, Republic of Korea
| | - Chang Hwan Seol
- Division of Pulmonology, Allergy and Critical Care Medicine, Department of Internal Medicine, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin-si, Republic of Korea
| | - Sung Ryeol Kim
- Division of Pulmonology, Allergy and Critical Care Medicine, Department of Internal Medicine, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin-si, Republic of Korea
| | - Eun Hye Lee
- Division of Pulmonology, Allergy and Critical Care Medicine, Department of Internal Medicine, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin-si, Republic of Korea
- Center for Digital Health, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin-si, Gyeonggi-do, Republic of Korea
| |
2
Agrawal R, Mishra S, Strange CD, Ahuja J, Shroff GS, Wu CC, Truong MT. The Role of Chest Radiography in Lung Cancer. Semin Ultrasound CT MR 2024:S0887-2171(24)00047-7. [PMID: 39067623 DOI: 10.1053/j.sult.2024.07.007] [Indexed: 07/30/2024]
Abstract
Chest radiography is one of the most commonly performed imaging tests, and its benefits include accessibility, speed, low cost, and relatively low radiation exposure. Lung cancer is the third most common cancer in the United States and is responsible for the most cancer deaths. Knowledge of the role of chest radiography in assessing patients with lung cancer is important. This article discusses radiographic manifestations of lung cancer, the utility of chest radiography in lung cancer management, as well as the limitations of chest radiography and when computed tomography (CT) is indicated.
Affiliation(s)
- Rishi Agrawal
- Department of Thoracic Imaging, University of Texas MD Anderson Cancer Center, Houston, TX.
| | - Shubendu Mishra
- Department of Radiation Oncology, University of Minnesota, Minneapolis, MN
| | - Chad D Strange
- Department of Thoracic Imaging, University of Texas MD Anderson Cancer Center, Houston, TX
| | - Jitesh Ahuja
- Department of Thoracic Imaging, University of Texas MD Anderson Cancer Center, Houston, TX
| | - Girish S Shroff
- Department of Thoracic Imaging, University of Texas MD Anderson Cancer Center, Houston, TX
| | - Carol C Wu
- Department of Thoracic Imaging, University of Texas MD Anderson Cancer Center, Houston, TX
| | - Mylene T Truong
- Department of Thoracic Imaging, University of Texas MD Anderson Cancer Center, Houston, TX
| |
3
Selvakumar K, Lokesh S. Deep-KEDI: Deep learning-based zigzag generative adversarial network for encryption and decryption of medical images. Technol Health Care 2024:THC231927. [PMID: 38968065 DOI: 10.3233/thc-231927] [Indexed: 07/07/2024]
Abstract
BACKGROUND Medical imaging techniques have improved to the point where security has become a basic requirement for all applications to ensure data security and data transmission over the internet. However, clinical images hold personal and sensitive data related to patients, and their disclosure has a negative impact on the right to privacy as well as legal ramifications for hospitals. OBJECTIVE In this research, a novel deep learning-based key generation network (Deep-KEDI) is designed to produce the secure key used for encrypting and decrypting medical images. METHODS Initially, medical images are pre-processed by adding speckle noise using the discrete ripplet transform before encryption; the noise is removed after decryption for additional security. In the Deep-KEDI model, the zigzag generative adversarial network (ZZ-GAN) is used as the learning network to generate the secret key. RESULTS The proposed ZZ-GAN enables secure encryption by generating three different zigzag patterns (vertical, horizontal, diagonal) of encrypted images with its key. The zigzag cipher uses an XOR operation in both encryption and decryption using the proposed ZZ-GAN. Encrypting the original image requires a secret key generated during encryption; the encrypted image is then decrypted with the same generated key to reverse the process. Finally, speckle noise is removed from the decrypted image to reconstruct the original image. CONCLUSION According to the experiments, the Deep-KEDI model generates secret keys with an information entropy of 7.45, which is particularly suitable for securing medical images.
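To make the XOR-based zigzag cipher described in this abstract concrete, here is a minimal sketch of the general idea: traverse the image along a diagonal zigzag path and XOR each pixel with a key stream. This is an illustration only, not the authors' Deep-KEDI/ZZ-GAN implementation; in the paper the key is produced by the ZZ-GAN network, whereas here it is a plain byte string.

```python
# Illustrative sketch of an XOR zigzag cipher (NOT the Deep-KEDI implementation).

def zigzag_indices(rows, cols):
    """Yield (r, c) positions in diagonal zigzag order (as in JPEG scanning)."""
    for s in range(rows + cols - 1):
        diag = [(r, s - r) for r in range(rows) if 0 <= s - r < cols]
        yield from (diag if s % 2 == 0 else reversed(diag))

def zigzag_xor(image, key):
    """XOR pixel values with a cycling key stream along the zigzag path.
    Because XOR is its own inverse, applying the same function twice with
    the same key restores the original image."""
    out = [row[:] for row in image]
    for i, (r, c) in enumerate(zigzag_indices(len(image), len(image[0]))):
        out[r][c] ^= key[i % len(key)]
    return out

image = [[10, 20, 30], [40, 50, 60], [70, 80, 90]]  # toy 3x3 "medical image"
key = b"\x5a\xc3\x7e"                               # stand-in for a GAN-generated key
encrypted = zigzag_xor(image, key)
decrypted = zigzag_xor(encrypted, key)
assert decrypted == image
```

The same symmetry (encrypt and decrypt being one operation) is what the abstract refers to when it says the zigzag cipher uses XOR in both directions.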
Affiliation(s)
- K Selvakumar
- Department of Science and Humanities, Anna University, Chennai, India
- University College of Engineering, Nagercoil, India
| | - S Lokesh
- Department of Computer Science and Engineering, PSG Institute of Technology and Applied Research, Coimbatore, India
| |
4
Yoo H, Yoo RE, Choi SH, Hwang I, Lee JY, Seo JY, Koh SY, Choi KS, Kang KM, Yun TJ. Deep learning-based reconstruction for acceleration of lumbar spine MRI: a prospective comparison with standard MRI. Eur Radiol 2023; 33:8656-8668. [PMID: 37498386 DOI: 10.1007/s00330-023-09918-0] [Received: 01/31/2023] [Revised: 05/28/2023] [Accepted: 05/31/2023] [Indexed: 07/28/2023]
Abstract
OBJECTIVE To compare the image quality and diagnostic performance between standard turbo spin-echo MRI and accelerated MRI with deep learning (DL)-based image reconstruction for degenerative lumbar spine diseases. MATERIALS AND METHODS Fifty patients who underwent both the standard and accelerated lumbar MRI at a 1.5-T scanner for degenerative lumbar spine diseases were prospectively enrolled. The DL reconstruction algorithm generated coarse (DL_coarse) and fine (DL_fine) images from the accelerated protocol. Image quality was quantitatively assessed in terms of signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) and qualitatively assessed using five-point visual scoring systems. The sensitivity and specificity of four radiologists for the diagnosis of degenerative diseases in both protocols were compared. RESULTS The accelerated protocol reduced the average MRI acquisition time by 32.3% compared with the standard protocol. Compared with standard images, DL_coarse and DL_fine showed significantly higher SNRs on T1-weighted images (T1WI; both p < 0.001) and T2-weighted images (T2WI; p = 0.002 and p < 0.001), higher CNRs on T1WI (both p < 0.001), and similar CNRs on T2WI (p = 0.49 and p = 0.27). The average radiologist assessment of overall image quality for DL_coarse and DL_fine was higher on sagittal T1WI (p = 0.04 and p < 0.001) and axial T2WI (p = 0.006 and p = 0.01) and similar on sagittal T2WI (p = 0.90 and p = 0.91). Both DL_coarse and DL_fine had better image quality of the cauda equina and paraspinal muscles on axial T2WI (both p = 0.04 for cauda equina; p = 0.008 and p = 0.002 for paraspinal muscles). Differences in sensitivity and specificity for the detection of central canal stenosis and neural foraminal stenosis between standard and DL-reconstructed images were all statistically nonsignificant (p ≥ 0.05).
CONCLUSION The DL-based protocol reduced MRI acquisition time without degrading image quality or the diagnostic performance of readers for degenerative lumbar spine diseases. CLINICAL RELEVANCE STATEMENT The deep learning (DL)-based reconstruction algorithm may be used to further accelerate spine MRI, reducing patient discomfort and increasing the cost efficiency of spine imaging. KEY POINTS • By using the deep learning (DL)-based reconstruction algorithm in combination with the accelerated MRI protocol, the average acquisition time was reduced by 32.3% compared with the standard protocol. • DL-reconstructed images had similar or better quantitative/qualitative overall image quality and similar or better image quality for the delineation of most individual anatomical structures. • The average radiologist's sensitivity and specificity for the detection of major degenerative lumbar spine diseases, including central canal stenosis, neural foraminal stenosis, and disc herniation, on standard and DL-reconstructed images were similar.
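For readers unfamiliar with the quantitative metrics in this abstract, SNR and CNR are typically computed from region-of-interest (ROI) pixel statistics. The sketch below shows the standard textbook definitions; the ROI choices and exact formulas used by this particular study are assumptions and may differ.

```python
# Generic ROI-based SNR/CNR definitions (illustrative; the study's exact
# measurement protocol may differ).
import statistics

def snr(signal_roi, background_roi):
    """Signal-to-noise ratio: mean signal over background standard deviation."""
    return statistics.mean(signal_roi) / statistics.stdev(background_roi)

def cnr(roi_a, roi_b, background_roi):
    """Contrast-to-noise ratio: absolute difference of two tissue means
    over the background standard deviation."""
    noise = statistics.stdev(background_roi)
    return abs(statistics.mean(roi_a) - statistics.mean(roi_b)) / noise

roi_signal = [105, 98, 101, 100]  # hypothetical pixel values in a tissue ROI
roi_air = [3, 5, 4, 6]            # hypothetical background/air ROI
print(round(snr(roi_signal, roi_air), 1))  # ≈ 78.2
```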
Affiliation(s)
- Hyunsuk Yoo
- Department of Radiology, Seoul National University Hospital, Seoul National University College of Medicine, 101, Daehangno, Jongno-gu, Seoul, 03080, Republic of Korea
| | - Roh-Eul Yoo
- Department of Radiology, Seoul National University Hospital, Seoul National University College of Medicine, 101, Daehangno, Jongno-gu, Seoul, 03080, Republic of Korea.
| | - Seung Hong Choi
- Department of Radiology, Seoul National University Hospital, Seoul National University College of Medicine, 101, Daehangno, Jongno-gu, Seoul, 03080, Republic of Korea
- Center for Nanoparticle Research, Institute for Basic Science (IBS), Seoul, Republic of Korea
- School of Chemical and Biological Engineering, Seoul National University, Seoul, Republic of Korea
| | - Inpyeong Hwang
- Department of Radiology, Seoul National University Hospital, Seoul National University College of Medicine, 101, Daehangno, Jongno-gu, Seoul, 03080, Republic of Korea
| | - Ji Ye Lee
- Department of Radiology, Seoul National University Hospital, Seoul National University College of Medicine, 101, Daehangno, Jongno-gu, Seoul, 03080, Republic of Korea
| | - June Young Seo
- Department of Radiology, Seoul National University Hospital, Seoul National University College of Medicine, 101, Daehangno, Jongno-gu, Seoul, 03080, Republic of Korea
| | - Seok Young Koh
- Department of Radiology, Seoul National University Hospital, Seoul National University College of Medicine, 101, Daehangno, Jongno-gu, Seoul, 03080, Republic of Korea
| | - Kyu Sung Choi
- Department of Radiology, Seoul National University Hospital, Seoul National University College of Medicine, 101, Daehangno, Jongno-gu, Seoul, 03080, Republic of Korea
| | - Koung Mi Kang
- Department of Radiology, Seoul National University Hospital, Seoul National University College of Medicine, 101, Daehangno, Jongno-gu, Seoul, 03080, Republic of Korea
| | - Tae Jin Yun
- Department of Radiology, Seoul National University Hospital, Seoul National University College of Medicine, 101, Daehangno, Jongno-gu, Seoul, 03080, Republic of Korea
| |
5
Dasegowda G, Bizzo BC, Gupta RV, Kaviani P, Ebrahimian S, Ricciardelli D, Abedi-Tari F, Neumark N, Digumarthy SR, Kalra MK, Dreyer KJ. Radiologist-Trained AI Model for Identifying Suboptimal Chest-Radiographs. Acad Radiol 2023; 30:2921-2930. [PMID: 37019698 DOI: 10.1016/j.acra.2023.03.006] [Received: 02/09/2023] [Revised: 02/28/2023] [Accepted: 03/06/2023] [Indexed: 04/05/2023]
Abstract
RATIONALE AND OBJECTIVES Suboptimal chest radiographs (CXR) can limit interpretation of critical findings. Radiologist-trained AI models were evaluated for differentiating suboptimal (sCXR) and optimal (oCXR) chest radiographs. MATERIALS AND METHODS Our IRB-approved study included 3278 CXRs from adult patients (mean age 55 ± 20 years) identified from a retrospective search of CXR radiology reports from 5 sites. A chest radiologist reviewed all CXRs for the cause of suboptimality. The de-identified CXRs were uploaded into an AI server application for training and testing 5 AI models. The training set consisted of 2202 CXRs (n = 807 oCXR; n = 1395 sCXR), while 1076 CXRs (n = 729 sCXR; n = 347 oCXR) were used for testing. Data were analyzed with the area under the curve (AUC) for the models' ability to classify oCXR and sCXR correctly. RESULTS For the two-class classification into sCXR or oCXR from all sites, AI identified CXRs with missing anatomy with a sensitivity, specificity, accuracy, and AUC of 78%, 95%, 91%, and 0.87 (95% CI 0.82-0.92), respectively. AI identified obscured thoracic anatomy with 91% sensitivity, 97% specificity, 95% accuracy, and 0.94 AUC (95% CI 0.90-0.97), and inadequate exposure with 90% sensitivity, 93% specificity, 92% accuracy, and an AUC of 0.91 (95% CI 0.88-0.95). The presence of low lung volume was identified with 96% sensitivity, 92% specificity, 93% accuracy, and 0.94 AUC (95% CI 0.92-0.96). The sensitivity, specificity, accuracy, and AUC of AI in identifying patient rotation were 92%, 96%, 95%, and 0.94 (95% CI 0.91-0.98), respectively. CONCLUSION The radiologist-trained AI models can accurately classify optimal and suboptimal CXRs. Such AI models at the front end of radiographic equipment could enable radiographers to repeat sCXRs when necessary.
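The sensitivity, specificity, and accuracy figures reported in abstracts like this one follow directly from a 2x2 confusion matrix. A minimal sketch, using hypothetical counts rather than the study's data:

```python
def binary_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)              # true-positive rate among actual positives
    specificity = tn / (tn + fp)              # true-negative rate among actual negatives
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Hypothetical counts (not from the study):
sens, spec, acc = binary_metrics(tp=90, fp=7, tn=93, fn=10)
assert abs(sens - 0.90) < 1e-9 and abs(spec - 0.93) < 1e-9
```

AUC, by contrast, is threshold-independent and summarizes the trade-off between sensitivity and specificity across all operating points.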
Affiliation(s)
- Giridhar Dasegowda
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, 75 Blossom Court, Suite 248, Boston, MA 02114; Mass General Brigham Data Science Office (DSO), 100 Cambridge St, Boston, MA, US 02114
| | - Bernardo C Bizzo
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, 75 Blossom Court, Suite 248, Boston, MA 02114; Mass General Brigham Data Science Office (DSO), 100 Cambridge St, Boston, MA, US 02114
| | - Reya V Gupta
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, 75 Blossom Court, Suite 248, Boston, MA 02114
| | - Parisa Kaviani
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, 75 Blossom Court, Suite 248, Boston, MA 02114; Mass General Brigham Data Science Office (DSO), 100 Cambridge St, Boston, MA, US 02114
| | - Shadi Ebrahimian
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, 75 Blossom Court, Suite 248, Boston, MA 02114; Mass General Brigham Data Science Office (DSO), 100 Cambridge St, Boston, MA, US 02114
| | - Debra Ricciardelli
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, 75 Blossom Court, Suite 248, Boston, MA 02114
| | - Faezeh Abedi-Tari
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, 75 Blossom Court, Suite 248, Boston, MA 02114
| | - Nir Neumark
- Mass General Brigham Data Science Office (DSO), 100 Cambridge St, Boston, MA, US 02114
| | - Subba R Digumarthy
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, 75 Blossom Court, Suite 248, Boston, MA 02114; Mass General Brigham Data Science Office (DSO), 100 Cambridge St, Boston, MA, US 02114
| | - Mannudeep K Kalra
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, 75 Blossom Court, Suite 248, Boston, MA 02114; Mass General Brigham Data Science Office (DSO), 100 Cambridge St, Boston, MA, US 02114.
| | - Keith J Dreyer
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, 75 Blossom Court, Suite 248, Boston, MA 02114; Mass General Brigham Data Science Office (DSO), 100 Cambridge St, Boston, MA, US 02114
| |
6
Higuchi M, Nagata T, Iwabuchi K, Sano A, Maekawa H, Idaka T, Yamasaki M, Seko C, Sato A, Suzuki J, Anzai Y, Yabuki T, Saito T, Suzuki H. Development of a novel artificial intelligence algorithm to detect pulmonary nodules on chest radiography. Fukushima J Med Sci 2023; 69:177-183. [PMID: 37853640 PMCID: PMC10694515 DOI: 10.5387/fms.2023-14] [Received: 04/11/2023] [Accepted: 09/15/2023] [Indexed: 10/20/2023] Open
Abstract
BACKGROUND In this study, we aimed to develop a novel artificial intelligence (AI) algorithm to support pulmonary nodule detection, enabling physicians to efficiently interpret chest radiographs for lung cancer diagnosis. METHODS We analyzed chest X-ray images obtained from a health examination center in Fukushima and from the National Institutes of Health (NIH) ChestX-ray14 dataset. We categorized these data into two types: type A included both the Fukushima and NIH datasets, and type B included only the Fukushima dataset. We also displayed detected pulmonary nodules as a heatmap on each chest radiograph and calculated a positive probability score as an index value. RESULTS Our novel AI algorithm had a receiver operating characteristic (ROC) area under the curve (AUC) of 0.74, a sensitivity of 0.75, and a specificity of 0.60 for the type A dataset. For the type B dataset, the respective values were 0.79, 0.72, and 0.74. On both datasets, the algorithm was superior to the accuracy of radiologists and comparable to previous studies. CONCLUSIONS The proprietary AI algorithm interpreted chest radiographs with accuracy similar to that of previous studies and radiologists. Notably, a high-quality AI algorithm could be trained even with the small type B dataset. However, further studies are needed to improve and validate the accuracy of our AI algorithm.
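The ROC AUC reported here has a simple rank-based interpretation: it is the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case (ties count half). A minimal sketch of that identity, purely illustrative and not the authors' evaluation code:

```python
def roc_auc(pos_scores, neg_scores):
    """ROC AUC via the Mann-Whitney rank identity: the probability that a
    positive case outscores a negative one, counting ties as 0.5."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            wins += 1.0 if p > n else (0.5 if p == n else 0.0)
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical positive-probability scores (not from the study):
auc = roc_auc([0.9, 0.8, 0.4], [0.7, 0.3, 0.2])
print(round(auc, 3))  # 8 of 9 pairs ranked correctly ≈ 0.889
```

This pairwise form is O(n·m) and meant only to show the definition; practical evaluation uses a sort-based computation.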
Affiliation(s)
- Mitsunori Higuchi
- Department of Thoracic Surgery, Aizu Medical Center, Fukushima Medical University
| | - Takeshi Nagata
- University of Tsukuba School of Integrative and Global Majors
- Mizuho Research and Technologies, Ltd.
| | - Atsushi Sato
- Fukushima Preservative Service Association of Health
| | - Junzo Suzuki
- Fukushima Preservative Service Association of Health
| | - Takuro Saito
- Department of Surgery, Aizu Medical Center, Fukushima Medical University
| | - Hiroyuki Suzuki
- Department of Chest Surgery, Fukushima Medical University School of Medicine
| |
7
Hwang SH, Shin HJ, Kim EK, Lee EH, Lee M. Clinical outcomes and actual consequence of lung nodules incidentally detected on chest radiographs by artificial intelligence. Sci Rep 2023; 13:19732. [PMID: 37957283 PMCID: PMC10643548 DOI: 10.1038/s41598-023-47194-6] [Received: 06/30/2023] [Accepted: 11/10/2023] [Indexed: 11/15/2023] Open
Abstract
This study evaluated how often clinically significant lung nodules were detected unexpectedly on chest radiographs (CXR) by artificial intelligence (AI)-based detection software, and whether co-existing findings can aid in the differential diagnosis of lung nodules. Patients (> 18 years old) with AI-detected lung nodules at their first visit from March 2021 to February 2022, except for those in the pulmonology or thoracic surgery departments, were retrospectively included. Three radiologists categorized nodules into malignancy, active inflammation, post-inflammatory sequelae, or "other" groups. Characteristics of the nodules and abnormality scores of co-existing lung lesions were compared. Approximately 1% of patients (152/14,563) had unexpected lung nodules. Among 73 patients with follow-up exams, 69.9% had true-positive nodules. Increased abnormality scores for nodules were significantly associated with malignancy (odds ratio [OR] 1.076, P = 0.001). Increased abnormality scores for consolidation (OR 1.033, P = 0.040) and pleural effusion (OR 1.025, P = 0.041) were significantly correlated with active inflammation-type nodules. Abnormality scores for fibrosis (OR 1.036, P = 0.013) and nodules (OR 0.940, P = 0.001) were significantly associated with post-inflammatory sequelae. AI-based lesion-detection software for CXRs in daily practice can help identify clinically significant incidental lung nodules, and referencing accompanying lung lesions may help classify the nodule.
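As an aid to reading per-point odds ratios like those above: an OR per one-point increase in a score compounds multiplicatively over larger score differences. A quick illustrative calculation (the 1.076 figure is from the abstract; the 10-point difference is a hypothetical example):

```python
import math

# An odds ratio of 1.076 per one-point increase in abnormality score implies
# multiplicative scaling of the odds with score.
or_per_point = 1.076
or_10_points = or_per_point ** 10               # odds multiplier for a 10-point higher score
log_odds_change = 10 * math.log(or_per_point)   # the same change on the logit scale

print(round(or_10_points, 2))  # ≈ 2.08: a 10-point higher score roughly doubles the odds
```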
Affiliation(s)
- Shin Hye Hwang
- Department of Radiology, Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yongin Severance Hospital, Yonsei University College of Medicine, 363, Dongbaekjukjeon-daero, Giheung-gu, Yongin-si, Gyeonggi-do, 16995, Republic of Korea
| | - Hyun Joo Shin
- Department of Radiology, Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yongin Severance Hospital, Yonsei University College of Medicine, 363, Dongbaekjukjeon-daero, Giheung-gu, Yongin-si, Gyeonggi-do, 16995, Republic of Korea
- Center for Digital Health, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin-si, Gyeonggi‑do, Republic of Korea
| | - Eun-Kyung Kim
- Department of Radiology, Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yongin Severance Hospital, Yonsei University College of Medicine, 363, Dongbaekjukjeon-daero, Giheung-gu, Yongin-si, Gyeonggi-do, 16995, Republic of Korea
| | - Eun Hye Lee
- Center for Digital Health, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin-si, Gyeonggi‑do, Republic of Korea
- Division of Pulmonology, Allergy and Critical Care Medicine, Department of Internal Medicine, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin-si, Gyeonggi-do, Republic of Korea
| | - Minwook Lee
- Department of Radiology, Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yongin Severance Hospital, Yonsei University College of Medicine, 363, Dongbaekjukjeon-daero, Giheung-gu, Yongin-si, Gyeonggi-do, 16995, Republic of Korea.
| |
8
Li MD, Little BP. Appropriate Reliance on Artificial Intelligence in Radiology Education. J Am Coll Radiol 2023; 20:1126-1130. [PMID: 37392983 DOI: 10.1016/j.jacr.2023.04.019] [Received: 02/06/2023] [Revised: 03/20/2023] [Accepted: 04/06/2023] [Indexed: 07/03/2023]
Abstract
Users of artificial intelligence (AI) can become overreliant on AI, negatively affecting the performance of human-AI teams. For a future in which radiologists use interpretive AI tools routinely in clinical practice, radiology education will need to evolve to provide radiologists with the skills to use AI appropriately and wisely. In this work, we examine how overreliance on AI may develop in radiology trainees and explore how this problem can be mitigated, including through the use of AI-augmented education. Radiology trainees will still need to develop the perceptual skills and mastery of knowledge fundamental to radiology to use AI safely. We propose a framework for radiology trainees to use AI tools with appropriate reliance, drawing on lessons from human-AI interactions research.
Affiliation(s)
- Matthew D Li
- Department of Radiology and Diagnostic Imaging, Faculty of Medicine & Dentistry, University of Alberta, Edmonton, Alberta, Canada.
| | - Brent P Little
- Mayo Clinic College of Medicine and Science, Department of Radiology, Division of Cardiothoracic Imaging, Mayo Clinic Florida, Florida; Committee Member, ACR Appropriateness Criteria Thoracic Imaging
| |
9
Dohál M, Porvazník I, Solovič I, Mokrý J. Advancing tuberculosis management: the role of predictive, preventive, and personalized medicine. Front Microbiol 2023; 14:1225438. [PMID: 37860132 PMCID: PMC10582268 DOI: 10.3389/fmicb.2023.1225438] [Received: 05/19/2023] [Accepted: 09/22/2023] [Indexed: 10/21/2023] Open
Abstract
Tuberculosis is a major global health issue, with approximately 10 million people falling ill and 1.4 million dying yearly. One of the most significant challenges to public health is the emergence of drug-resistant tuberculosis. For the last half-century, treating tuberculosis has adhered to a uniform management strategy in most patients. However, treatment ineffectiveness in some individuals with pulmonary tuberculosis presents a major challenge to the global tuberculosis control initiative. Unfavorable outcomes of tuberculosis treatment (including mortality, treatment failure, loss to follow-up, and unevaluated cases) may result in increased transmission of tuberculosis and the emergence of drug-resistant strains. Treatment failure may occur due to drug-resistant strains, non-adherence to medication, inadequate absorption of drugs, or low-quality healthcare. It is important to identify the underlying cause of treatment failure and adjust the treatment accordingly. This is where approaches such as artificial intelligence, genetic screening, and whole genome sequencing can play a critical role. In this review, we suggest a set of particular clinical applications of these approaches, which might have the potential to influence decisions regarding the clinical management of tuberculosis patients.
Affiliation(s)
- Matúš Dohál
- Biomedical Centre Martin, Jessenius Faculty of Medicine in Martin, Comenius University in Bratislava, Martin, Slovakia
| | - Igor Porvazník
- National Institute of Tuberculosis, Lung Diseases and Thoracic Surgery, Vyšné Hágy, Slovakia
- Faculty of Health, Catholic University in Ružomberok, Ružomberok, Slovakia
| | - Ivan Solovič
- National Institute of Tuberculosis, Lung Diseases and Thoracic Surgery, Vyšné Hágy, Slovakia
- Faculty of Health, Catholic University in Ružomberok, Ružomberok, Slovakia
| | - Juraj Mokrý
- Department of Pharmacology, Jessenius Faculty of Medicine in Martin, Comenius University in Bratislava, Martin, Slovakia
| |
10
Patel K, Huang S, Rashid A, Varghese B, Gholamrezanezhad A. A Narrative Review of the Use of Artificial Intelligence in Breast, Lung, and Prostate Cancer. Life (Basel) 2023; 13:2011. [PMID: 37895393 PMCID: PMC10608739 DOI: 10.3390/life13102011] [Received: 08/27/2023] [Revised: 09/30/2023] [Accepted: 09/30/2023] [Indexed: 10/29/2023] Open
Abstract
Artificial intelligence (AI) has been an important topic within radiology. Currently, AI is used clinically in lesion-detection systems. However, a number of recent studies have demonstrated the increased value of neural networks in radiology. With an increasing number of screening requirements for cancers, this review aims to assess the accuracy of the numerous AI models used in the detection and diagnosis of breast, lung, and prostate cancers. This study summarizes pertinent findings from the reviewed articles and analyzes their relevance to clinical radiology. It found that while AI shows continual improvement in radiology, AI alone does not yet surpass the effectiveness of a radiologist. Additionally, there are multiple approaches to integrating AI into a radiologist's workflow.
Affiliation(s)
- Kishan Patel
- Department of Radiology, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA (A.G.)
| | - Sherry Huang
- Department of Urology, University of Pittsburgh Medical Center, Pittsburgh, PA 15213, USA
| | - Arnav Rashid
- Department of Biological Sciences, Dana and David Dornsife College of Letters, Arts and Sciences, University of Southern California, Los Angeles, CA 90089, USA
| | - Bino Varghese
- Department of Radiology, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA (A.G.)
| | - Ali Gholamrezanezhad
- Department of Radiology, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA (A.G.)
| |
11
Ueno M, Yoshida K, Takamatsu A, Kobayashi T, Aoki T, Gabata T. Deep learning-based automatic detection for pulmonary nodules on chest radiographs: The relationship with background lung condition, nodule characteristics, and location. Eur J Radiol 2023; 166:111002. [PMID: 37499478 DOI: 10.1016/j.ejrad.2023.111002] [Received: 04/03/2023] [Revised: 07/11/2023] [Accepted: 07/20/2023] [Indexed: 07/29/2023]
Abstract
PURPOSE Computer-aided diagnosis (CAD), which assists in the interpretation of chest radiographs, is becoming common. However, few studies have evaluated the benefits and pitfalls of CAD in the real world. This study aimed to evaluate the independent performance of commercially available deep learning-based automatic detection (DLAD) software, EIRL Chest X-ray Lung Nodule, in a cohort that included patients with background pulmonary abnormalities often encountered in clinical situations. METHODS Patients with clinically suspected lung cancer for whom chest radiography was performed within a month before or after a CT scan between June 2020 and May 2022 at our institution were enrolled. The reference standard was created using a bounding box annotated by two radiologists with reference to the CT. The visibility score, characteristics, and location of the pulmonary nodules, the presence of overlapping structures or pulmonary disease, and the background lung score were manually determined. RESULTS We included 388 patients. The DLAD software detected 222 of the 322 nodules visible on manual evaluation, with a sensitivity of 0.689 and a false-positive rate of 0.168. The detectability of the DLAD software was significantly lower for small nodules, subsolid nodules, and nodules with overlapping structures. The visibility score and the sensitivity of detection by the DLAD software were positively correlated. The relationship between the background lung score and detection by the DLAD software was unclear. CONCLUSION The standalone performance of DLAD in detecting pulmonary nodules exhibited a sensitivity of 0.689 and a false-positive rate of 0.168. Understanding the characteristics of DLAD is crucial when interpreting chest radiographs with its assistance.
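The headline sensitivity above is simply the detected fraction of reference-standard nodules (222 of 322); the sketch below shows that arithmetic only and is not code from the study (the 0.168 false-positive rate has a different, per-radiograph denominator and is not recomputed here).

```python
def sensitivity(true_positives: int, reference_nodules: int) -> float:
    """Fraction of reference-standard nodules the software detected."""
    return true_positives / reference_nodules

# DLAD detected 222 of the 322 nodules visible on manual evaluation.
print(round(sensitivity(222, 322), 3))  # 0.689, as reported
```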
Affiliation(s)
- Midori Ueno
- Department of Radiology, Kanazawa University Graduate School of Medical Science, 1-13 Takaramachi, Kanazawa City, Ishikawa Prefecture 920-8641, Japan; Department of Radiology, University of Occupational and Environmental Health School of Medicine, 1-1 Iseigaoka, Kitakyushu City, Fukuoka Prefecture 807-8555, Japan
- Kotaro Yoshida
- Department of Radiology, Kanazawa University Graduate School of Medical Science, 1-13 Takaramachi, Kanazawa City, Ishikawa Prefecture 920-8641, Japan
- Atsushi Takamatsu
- Department of Radiology, Kanazawa University Graduate School of Medical Science, 1-13 Takaramachi, Kanazawa City, Ishikawa Prefecture 920-8641, Japan
- Takeshi Kobayashi
- Department of Diagnostic and Interventional Radiology, Ishikawa Prefectural Central Hospital, 1-2, Kuratsuki-Higashi, Kanazawa City, Ishikawa Prefecture 920-8530, Japan
- Takatoshi Aoki
- Department of Radiology, University of Occupational and Environmental Health School of Medicine, 1-1 Iseigaoka, Kitakyushu City, Fukuoka Prefecture 807-8555, Japan
- Toshifumi Gabata
- Department of Radiology, Kanazawa University Graduate School of Medical Science, 1-13 Takaramachi, Kanazawa City, Ishikawa Prefecture 920-8641, Japan
12
Jeong D, Jeong W, Lee JH, Park SY. Use of Automated Machine Learning for Classifying Hemoperitoneum on Ultrasonographic Images of Morrison's Pouch: A Multicenter Retrospective Study. J Clin Med 2023; 12:4043. [PMID: 37373736 PMCID: PMC10298902 DOI: 10.3390/jcm12124043] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 03/08/2023] [Revised: 06/09/2023] [Accepted: 06/11/2023] [Indexed: 06/29/2023]
Abstract
This study evaluated automated machine learning (AutoML) in classifying the presence or absence of hemoperitoneum in ultrasonography (USG) images of Morrison's pouch. In this multicenter, retrospective study, 864 trauma patients from trauma and emergency medical centers in South Korea were included. In all, 2200 USG images (1100 hemoperitoneum and 1100 normal) were collected. Of these, 1800 images were used for training and 200 for the internal validation of AutoML. External validation was performed using 100 hemoperitoneum images and 100 normal images collected separately from a trauma center and not included in the training and internal validation sets. Google's open-source AutoML was used to train the algorithm to classify hemoperitoneum in USG images, followed by internal and external validation. In the internal validation, the sensitivity, specificity, and area under the receiver operating characteristic curve (AUROC) were 95%, 99%, and 0.97, respectively. In the external validation, the sensitivity, specificity, and AUROC were 94%, 99%, and 0.97, respectively. The performances of AutoML in the internal and external validation were not statistically different (p = 0.78). A publicly available, general-purpose AutoML can accurately classify the presence or absence of hemoperitoneum in USG images of Morrison's pouch in real-world trauma patients.
Affiliation(s)
- Dongkil Jeong
- Department of Emergency Medicine, College of Medicine, Soonchunhyang University, Cheonan 31151, Republic of Korea
- Wonjoon Jeong
- Department of Emergency Medicine, School of Medicine, Chungnam National University, Daejeon 35015, Republic of Korea
- Ji Han Lee
- Division of Emergency Medicine, Department of Medicine, The Catholic University of Korea, Seoul 11765, Republic of Korea
- Sin-Youl Park
- Department of Emergency Medicine, College of Medicine, Yeungnam University, Daegu 42415, Republic of Korea
13
Chutivanidchayakul F, Suwatanapongched T, Petnak T. Clinical and chest radiographic features of missed lung cancer and their association with patient outcomes. Clin Imaging 2023; 99:73-81. [PMID: 37121220 DOI: 10.1016/j.clinimag.2023.03.017] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 12/14/2022] [Revised: 03/10/2023] [Accepted: 03/23/2023] [Indexed: 05/02/2023]
Abstract
PURPOSE To examine clinical and chest radiographic features of missed lung cancer (MLC) and explore their association with patient outcomes. METHODS We retrospectively reviewed chest radiographs obtained at least six months before lung cancer (LC) diagnosis in 95 patients to identify the first positive chest radiograph showing MLC. We assessed the chest radiographic features of MLC and their association with patient outcomes. RESULTS Seventy-five (78.9%) patients (39 men, 36 women; mean age, 64.5 ± 10.5 years) had MLC. The median diagnostic delay was 31.3 months (6.6-128.0 months). The median MLC size was 16 mm (5-57 mm), and 54.7%, 68.0%, and 74.7% of MLCs were in the left lung, the middle/lower zones, and the outer two-thirds of the lung, respectively. MLCs exhibited a round/oval shape, a partly/poorly defined margin, an irregular/spiculated border, a density less than that of the aortic knob, and anatomical superimposition in 57.3%, 77.3%, 61.3%, 85.3%, and 88.0% of cases, respectively. Thirty-five (46.7%) patients had stage III + IV LC at diagnosis. Thirty-one (41.3%) patients died. MLC in the inner one-third of the lung, exhibiting a density equal to or greater than that of the aortic knob, or superimposed by midline structures was significantly associated with stage III + IV LC at diagnosis. The 3-year all-cause mortality significantly increased when MLC was in the upper zone, superimposed by pulmonary vessels, superimposed by pulmonary vessels plus ribs, or superimposed by pulmonary vessels plus located in the inner one-third of the lung. CONCLUSION MLCs with certain radiographic features pertaining to location, density, and superimposed structures were found to portend worse outcomes.
Affiliation(s)
- Fonthip Chutivanidchayakul
- Division of Diagnostic Radiology, Department of Diagnostic and Therapeutic Radiology, Faculty of Medicine Ramathibodi Hospital, Mahidol University, Bangkok, Thailand
- Thitiporn Suwatanapongched
- Division of Diagnostic Radiology, Department of Diagnostic and Therapeutic Radiology, Faculty of Medicine Ramathibodi Hospital, Mahidol University, Bangkok, Thailand
- Tananchai Petnak
- Division of Pulmonary and Pulmonary Critical Care Medicine, Department of Medicine, Faculty of Medicine Ramathibodi Hospital, Mahidol University, Bangkok, Thailand
14
Xue P, Si M, Qin D, Wei B, Seery S, Ye Z, Chen M, Wang S, Song C, Zhang B, Ding M, Zhang W, Bai A, Yan H, Dang L, Zhao Y, Rezhake R, Zhang S, Qiao Y, Qu Y, Jiang Y. Unassisted Clinicians Versus Deep Learning-Assisted Clinicians in Image-Based Cancer Diagnostics: Systematic Review With Meta-analysis. J Med Internet Res 2023; 25:e43832. [PMID: 36862499 PMCID: PMC10020907 DOI: 10.2196/43832] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 10/26/2022] [Revised: 01/19/2023] [Accepted: 02/13/2023] [Indexed: 02/16/2023]
Abstract
BACKGROUND A number of publications have demonstrated that deep learning (DL) algorithms matched or outperformed clinicians in image-based cancer diagnostics, but these algorithms are frequently considered opponents rather than partners. Despite the great potential of the clinician-in-the-loop DL approach, no study has systematically quantified the diagnostic accuracy of clinicians with and without the assistance of DL in image-based cancer identification. OBJECTIVE We systematically quantified the diagnostic accuracy of clinicians with and without the assistance of DL in image-based cancer identification. METHODS PubMed, Embase, IEEE Xplore, and the Cochrane Library were searched for studies published between January 1, 2012, and December 7, 2021. Any type of study design was permitted that focused on comparing unassisted clinicians and DL-assisted clinicians in cancer identification using medical imaging. Studies using medical waveform-data graphics material and those investigating image segmentation rather than classification were excluded. Studies providing binary diagnostic accuracy data and contingency tables were included for further meta-analysis. Two subgroups were defined and analyzed, including cancer type and imaging modality. RESULTS In total, 9796 studies were identified, of which 48 were deemed eligible for systematic review. Twenty-five of these studies made comparisons between unassisted clinicians and DL-assisted clinicians and provided sufficient data for statistical synthesis. We found a pooled sensitivity of 83% (95% CI 80%-86%) for unassisted clinicians and 88% (95% CI 86%-90%) for DL-assisted clinicians. Pooled specificity was 86% (95% CI 83%-88%) for unassisted clinicians and 88% (95% CI 85%-90%) for DL-assisted clinicians. The pooled sensitivity and specificity values for DL-assisted clinicians were higher than those for unassisted clinicians, at ratios of 1.07 (95% CI 1.05-1.09) and 1.03 (95% CI 1.02-1.05), respectively. Similar diagnostic performance by DL-assisted clinicians was also observed across the predefined subgroups. CONCLUSIONS The diagnostic performance of DL-assisted clinicians appears better than that of unassisted clinicians in image-based cancer identification. However, caution should be exercised, because the evidence provided in the reviewed studies does not cover all the minutiae involved in real-world clinical practice. Combining qualitative insights from clinical practice with data-science approaches may improve DL-assisted practice, although further research is required. TRIAL REGISTRATION PROSPERO CRD42021281372; https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=281372.
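For intuition only: a pooled sensitivity is a sensitivity computed over aggregated per-study counts. The sketch below uses hypothetical (TP, FN) tables and naive aggregation; the review itself used formal meta-analytic pooling with 95% CIs, which this simple calculation does not reproduce.

```python
def pooled_sensitivity(tables: list[tuple[int, int]]) -> float:
    """Naive pooled sensitivity from per-study (true-positive, false-negative) counts."""
    tp = sum(t for t, _ in tables)
    fn = sum(f for _, f in tables)
    return tp / (tp + fn)

# Hypothetical counts for three studies:
print(round(pooled_sensitivity([(80, 20), (45, 5), (120, 30)]), 3))  # 0.817
```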
Affiliation(s)
- Peng Xue, Mingyu Si, Dongxu Qin, Bingrui Wei, Zichen Ye, Mingyang Chen, Cheng Song, Bo Zhang, Ming Ding, Wenling Zhang, Anying Bai, Huijiao Yan, Yimin Qu, Yu Jiang
- Department of Epidemiology and Biostatistics, School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Samuel Seery
- Faculty of Health and Medicine, Division of Health Research, Lancaster University, Lancaster, United Kingdom
- Sumeng Wang
- Department of Cancer Epidemiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Le Dang
- Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Yuqian Zhao
- Sichuan Cancer Hospital & Institute, Sichuan Cancer Center, School of Medicine, University of Electronic Science & Technology of China, Sichuan, China
- Remila Rezhake
- Affiliated Cancer Hospital, The 3rd Affiliated Teaching Hospital of Xinjiang Medical University, Xinjiang, China
- Shaokai Zhang
- Henan Cancer Hospital, Affiliated Cancer Hospital of Zhengzhou University, Henan, China
- Youlin Qiao
- Center for Global Health, School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
15
Nam JG, Hwang EJ, Kim J, Park N, Lee EH, Kim HJ, Nam M, Lee JH, Park CM, Goo JM. AI Improves Nodule Detection on Chest Radiographs in a Health Screening Population: A Randomized Controlled Trial. Radiology 2023; 307:e221894. [PMID: 36749213 DOI: 10.1148/radiol.221894] [Citation(s) in RCA: 24] [Impact Index Per Article: 24.0] [Indexed: 02/08/2023]
Abstract
Background The impact of artificial intelligence (AI)-based computer-aided detection (CAD) software has not been prospectively explored in real-world populations. Purpose To investigate whether commercial AI-based CAD software could improve the detection rate of actionable lung nodules on chest radiographs in participants undergoing health checkups. Materials and Methods In this single-center, pragmatic, open-label randomized controlled trial, participants who underwent chest radiography between July 2020 and December 2021 in a health screening center were enrolled and randomized into intervention (AI group) and control (non-AI group) arms. One of three designated radiologists with 13-36 years of experience interpreted each radiograph, referring to the AI-based CAD results for the AI group. The primary outcome was the detection rate, that is, the number of true-positive radiographs divided by the total number of radiographs, of actionable lung nodules confirmed on CT scans obtained within 3 months. Actionable nodules were defined as solid nodules larger than 8 mm or subsolid nodules with a solid portion larger than 6 mm (Lung Imaging Reporting and Data System, or Lung-RADS, category 4). Secondary outcomes included the positive-report rate, sensitivity, false-referral rate, and malignant lung nodule detection rate. Clinical outcomes were compared between the two groups using univariable logistic regression analyses. Results A total of 10 476 participants (median age, 59 years [IQR, 50-66 years]; 5121 men) were randomized to an AI group (n = 5238) or non-AI group (n = 5238). The trial met the predefined primary outcome, demonstrating an improved detection rate of actionable nodules in the AI group compared with the non-AI group (0.59% [31 of 5238 participants] vs 0.25% [13 of 5238 participants], respectively; odds ratio, 2.4; 95% CI: 1.3, 4.7; P = .008). 
The detection rate for malignant lung nodules was higher in the AI group than in the non-AI group (0.15% [eight of 5238 participants] vs 0.0% [0 of 5238 participants], respectively; P = .008). The AI and non-AI groups showed similar false-referral rates (45.9% [56 of 122 participants] vs 56.0% [56 of 100 participants], respectively; P = .14) and positive-report rates (2.3% [122 of 5238 participants] vs 1.9% [100 of 5238 participants]; P = .14). Conclusion In health checkup participants, artificial intelligence-based software improved the detection of actionable lung nodules on chest radiographs. © RSNA, 2023 Supplemental material is available for this article. See also the editorial by Auffermann in this issue.
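The trial's reported odds ratio of 2.4 can be recovered from the raw detection counts; the generic sketch below shows that arithmetic only (the confidence interval and P value come from the trial's logistic regression analyses, not from this calculation).

```python
def odds_ratio(events_a: int, n_a: int, events_b: int, n_b: int) -> float:
    """Odds ratio of an event in group A relative to group B."""
    odds_a = events_a / (n_a - events_a)
    odds_b = events_b / (n_b - events_b)
    return odds_a / odds_b

# AI arm: 31 of 5238 actionable-nodule detections; non-AI arm: 13 of 5238.
print(round(odds_ratio(31, 5238, 13, 5238), 1))  # 2.4, matching the report
```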
Affiliation(s)
- Ju Gang Nam, Eui Jin Hwang, Jayoun Kim, Nanhee Park, Eun Hee Lee, Hyun Jin Kim, Miyeon Nam, Jong Hyuk Lee, Chang Min Park, Jin Mo Goo
- From the Department of Radiology (J.G.N., E.J.H., J.H.L., C.M.P., J.M.G.), Artificial Intelligence Collaborative Network (J.G.N.), Medical Research Collaborating Center (J.K., N.P.), and Center for Health Promotion and Optimal Aging (E.H.L., M.N.), Seoul National University Hospital and College of Medicine, 101 Daehak-ro, Jongno-gu, Seoul 03080, Republic of Korea; Department of Radiology, Ewha Womans University Seoul Hospital, Seoul, Republic of Korea (H.J.K.); Institute of Medical and Biological Engineering, Seoul National University Medical Research Center, Seoul, Republic of Korea (C.M.P.); and Cancer Research Institute, Seoul National University, Seoul, Republic of Korea (J.M.G.)
16
Milam ME, Koo CW. The current status and future of FDA-approved artificial intelligence tools in chest radiology in the United States. Clin Radiol 2023; 78:115-122. [PMID: 36180271 DOI: 10.1016/j.crad.2022.08.135] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Received: 06/29/2022] [Accepted: 08/19/2022] [Indexed: 01/18/2023]
Abstract
Artificial intelligence (AI) is becoming more widespread within radiology. Capabilities that AI algorithms currently provide include detection, segmentation, classification, and quantification of pathological findings. AI software has created challenges for the traditional United States Food and Drug Administration (FDA) approval process for medical devices, given its ability to evolve over time with incremental data input. Currently, there are 190 FDA-approved radiology AI-based software devices, 42 of which pertain specifically to thoracic radiology. The majority of these algorithms are approved for the detection and/or analysis of pulmonary nodules, for monitoring the placement of endotracheal tubes and indwelling catheters, for the detection of emergent findings, and for the assessment of pulmonary parenchyma; however, as the technology evolves, there are many other potential applications to explore, such as the evaluation of non-idiopathic pulmonary fibrosis interstitial lung diseases, the synthesis of imaging, clinical, and/or laboratory data to yield comprehensive diagnoses, and survival or prognosis prediction for certain pathologies. With increasing physician and developer engagement, transparency, and frequent communication between developers and regulatory agencies such as the FDA, AI medical devices will provide a critical supplement to patient management and ultimately enhance physicians' ability to improve patient care.
Affiliation(s)
- M E Milam
- Department of Radiology, Mayo Clinic, Rochester, MN, USA
- C W Koo
- Department of Radiology, Mayo Clinic, Rochester, MN, USA
17
Dasegowda G, Kalra MK, Abi-Ghanem AS, Arru CD, Bernardo M, Saba L, Segota D, Tabrizi Z, Viswamitra S, Kaviani P, Karout L, Dreyer KJ. Suboptimal Chest Radiography and Artificial Intelligence: The Problem and the Solution. Diagnostics (Basel) 2023; 13:412. [PMID: 36766516 PMCID: PMC9914850 DOI: 10.3390/diagnostics13030412] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 11/22/2022] [Revised: 01/20/2023] [Accepted: 01/21/2023] [Indexed: 01/25/2023]
Abstract
Chest radiographs (CXRs) are the most frequently performed imaging tests and rank high among radiographic examinations with suboptimal quality and high rejection rates. Suboptimal CXRs can cause delays in patient care and pitfalls in radiographic interpretation, given their ubiquitous use in the diagnosis and management of acute and chronic ailments. They can also compound interpretive difficulty and lead to high inter-radiologist variation in CXR interpretation. While advances in radiography, including the transition to computed and digital radiography, have reduced the prevalence of suboptimal examinations, the problem persists. Advances in machine learning and artificial intelligence (AI), particularly in the acquisition, triage, and interpretation of CXRs, could offer a plausible solution. We review the literature on suboptimal CXRs and the potential use of AI to help reduce their prevalence.
Affiliation(s)
- Giridhar Dasegowda, Mannudeep K. Kalra, Parisa Kaviani, Lina Karout, Keith J. Dreyer
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, USA
- Mass General Brigham Data Science Office (DSO), Boston, MA 02114, USA
- Alain S. Abi-Ghanem
- Department of Diagnostic Radiology, American University of Beirut Medical Center, Beirut 11-0236, Lebanon
- Chiara D. Arru
- Department of Radiology, Azienda Ospedaliera G. Brotzu, 09134 Cagliari, Italy
- Monica Bernardo
- Department of Radiology, Hospital Miguel Soeiro—UNIMED, Sorocaba 18052-210, Brazil
- Department of Radiology, Pontificia University Catholic of São Paulo, São Paulo 05014-901, Brazil
- Luca Saba
- Department of Radiology, Azienda Ospedaliera Universitaria di Cagliari, 09123 Cagliari, Italy
- Doris Segota
- Medical Physics and Radiation Protection Department, Clinical Hospital Centre Rijeka, 51000 Rijeka, Croatia
- Zhale Tabrizi
- Radiology Department, Iran University of Medical Sciences, Tehran 14535, Iran
- Sanjaya Viswamitra
- Department of Radiodiagnosis, Sri Sathya Sai Institute of Higher Medical Sciences, Whitefield 560066, India
18
van Beek EJR, Ahn JS, Kim MJ, Murchison JT. Validation study of machine-learning chest radiograph software in primary and emergency medicine. Clin Radiol 2023; 78:1-7. [PMID: 36171164 DOI: 10.1016/j.crad.2022.08.129] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Received: 05/24/2022] [Revised: 07/20/2022] [Accepted: 08/08/2022] [Indexed: 01/07/2023]
Abstract
AIM To evaluate the performance of a machine-learning-based algorithm for chest radiographs (CXRs), applied to a consecutive cohort of historical clinical cases, in comparison to expert chest radiologists. MATERIALS AND METHODS The study comprised 1,960 consecutive CXRs from primary care referrals and the emergency department (992 and 968 cases, respectively), obtained in 2015 at a UK hospital. Two chest radiologists, each with >20 years of experience, read all studies in consensus to serve as a reference standard. A chest artificial intelligence (AI) algorithm, Lunit INSIGHT CXR, was run on the CXRs, and its results were correlated with those of the expert readers. The area under the receiver operating characteristic curve (AUROC) was calculated for normal studies and 10 common findings: atelectasis, fibrosis, calcification, consolidation, lung nodules, cardiomegaly, mediastinal widening, pleural effusion, pneumothorax, and pneumoperitoneum. RESULTS The ground-truth annotation identified 398 primary care and 578 emergency department datasets containing pathologies. The AI algorithm showed AUROCs of 0.881-0.999 in the emergency department dataset and 0.881-0.998 in the primary care dataset. The AUROC for each finding did not differ between the primary care and emergency department datasets, except for pleural effusion (0.954 versus 0.988, p<0.001). CONCLUSIONS The AI algorithm can accurately and consistently differentiate normal CXRs from those with major thoracic abnormalities in both acute and non-acute settings, and can serve as a triage tool.
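The per-finding AUROC values reported above can be illustrated with a toy computation (all labels and scores below are hypothetical, not the study's data): AUROC equals the probability that a randomly chosen abnormal CXR receives a higher AI score than a randomly chosen normal one.

```python
# Illustrative only (hypothetical data): AUROC via the rank/Mann-Whitney
# definition, i.e. the fraction of positive-negative pairs ranked correctly.
def auroc(labels, scores):
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y = [0, 0, 0, 1, 1, 0, 1, 1, 0, 1]                              # reference labels
s = [0.05, 0.2, 0.1, 0.9, 0.7, 0.65, 0.8, 0.95, 0.3, 0.6]       # AI scores
print(auroc(y, s))  # 0.96 (one discordant pair out of 25)
```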
Affiliation(s)
- E J R van Beek: Edinburgh Imaging, Queen's Medical Research Institute, University of Edinburgh, Edinburgh, UK; Department of Radiology, Royal Infirmary of Edinburgh, Edinburgh, UK
- J T Murchison: Department of Radiology, Royal Infirmary of Edinburgh, Edinburgh, UK
19
Kwak SH, Kim EK, Kim MH, Lee EH, Shin HJ. Incidentally found resectable lung cancer with the usage of artificial intelligence on chest radiographs. PLoS One 2023; 18:e0281690. [PMID: 36897865] [PMCID: PMC10004566] [DOI: 10.1371/journal.pone.0281690]
Abstract
PURPOSE Detection of early lung cancer using chest radiographs remains challenging. We aimed to highlight the benefit of using artificial intelligence (AI) in chest radiographs with regard to its role in the unexpected detection of resectable early lung cancer. MATERIALS AND METHODS Patients with pathologically proven resectable lung cancer from March 2020 to February 2022 were retrospectively analyzed. Among them, we included patients with incidentally detected resectable lung cancer. Because commercially available AI-based lesion detection software was integrated for all chest radiographs in our hospital, we reviewed the clinical process of detecting lung cancer using AI in chest radiographs. RESULTS Among the 75 patients with pathologically proven resectable lung cancer, 13 (17.3%) had incidentally discovered lung cancer, with a median size of 2.6 cm. Eight patients underwent chest radiography for the evaluation of extrapulmonary diseases, while five underwent radiography in preparation for an operation or procedure involving other body parts. All lesions were detected as nodules by the AI-based software, and the median abnormality score for the nodules was 78%. Eight patients (61.5%) consulted a pulmonologist promptly, on the same day the chest radiograph was taken and before they received the radiologist's official report. Total and invasive sizes of the part-solid nodules were 2.3-3.3 cm and 0.75-2.2 cm, respectively. CONCLUSION This study demonstrates actual cases of unexpectedly detected resectable early lung cancer using AI-based lesion detection software. Our results suggest that AI is beneficial for the incidental detection of early lung cancer on chest radiographs.
Affiliation(s)
- Se Hyun Kwak: Division of Pulmonology, Department of Internal Medicine, Allergy and Critical Care Medicine, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin-si, Gyeonggi-do, Republic of Korea
- Eun-Kyung Kim: Department of Radiology, Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin-si, Gyeonggi-do, Republic of Korea
- Myung Hyun Kim: Department of Radiology, Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin-si, Gyeonggi-do, Republic of Korea
- Eun Hye Lee: Division of Pulmonology, Department of Internal Medicine, Allergy and Critical Care Medicine, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin-si, Gyeonggi-do, Republic of Korea; Center for Digital Health, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin-si, Gyeonggi-do, Republic of Korea
- * E-mail: (EHL); (HJS)
- Hyun Joo Shin: Department of Radiology, Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin-si, Gyeonggi-do, Republic of Korea; Center for Digital Health, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin-si, Gyeonggi-do, Republic of Korea
- * E-mail: (EHL); (HJS)
20
de Margerie-Mellon C, Chassagnon G. Artificial intelligence: A critical review of applications for lung nodule and lung cancer. Diagn Interv Imaging 2023; 104:11-17. [PMID: 36513593] [DOI: 10.1016/j.diii.2022.11.007]
Abstract
Artificial intelligence (AI) is a broad concept that usually refers to computer programs that can learn from data and perform certain specific tasks. In recent years, the growth of deep learning, a successful technique for computer vision tasks that does not require explicit programming, coupled with the availability of large imaging databases, has fostered the development of multiple applications in the medical imaging field, especially for lung nodules and lung cancer, mostly through convolutional neural networks (CNNs). Some of the first applications of AI in this field were dedicated to the automated detection of lung nodules on X-ray and computed tomography (CT) examinations, with performances now reaching or exceeding those of radiologists. For lung nodule segmentation, CNN-based algorithms applied to CT images show excellent spatial overlap with manual segmentation, even for irregular and ground-glass nodules. A third application of AI is the classification of lung nodules as malignant or benign, which could limit the number of follow-up CT examinations for less suspicious lesions. Several algorithms have demonstrated excellent capabilities for predicting the malignancy risk when a nodule is discovered. These applications of AI for lung nodules are particularly appealing in the context of lung cancer screening. In the field of lung cancer, AI tools applied to lung imaging have been investigated for distinct aims. First, they could play a role in the non-invasive characterization of tumors, especially for histological subtype and somatic mutation prediction, with a potential therapeutic impact. Additionally, they could help predict patient prognosis in combination with clinical data.
Despite these encouraging perspectives, clinical implementation of AI tools is only beginning, because published studies often lack generalizability, the tools' inner workings remain opaque, and data on their impact on radiologists' decisions and on patient outcomes are limited. Radiologists must be active participants in evaluating AI tools, as such tools could support their daily work and free them for tasks with higher added value.
Affiliation(s)
- Constance de Margerie-Mellon: Université Paris Cité, Laboratory of Imaging Biomarkers, Center for Research on Inflammation, UMR 1149, INSERM, 75018 Paris, France; Department of Radiology, Hôpital Saint-Louis APHP, 75010 Paris, France
- Guillaume Chassagnon: Université Paris Cité, Faculté de Médecine, 75006 Paris, France; Department of Radiology, Hôpital Cochin APHP, 75014 Paris, France
21
Impact of Artificial Intelligence Assistance on Chest CT Interpretation Times: A Prospective Randomized Study. AJR Am J Roentgenol 2022; 219:743-751. [PMID: 35703413] [DOI: 10.2214/ajr.22.27598]
Abstract
BACKGROUND. Deep learning-based convolutional neural networks have enabled major advances in the development of artificial intelligence (AI) software applications. Modern AI applications offer comprehensive multiorgan evaluation. OBJECTIVE. The purpose of this article was to evaluate the impact of an automated AI platform, integrated into the clinical workflow for chest CT interpretation, on radiologists' interpretation times in a real-world clinical setting. METHODS. In this prospective single-center study, a commercial AI software solution was integrated into the clinical workflow for chest CT interpretation. The software provided automated analysis of cardiac, pulmonary, and musculoskeletal findings, including labeling, segmenting, and measuring normal structures as well as detecting, labeling, and measuring abnormalities. AI-annotated images and autogenerated summary results were stored in the PACS and available to interpreting radiologists. A total of 390 patients (204 women, 186 men; mean age, 62.8 ± 13.3 [SD] years) who underwent outpatient chest CT between January 19, 2021, and January 28, 2021, were included. Scans were randomized using 1:1 allocation between AI-assisted and non-AI-assisted arms and were clinically interpreted by one of three cardiothoracic radiologists (65 scans per arm per radiologist; total of 195 scans per arm), who recorded interpretation times using a stopwatch. Findings were categorized according to review of report impressions. Interpretation times were compared between arms. RESULTS. Mean interpretation times were significantly shorter in the AI-assisted than in the non-AI-assisted arm for all three readers (289 ± 89 vs 344 ± 129 seconds, p < .001; 449 ± 110 vs 649 ± 82 seconds, p < .001; 281 ± 114 vs 348 ± 93 seconds, p = .01) and for readers combined (328 ± 122 vs 421 ± 175 seconds, p < .001).
For readers combined, the mean difference was 93 seconds (95% CI, 63-123 seconds), corresponding to a 22.1% reduction in the AI-assisted arm. Mean interpretation time was also shorter in the AI-assisted arm than in the non-AI-assisted arm for contrast-enhanced scans (by 83 seconds), noncontrast scans (104 seconds), negative scans (84 seconds), positive scans without significant new findings (117 seconds), and positive scans with significant new findings (92 seconds). CONCLUSION. Cardiothoracic radiologists exhibited a 22.1% reduction in chest CT interpretation times when they had access to results from an automated AI support platform during real-world clinical practice. CLINICAL IMPACT. Integration of the AI support platform into the clinical workflow improved radiologist efficiency.
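The headline figures above are internally consistent; a quick arithmetic check using the combined-reader means quoted in the abstract:

```python
# Check that the reported 93-second mean difference and 22.1% reduction
# follow from the combined-reader means (328 s AI-assisted, 421 s without).
ai_assisted, unassisted = 328, 421       # mean interpretation time, seconds
mean_diff = unassisted - ai_assisted
reduction_pct = 100 * mean_diff / unassisted
print(mean_diff)                 # 93
print(round(reduction_pct, 1))   # 22.1
```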
22
Artificial Intelligence (AI) for Lung Nodules, From the AJR Special Series on AI Applications. AJR Am J Roentgenol 2022; 219:703-712. [PMID: 35544377] [DOI: 10.2214/ajr.22.27487]
Abstract
Interest in artificial intelligence (AI) applications for lung nodules continues to grow among radiologists, particularly with the expanding eligibility criteria and clinical utilization of lung cancer screening CT. AI has been heavily investigated for detecting and characterizing lung nodules and for guiding prognostic assessment. AI tools have also been used for image postprocessing (e.g., rib suppression on radiography or vessel suppression on CT) and for noninterpretive aspects of reporting and workflow, including management of nodule follow-up. Despite growing interest in, rapid development of, and FDA approval of AI tools for pulmonary nodule evaluation, their integration into clinical practice has been limited. Challenges to clinical adoption have included concerns about generalizability, regulatory issues, technical hurdles in implementation, and human skepticism. Further validation of AI tools for clinical use and demonstration of benefit in terms of patient-oriented outcomes are also needed. This article provides an overview of potential applications of AI tools in the imaging evaluation of lung nodules and discusses the challenges faced by practices interested in clinical implementation of such tools.
23
Niu C, Wang G. Unsupervised contrastive learning based transformer for lung nodule detection. Phys Med Biol 2022; 67. [PMID: 36113445] [PMCID: PMC10040209] [DOI: 10.1088/1361-6560/ac92ba]
Abstract
Objective. Early detection of lung nodules with computed tomography (CT) is critical for longer survival and better quality of life in lung cancer patients. Computer-aided detection/diagnosis (CAD) has proven valuable as a second or concurrent reader in this context. However, accurate detection of lung nodules remains a challenge for such CAD systems, and even for radiologists, due not only to the variability in size, location, and appearance of lung nodules but also to the complexity of lung structures. This leads to a high false-positive rate with CAD, compromising its clinical efficacy. Approach. Motivated by recent computer vision techniques, here we present a self-supervised region-based 3D transformer model to identify lung nodules among a set of candidate regions. Specifically, a 3D vision transformer is developed that divides a CT volume into a sequence of non-overlapping cubes, extracts embedding features from each cube with an embedding layer, and analyzes all embedding features with a self-attention mechanism for the prediction. To effectively train the transformer model on a relatively small dataset, a region-based contrastive learning method is used to boost performance by pre-training the 3D transformer with public CT images. Results. Our experiments show that the proposed method can significantly improve the performance of lung nodule screening in comparison with commonly used 3D convolutional neural networks. Significance. This study demonstrates a promising direction for improving the performance of current CAD systems for lung nodule detection.
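The cube-tokenization step described above can be sketched as follows (a minimal illustration, not the authors' code; the cube size and volume shape are arbitrary): a CT volume is split into non-overlapping cubes, and each flattened cube becomes one token for the embedding layer.

```python
import numpy as np

def volume_to_tokens(vol, cube=4):
    """Split a (D, H, W) volume into non-overlapping cube^3 patches and
    flatten each patch into one token (row)."""
    D, H, W = vol.shape
    assert D % cube == 0 and H % cube == 0 and W % cube == 0
    v = vol.reshape(D // cube, cube, H // cube, cube, W // cube, cube)
    v = v.transpose(0, 2, 4, 1, 3, 5)   # bring the three cube axes together
    return v.reshape(-1, cube ** 3)     # (num_cubes, voxels_per_cube)

vol = np.arange(8 * 8 * 8, dtype=np.float32).reshape(8, 8, 8)
tokens = volume_to_tokens(vol)
print(tokens.shape)  # (8, 64): a 2x2x2 grid of cubes, 64 voxels each
```

An embedding layer would then project each 64-voxel token to the transformer's model dimension before self-attention is applied.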
Affiliation(s)
- Chuang Niu: Biomedical Imaging Center, Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, New York, United States of America
- Ge Wang: Biomedical Imaging Center, Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, New York, United States of America
24
Lee SY, Ha S, Jeon MG, Li H, Choi H, Kim HP, Choi YR, I H, Jeong YJ, Park YH, Ahn H, Hong SH, Koo HJ, Lee CW, Kim MJ, Kim YJ, Kim KW, Choi JM. Localization-adjusted diagnostic performance and assistance effect of a computer-aided detection system for pneumothorax and consolidation. NPJ Digit Med 2022; 5:107. [PMID: 35908091] [PMCID: PMC9339006] [DOI: 10.1038/s41746-022-00658-x]
Abstract
While many deep-learning-based computer-aided detection (CAD) systems have been developed and commercialized for abnormality detection in chest radiographs (CXRs), their ability to localize a target abnormality is rarely reported. Localization accuracy is important in terms of model interpretability, which is crucial in clinical settings. Moreover, diagnostic performance is likely to vary depending on the thresholds that define an accurate localization. In a multi-center, stand-alone clinical trial using temporal and external validation datasets of 1,050 CXRs, we evaluated the localization accuracy, localization-adjusted discrimination, and calibration of a commercially available deep-learning-based CAD for detecting consolidation and pneumothorax. For consolidation, the CAD achieved an image-level AUROC (95% CI) of 0.960 (0.945, 0.975), sensitivity of 0.933 (0.899, 0.959), specificity of 0.948 (0.930, 0.963), Dice of 0.691 (0.664, 0.718), and moderate calibration; for pneumothorax, an image-level AUROC of 0.978 (0.965, 0.991), sensitivity of 0.956 (0.923, 0.978), specificity of 0.996 (0.989, 0.999), Dice of 0.798 (0.770, 0.826), and moderate calibration. Diagnostic performance varied substantially when localization accuracy was accounted for but remained high at the minimum threshold of clinical relevance. In a separate trial of diagnostic impact using 461 CXRs, the causal effect of CAD assistance on clinicians' diagnostic performance was estimated. After adjusting for age, sex, dataset, and abnormality type, the CAD improved clinicians' diagnostic performance on average (OR [95% CI] = 1.73 [1.30, 2.32]; p < 0.001), although the effects varied substantially by clinical background. The CAD was found to have high stand-alone diagnostic performance and may beneficially impact clinicians' diagnostic performance when used in clinical settings.
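The Dice scores above quantify the overlap between the CAD's localization and the reference annotation. A minimal sketch (the binary masks below are hypothetical and flattened to one dimension; they are not trial data):

```python
def dice(pred, ref):
    """Dice coefficient between two binary masks (flattened to lists):
    twice the intersection divided by the sum of mask sizes."""
    intersection = sum(p and r for p, r in zip(pred, ref))
    return 2 * intersection / (sum(pred) + sum(ref))

pred = [1, 1, 1, 0, 0, 0]   # predicted lesion pixels
ref  = [0, 1, 1, 1, 1, 0]   # reference lesion pixels
print(dice(pred, ref))      # 2*2/(3+4) = 4/7, about 0.571
```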
Affiliation(s)
- Sun Yeop Lee: Department of Medical Artificial Intelligence, Deepnoid, Inc., Seoul, Republic of Korea
- Sangwoo Ha: Department of Medical Artificial Intelligence, Deepnoid, Inc., Seoul, Republic of Korea
- Min Gyeong Jeon: Department of Medical Artificial Intelligence, Deepnoid, Inc., Seoul, Republic of Korea
- Hao Li: Department of Medical Artificial Intelligence, Deepnoid, Inc., Seoul, Republic of Korea
- Hyunju Choi: Department of Medical Artificial Intelligence, Deepnoid, Inc., Seoul, Republic of Korea
- Hwa Pyung Kim: Department of Medical Artificial Intelligence, Deepnoid, Inc., Seoul, Republic of Korea
- Ye Ra Choi: Department of Radiology, Seoul Metropolitan Government-Seoul National University Boramae Medical Center, Seoul, Republic of Korea; Department of Radiology, Seoul National University College of Medicine, Seoul, Republic of Korea
- Hoseok I: Department of Thoracic and Cardiovascular Surgery, Pusan National University School of Medicine, Busan, Republic of Korea; Convergence Medical Institute of Technology, Biomedical Research Institute, Pusan National University Hospital, Busan, Republic of Korea
- Yeon Joo Jeong: Department of Radiology and Biomedical Research Institute, Pusan National University Hospital, Busan, Republic of Korea
- Yoon Ha Park: Department of Internal Medicine, Jawol Health Center, Incheon, Republic of Korea
- Hyemin Ahn: Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Sang Hyup Hong: Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Hyun Jung Koo: Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Choong Wook Lee: Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Min Jae Kim: Department of Infectious Disease, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Yeon Joo Kim: Department of Respiratory Allergy Medicine, Nowon Eulji Medical Center, Seoul, Republic of Korea
- Kyung Won Kim: Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Jong Mun Choi: Department of Medical Artificial Intelligence, Deepnoid, Inc., Seoul, Republic of Korea
25
Development and Validation of a Multimodal-Based Prognosis and Intervention Prediction Model for COVID-19 Patients in a Multicenter Cohort. Sensors (Basel) 2022; 22:5007. [PMID: 35808502] [PMCID: PMC9269794] [DOI: 10.3390/s22135007]
Abstract
The ability to accurately predict the prognosis and intervention requirements for treating highly infectious diseases, such as COVID-19, can greatly support the effective management of patients, especially in resource-limited settings. The aim of this study was to develop and validate a multimodal artificial intelligence (AI) system using clinical findings, laboratory data, and AI-interpreted features of chest X-rays (CXRs) to predict the prognosis and required interventions for patients diagnosed with COVID-19, using multi-center data. In total, the initial clinical findings, laboratory data, and CXRs of 2,282 real-time reverse transcriptase polymerase chain reaction-confirmed COVID-19 patients were retrospectively collected from 13 medical centers in South Korea between January 2020 and June 2021. The prognostic outcomes collected included intensive care unit (ICU) admission and in-hospital mortality. Intervention outcomes included the use of oxygen (O2) supplementation, mechanical ventilation, and extracorporeal membrane oxygenation (ECMO). A deep learning algorithm detecting 10 common CXR abnormalities (DLAD-10) was applied to the initial CXR. A random forest model with a quantile classifier was used to predict the prognostic and intervention outcomes from the multimodal data. The area under the receiver operating characteristic curve (AUROC) values for the single-modal models, using clinical findings, laboratory data, and the outputs from DLAD-10, were 0.742 (95% confidence interval [CI], 0.696-0.788), 0.794 (0.745-0.843), and 0.770 (0.724-0.815), respectively. The AUROC of the combined model, using clinical findings, laboratory data, and DLAD-10 outputs, was significantly higher, at 0.854 (0.820-0.889), than that of all other models (p < 0.001, DeLong's test). In order of importance, age, dyspnea, consolidation, and fever were significant clinical variables for prediction. The most predictive DLAD-10 output was consolidation.
We have shown that a multimodal AI model can improve the performance of predicting both prognosis and intervention in COVID-19 patients, which could assist in effective treatment and subsequent resource management. Furthermore, image feature extraction using an established AI engine with well-defined clinical outputs, combined with different modes of clinical data, could be a useful way to create an understandable multimodal prediction model.
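The fusion step described above (clinical findings, laboratory data, and DLAD-10 outputs feeding one model) can be sketched as simple per-patient feature concatenation. All names, dimensions, and values below are hypothetical placeholders, not the study's variables:

```python
import numpy as np

rng = np.random.default_rng(0)
n_patients = 4
clinical = np.array([[67, 1, 1],         # e.g. age, dyspnea, fever
                     [52, 0, 1],
                     [80, 1, 0],
                     [45, 0, 0]], dtype=float)
labs   = rng.random((n_patients, 5))     # 5 laboratory values (placeholder)
dlad10 = rng.random((n_patients, 10))    # 10 CXR abnormality scores (placeholder)

# One fused feature row per patient, ready for a random-forest classifier.
X = np.hstack([clinical, labs, dlad10])
print(X.shape)  # (4, 18)
```

A random-forest model would then be fit on `X` against the prognosis or intervention label.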
26
Shin HJ, Son NH, Kim MJ, Kim EK. Diagnostic performance of artificial intelligence approved for adults for the interpretation of pediatric chest radiographs. Sci Rep 2022; 12:10215. [PMID: 35715623] [PMCID: PMC9204675] [DOI: 10.1038/s41598-022-14519-w]
Abstract
Studies of artificial intelligence (AI) applied to pediatric chest radiographs are still scarce. This study evaluated whether AI-based software developed for adult chest radiographs can be used for pediatric chest radiographs. Pediatric patients (≤ 18 years old) who underwent chest radiography from March to May 2021 were included retrospectively. AI-based lesion detection software assessed the presence of nodules, consolidation, fibrosis, atelectasis, cardiomegaly, pleural effusion, pneumothorax, and pneumoperitoneum. Using the pediatric radiologist's results as the standard reference, we assessed the diagnostic performance of the software. For the total of 2,273 chest radiographs, the AI-based software showed a sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy of 67.2%, 91.1%, 57.7%, 93.9%, and 87.5%, respectively. Age was a significant factor for incorrect results (odds ratio 0.821, 95% confidence interval 0.791-0.851). When we excluded cardiomegaly and children 2 years old or younger, sensitivity, specificity, PPV, NPV, and accuracy significantly increased (86.4%, 97.9%, 79.7%, 98.7%, and 96.9%, respectively; all p < 0.001). In conclusion, AI-based software developed with adult chest radiographs showed diagnostic accuracy up to 96.9% for pediatric chest radiographs when cardiomegaly and children 2 years old or younger were excluded. AI-based lesion detection software needs to be validated in younger children.
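The five metrics reported above all derive from a single 2x2 confusion matrix against the radiologist's reference; an illustration with hypothetical counts (not the study's data):

```python
# Hypothetical confusion-matrix counts (radiologist reference vs AI output).
tp, fp, fn, tn = 40, 10, 20, 130

sensitivity = tp / (tp + fn)                   # 40/60  ≈ 0.667
specificity = tn / (tn + fp)                   # 130/140 ≈ 0.929
ppv         = tp / (tp + fp)                   # 40/50  = 0.8
npv         = tn / (tn + fn)                   # 130/150 ≈ 0.867
accuracy    = (tp + tn) / (tp + fp + fn + tn)  # 170/200 = 0.85
print(round(sensitivity, 3), round(specificity, 3), round(accuracy, 3))
```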
Affiliation(s)
- Hyun Joo Shin: Department of Radiology, Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yongin Severance Hospital, Yonsei University College of Medicine, 363, Dongbaekjukjeon-daero, Giheung-gu, Yongin-si, Gyeonggi-do, 16995, Republic of Korea
- Nak-Hoon Son: Department of Statistics, Keimyung University, 1095, Dalgubeol-daero, Dalseo-gu, Daegu, 42601, Republic of Korea
- Min Jung Kim: Department of Pediatrics, Institute of Allergy, Institute for Immunology and Immunological Diseases, Yongin Severance Hospital, Yonsei University College of Medicine, 363, Dongbaekjukjeon-daero, Giheung-gu, Yongin-si, Gyeonggi-do, 16995, Republic of Korea
- Eun-Kyung Kim: Department of Radiology, Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yongin Severance Hospital, Yonsei University College of Medicine, 363, Dongbaekjukjeon-daero, Giheung-gu, Yongin-si, Gyeonggi-do, 16995, Republic of Korea
27
Diagnostic effect of artificial intelligence solution for referable thoracic abnormalities on chest radiography: a multicenter respiratory outpatient diagnostic cohort study. Eur Radiol 2022; 32:3469-3479. [PMID: 34973101] [PMCID: PMC9038825] [DOI: 10.1007/s00330-021-08397-5]
Abstract
Objectives We aimed to evaluate a commercial artificial intelligence (AI) solution on a multicenter cohort of chest radiographs and to compare physicians' ability to detect and localize referable thoracic abnormalities with and without AI assistance. Methods In this retrospective diagnostic cohort study, we investigated 6,006 consecutive patients who underwent both chest radiography and CT. We evaluated a commercially available AI solution intended to facilitate the detection of three chest abnormalities (nodules/masses, consolidation, and pneumothorax) against a reference standard to measure its diagnostic performance. Moreover, twelve physicians, including thoracic radiologists, board-certified radiologists, radiology residents, and pulmonologists, assessed a dataset of 230 randomly sampled chest radiographs. The images were reviewed twice per physician, with and without AI, with a 4-week washout period. We measured the impact of AI assistance on each observer's AUC, sensitivity, specificity, and area under the alternative free-response ROC curve (AUAFROC). Results In the entire set (n = 6,006), the AI solution showed an average sensitivity, specificity, and AUC of 0.885, 0.723, and 0.867, respectively. In the test dataset (n = 230), the average AUC and AUAFROC across observers significantly increased with AI assistance (from 0.861 to 0.886, p = 0.003, and from 0.797 to 0.822, p = 0.003, respectively). Conclusions The diagnostic performance of the AI solution was acceptable for images from respiratory outpatient clinics. The diagnostic performance of physicians marginally improved with the use of the AI solution. Further evaluation of AI assistance for chest radiographs using a prospective design is required to prove its efficacy. Key Points • AI assistance for chest radiographs marginally improved physicians' performance in detecting and localizing referable thoracic abnormalities on chest radiographs.
• The detection or localization of referable thoracic abnormalities by pulmonologists and radiology residents improved with the use of AI assistance. Supplementary Information The online version contains supplementary material available at 10.1007/s00330-021-08397-5.
28
Homayounieh F, Digumarthy S, Ebrahimian S, Rueckel J, Hoppe BF, Sabel BO, Conjeti S, Ridder K, Sistermanns M, Wang L, Preuhs A, Ghesu F, Mansoor A, Moghbel M, Botwin A, Singh R, Cartmell S, Patti J, Huemmer C, Fieselmann A, Joerger C, Mirshahzadeh N, Muse V, Kalra M. An Artificial Intelligence-Based Chest X-ray Model on Human Nodule Detection Accuracy From a Multicenter Study. JAMA Netw Open 2021; 4:e2141096. [PMID: 34964851] [PMCID: PMC8717119] [DOI: 10.1001/jamanetworkopen.2021.41096]
Abstract
IMPORTANCE Most early lung cancers present as pulmonary nodules on imaging, but these can be easily missed on chest radiographs. OBJECTIVE To assess whether a novel artificial intelligence (AI) algorithm can help detect pulmonary nodules on radiographs at different levels of detection difficulty. DESIGN, SETTING, AND PARTICIPANTS This diagnostic study included 100 posteroanterior chest radiograph images, taken between 2000 and 2010, of adult patients from an ambulatory health care center in Germany and a lung image database in the US. Included images were selected to represent nodules with different levels of detection difficulty (from easy to difficult) and comprised both normal and nonnormal controls. EXPOSURES All images were processed with a novel AI algorithm, the AI Rad Companion Chest X-ray. Two thoracic radiologists established the ground truth, and 9 test radiologists from Germany and the US independently reviewed all images in 2 sessions (unaided and AI-aided mode) with at least a 1-month washout period. MAIN OUTCOMES AND MEASURES Each test radiologist recorded the presence of 5 findings (pulmonary nodules, atelectasis, consolidation, pneumothorax, and pleural effusion) and their level of confidence for detecting each finding on a scale of 1 to 10 (1 representing lowest confidence; 10, highest confidence). The analyzed metrics for nodules included sensitivity, specificity, accuracy, and area under the receiver operating characteristic curve (AUC). RESULTS Images from 100 patients were included (mean [SD] age, 55 [20] years; 64 men and 36 women). Mean detection accuracy across the 9 radiologists improved by 6.4% (95% CI, 2.3% to 10.6%) with AI-aided interpretation compared with unaided interpretation. Partial AUCs within the false-positive-rate interval of 0 to 0.2 improved by 5.6% (95% CI, -1.4% to 12.0%) with AI-aided interpretation.
Junior radiologists saw greater improvement in sensitivity for nodule detection with AI-aided interpretation than their senior counterparts (12%; 95% CI, 4% to 19% vs 9%; 95% CI, 1% to 17%), while senior and junior radiologists experienced similar improvements in specificity (4%; 95% CI, -2% to 9% vs 4%; 95% CI, -3% to 5%). CONCLUSIONS AND RELEVANCE In this diagnostic study, an AI algorithm was associated with improved detection of pulmonary nodules on chest radiographs compared with unaided interpretation, across different levels of detection difficulty and reader experience.
Affiliation(s)
- Fatemeh Homayounieh
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts
- Subba Digumarthy
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts
- Shadi Ebrahimian
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts
- Johannes Rueckel
- Department of Radiology, University Hospital, Ludwig Maximilian University of Munich, Munich, Germany
- Boj Friedrich Hoppe
- Department of Radiology, University Hospital, Ludwig Maximilian University of Munich, Munich, Germany
- Bastian Oliver Sabel
- Department of Radiology, University Hospital, Ludwig Maximilian University of Munich, Munich, Germany
- Karsten Ridder
- Medizinisches Versorgungszentrum Professor Uhlenbrock & Partner
- Mateen Moghbel
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts
- Ariel Botwin
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts
- Ramandeep Singh
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts
- Samuel Cartmell
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts
- John Patti
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts
- Victorine Muse
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts
- Mannudeep Kalra
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts
29
Dyer T, Chawda S, Alkilani R, Morgan TN, Hughes M, Rasalingham S. Validation of an artificial intelligence solution for acute triage and rule-out normal of non-contrast CT head scans. Neuroradiology 2021; 64:735-743. [PMID: 34623478 DOI: 10.1007/s00234-021-02826-4]
Abstract
PURPOSE Non-contrast CT head scans provide rapid and accurate diagnosis of acute head injury; however, increased utilisation of CT head scans makes it difficult to prioritise acutely unwell patients and places pressure on busy emergency departments (EDs). This study validates an AI algorithm to triage patients presenting with Intracranial Haemorrhage (ICH) or Acute Infarct whilst also identifying a subset of patients as Normal, with the potential to function as a rule-out test. METHODS In total, 390 CT head scans were collected from 3 institutions in the UK, US and India. Ground-truth labels were assigned by 3 FRCR consultant radiologists. AI performance, as well as the performance of 3 independent radiologists, was measured against the ground-truth labels. RESULTS The algorithm showed AUC values of 0.988 (0.978-0.994), 0.933 (0.901-0.961) and 0.939 (0.919-0.958) for ICH, Acute Infarct and Normal, respectively. Sensitivity/specificity for ICH and Acute Infarct were 0.988/0.925 and 0.833/0.927, respectively, compared to 0.907/0.991 and 0.618/0.977 for radiologists. AI rule-out of Normal scans achieved a negative predictive value (NPV) of 93.0% for the removal of 54.3% of Normal cases, compared to an NPV of 86.8% for radiologists. CONCLUSION We show our algorithm can provide effective triage of ICH and Acute Infarct to prioritise acutely unwell patients. AI can also improve clinical accuracy, with the algorithm identifying 91.3% of radiologist false negatives for ICH and 69.1% for Acute Infarct. Rule-out of Normal scans has substantial potential for workload management in busy EDs, in this case removing 27.4% of all scans with no acute findings missed.
Affiliation(s)
- Tom Dyer
- Behold.ai, 180 Borough High St, London, SE1 1LB, UK
- Sanjiv Chawda
- Department of Radiology, Barking, Havering and Redbridge University Hospitals NHS Trust, Romford, RM7 0AG, UK
- Raed Alkilani
- Department of Radiology, Barking, Havering and Redbridge University Hospitals NHS Trust, Romford, RM7 0AG, UK
- Mike Hughes
- Behold.ai, 180 Borough High St, London, SE1 1LB, UK