1. Sogancioglu E, Ginneken BV, Behrendt F, Bengs M, Schlaefer A, Radu M, Xu D, Sheng K, Scalzo F, Marcus E, Papa S, Teuwen J, Scholten ET, Schalekamp S, Hendrix N, Jacobs C, Hendrix W, Sanchez CI, Murphy K. Nodule Detection and Generation on Chest X-Rays: NODE21 Challenge. IEEE Trans Med Imaging 2024; 43:2839-2853. [PMID: 38530714] [DOI: 10.1109/tmi.2024.3382042]
Abstract
Pulmonary nodules may be an early manifestation of lung cancer, the leading cause of cancer-related deaths among both men and women. Numerous studies have established that deep learning methods can yield high-performance levels in the detection of lung nodules in chest X-rays. However, the lack of gold-standard public datasets slows down the progression of the research and prevents benchmarking of methods for this task. To address this, we organized a public research challenge, NODE21, aimed at the detection and generation of lung nodules in chest X-rays. While the detection track assesses state-of-the-art nodule detection systems, the generation track determines the utility of nodule generation algorithms to augment training data and hence improve the performance of the detection systems. This paper summarizes the results of the NODE21 challenge and performs extensive additional experiments to examine the impact of the synthetically generated nodule training images on the detection algorithm performance.
2. Hanaoka S, Nomura Y, Yoshikawa T, Nakao T, Takenaga T, Matsuzaki H, Yamamichi N, Abe O. Detection of pulmonary nodules in chest radiographs: novel cost function for effective network training with purely synthesized datasets. Int J Comput Assist Radiol Surg 2024. [PMID: 39003437] [DOI: 10.1007/s11548-024-03227-7]
Abstract
PURPOSE Many large radiographic datasets of lung nodules are available, but small, hard-to-detect nodules are rarely validated by computed tomography. Such difficult nodules are crucial for training nodule detection methods. This lack of difficult training nodules can be addressed by artificial nodule synthesis algorithms, which create artificially embedded nodules. This study aimed to develop and evaluate a novel cost function for training networks to detect such lesions. Embedding artificial lesions in healthy medical images is effective when positive cases are insufficient for network training. Although this approach provides both positive (lesion-embedded) images and the corresponding negative (lesion-free) images, no known method effectively uses these pairs for training. This paper presents a novel cost function for segmentation-based detection networks when positive-negative pairs are available. METHODS Based on the classic U-Net, new terms were added to the original Dice loss to reduce false positives and to enable contrastive learning of diseased regions in the image pairs. The experimental network was trained on 131,072 fully synthesized pairs of images simulating lung cancer and evaluated on real chest X-ray images from the Japanese Society of Radiological Technology dataset. RESULTS The proposed method outperformed RetinaNet and a single-shot multibox detector. At 0.2 false positives per image, the sensitivity was 0.688 with fine-tuning under the leave-one-case-out setting and 0.507 without. CONCLUSION To our knowledge, this is the first study in which a method for detecting pulmonary nodules in chest X-ray images was evaluated on a real clinical dataset after being trained on fully synthesized images. The synthesized dataset is available at https://zenodo.org/records/10648433.
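The abstract describes, but does not spell out, a Dice loss extended with terms that exploit lesion-embedded/lesion-free image pairs. A minimal pure-Python sketch of the general idea, in which all names and the weight are illustrative assumptions (the paper's contrastive term is omitted for brevity):

```python
# Sketch of a Dice-based loss with a false-positive penalty for
# lesion-embedded / lesion-free image pairs. Not the paper's code:
# the exact formulation is not given in the abstract.

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss over flattened probability/label lists."""
    inter = sum(p * t for p, t in zip(pred, target))
    denom = sum(pred) + sum(target)
    return 1.0 - (2.0 * inter + eps) / (denom + eps)

def false_positive_penalty(pred_negative):
    """Mean predicted probability on the lesion-free twin image:
    any response there is, by construction, a false positive."""
    return sum(pred_negative) / len(pred_negative)

def paired_loss(pred_pos, target_pos, pred_neg, w_fp=0.5):
    """Dice on the lesion-embedded image plus a penalty on its pair."""
    return dice_loss(pred_pos, target_pos) + w_fp * false_positive_penalty(pred_neg)

loss = paired_loss(
    pred_pos=[0.9, 0.8, 0.1, 0.0],   # predictions on the embedded-nodule image
    target_pos=[1, 1, 0, 0],         # nodule mask
    pred_neg=[0.1, 0.0, 0.0, 0.1],   # predictions on the lesion-free twin
)
```

The pairing matters: because the negative image is pixel-identical to the positive one except for the synthesized nodule, every activation on it can be penalized without risk of suppressing a real lesion.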
Affiliation(s)
- Shouhei Hanaoka
- Department of Radiology, University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan.
- Yukihiro Nomura
- Center for Frontier Medical Engineering, Chiba University, 1-33 Yayoi-cho, Inage-ku, Chiba, Japan
- Department of Computational Diagnostic Radiology and Preventive Medicine, University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan
- Takeharu Yoshikawa
- Department of Computational Diagnostic Radiology and Preventive Medicine, University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan
- Takahiro Nakao
- Department of Computational Diagnostic Radiology and Preventive Medicine, University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan
- Tomomi Takenaga
- Department of Radiology, University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Hirotaka Matsuzaki
- Center for Epidemiology and Preventive Medicine, University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan
- Nobutake Yamamichi
- Center for Epidemiology and Preventive Medicine, University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan
- Osamu Abe
- Department of Radiology, University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
3. Kim JY, Ryu WS, Kim D, Kim EY. Better performance of deep learning pulmonary nodule detection using chest radiography with pixel level labels in reference to computed tomography: data quality matters. Sci Rep 2024; 14:15967. [PMID: 38987309] [PMCID: PMC11237128] [DOI: 10.1038/s41598-024-66530-y]
Abstract
Labeling errors can significantly impact the performance of deep learning models used for screening chest radiographs. Deep learning models for detecting pulmonary nodules are particularly vulnerable to such errors, mainly because normal chest radiographs and those with nodules obscured by ribs appear similar. Thus, high-quality datasets with labels referenced to chest computed tomography (CT) are required to prevent the misclassification of nodular chest radiographs as normal. From this perspective, a deep learning strategy employing chest radiography data with pixel-level annotations referenced to chest CT scans may improve nodule detection and localization compared with image-level labels. We trained models on a National Institutes of Health (NIH) chest radiograph-based labeling dataset and an AI-HUB CT-based labeling dataset, employing a DenseNet architecture with squeeze-and-excitation blocks. We developed four models to assess whether CT-based versus chest radiography-based labeling and pixel-level versus image-level labeling would improve the deep learning models' performance in detecting nodules. The models' performance was evaluated using two external validation datasets. The AI-HUB dataset with image-level labeling outperformed the NIH dataset (AUC 0.88 vs. 0.71 and 0.78 vs. 0.73 in the two external datasets, respectively; both p < 0.001). However, the AI-HUB data annotated at the pixel level produced the best model (AUC 0.91 and 0.86 in the external datasets), and in terms of nodule localization, it significantly outperformed models trained with image-level annotation data, with a Dice coefficient ranging from 0.36 to 0.58. Our findings underscore the importance of accurately labeled data in developing reliable deep learning algorithms for nodule detection in chest radiography.
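The Dice coefficient used here to score nodule localization is a standard overlap measure between a predicted and a reference mask; a minimal sketch on toy binary masks (illustrative, not the study's code):

```python
def dice_coefficient(pred_mask, ref_mask):
    """Dice overlap between two binary masks given as flat 0/1 lists:
    2*|A∩B| / (|A| + |B|), ranging from 0 (disjoint) to 1 (identical)."""
    inter = sum(p & r for p, r in zip(pred_mask, ref_mask))
    total = sum(pred_mask) + sum(ref_mask)
    return 1.0 if total == 0 else 2.0 * inter / total

# toy 1-D masks standing in for pixel-level nodule annotations
pred = [0, 1, 1, 1, 0, 0]
ref  = [0, 0, 1, 1, 1, 0]
score = dice_coefficient(pred, ref)   # overlap of 2 pixels out of 3+3
```

With 2 overlapping pixels and 3 pixels in each mask, the score is 2*2/6 ≈ 0.67, which sits just above the 0.36-0.58 range the study reports for its pixel-level models.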
Affiliation(s)
- Jae Yong Kim
- Artificial Intelligence Research Center, JLK Inc., 5 Teheran-ro 33-gil, Seoul, Republic of Korea
- Wi-Sun Ryu
- Artificial Intelligence Research Center, JLK Inc., 5 Teheran-ro 33-gil, Seoul, Republic of Korea
- Dongmin Kim
- Artificial Intelligence Research Center, JLK Inc., 5 Teheran-ro 33-gil, Seoul, Republic of Korea
- Eun Young Kim
- Department of Radiology, Incheon Sejong Hospital, 20, Gyeyangmunhwa-ro, Gyeyang-gu, Incheon, 21080, Republic of Korea
4. Hwang EJ. [Clinical Application of Artificial Intelligence-Based Detection Assistance Devices for Chest X-Ray Interpretation: Current Status and Practical Considerations]. J Korean Soc Radiol 2024; 85:693-704. [PMID: 39130790] [PMCID: PMC11310435] [DOI: 10.3348/jksr.2024.0052]
Abstract
Artificial intelligence (AI) technology is being actively applied to the interpretation of medical imaging, such as chest X-rays. AI-based software medical devices, which automatically detect various types of abnormal findings in chest X-ray images to assist physicians in their interpretation, are being actively commercialized and clinically implemented in Korea. Several important issues need to be considered for AI-based detection assistant tools to be applied in clinical practice: evaluation of performance and efficacy prior to implementation; determination of the target application, its range, and the method of delivering results; and monitoring after implementation, along with legal liability issues. Appropriate decision making regarding these devices, based on the situation in each institution, is necessary. To ensure the safe and efficient implementation and operation of AI-based detection assistant tools, radiologists must be engaged both as experts in medical image interpretation and as medical experts assessing the software in these devices.
5. Wang M, Shu J, Wang Y, Zhang W, Zheng K, Zhou S, Yang D, Cui H. Ultrasensitive PD-L1-Expressing Exosome Immunosensors Based on a Chemiluminescent Nickel-Cobalt Hydroxide Nanoflower for Diagnosis and Classification of Lung Adenocarcinoma. ACS Sens 2024; 9:3444-3454. [PMID: 38847105] [DOI: 10.1021/acssensors.4c00954]
Abstract
Programmed death ligand-1 (PD-L1)-expressing exosomes are considered a potential marker for the diagnosis and classification of lung adenocarcinoma (LUAD). There is an urgent need to develop highly sensitive and accurate chemiluminescence (CL) immunosensors for the detection of PD-L1-expressing exosomes. Herein, N-(4-aminobutyl)-N-ethylisoluminol-functionalized nickel-cobalt hydroxide (NiCo-DH-AA) with a hollow nanoflower structure was synthesized as a highly efficient CL nanoprobe, using gold nanoparticles as a "bridge". The resulting NiCo-DH-AA exhibited strong and stable CL emission, ascribed to the exceptional catalytic capability and large specific surface area of NiCo-DH, along with the capacity of the AuNPs to facilitate free radical generation. On this basis, an ultrasensitive sandwich CL immunosensor for the detection of PD-L1-expressing exosomes was constructed using PD-L1 antibody-modified NiCo-DH-AA as an effective signal probe and rabbit anti-CD63 polyclonal antibody-modified carboxylated magnetic beads as a capture platform. The immunosensor demonstrated outstanding analytical performance, with a wide detection range of 4.75 × 10^3 to 4.75 × 10^8 particles/mL and a low detection limit of 7.76 × 10^2 particles/mL, over 2 orders of magnitude lower than the previously reported CL method for detecting PD-L1-expressing exosomes. Importantly, it differentiated well not only between healthy persons and LUAD patients (100% specificity and 87.5% sensitivity) but also between patients with minimally invasive adenocarcinoma and invasive adenocarcinoma (92.3% specificity and 52.6% sensitivity). Therefore, this study not only presents an ultrasensitive and accurate diagnostic method for LUAD but also offers a novel, simple, and noninvasive approach for the classification of LUAD.
Affiliation(s)
- Manli Wang
- Key Laboratory of Precision and Intelligent Chemistry, University of Science and Technology of China, Hefei, Anhui 230026, China
- Jiangnan Shu
- Key Laboratory of Precision and Intelligent Chemistry, University of Science and Technology of China, Hefei, Anhui 230026, China
- Yisha Wang
- Key Laboratory of Precision and Intelligent Chemistry, University of Science and Technology of China, Hefei, Anhui 230026, China
- Wencan Zhang
- Key Laboratory of Precision and Intelligent Chemistry, University of Science and Technology of China, Hefei, Anhui 230026, China
- Keying Zheng
- Key Laboratory of Precision and Intelligent Chemistry, University of Science and Technology of China, Hefei, Anhui 230026, China
- Shengnian Zhou
- The Second Department of Thoracic Surgery, Anhui Chest Hospital, Hefei, Anhui 230022, China
- Dongliang Yang
- The Second Department of Thoracic Surgery, Anhui Chest Hospital, Hefei, Anhui 230022, China
- Hua Cui
- Key Laboratory of Precision and Intelligent Chemistry, University of Science and Technology of China, Hefei, Anhui 230026, China
6. Zhang Y, Zheng B, Zeng F, Cheng X, Wu T, Peng Y, Zhang Y, Xie Y, Yi W, Chen W, Wu J, Li L. Potential of digital chest radiography-based deep learning in screening and diagnosing pneumoconiosis: An observational study. Medicine (Baltimore) 2024; 103:e38478. [PMID: 38905434] [PMCID: PMC11191863] [DOI: 10.1097/md.0000000000038478]
Abstract
The diagnosis of pneumoconiosis is complex and subjective, leading to inevitable variability in readings, especially among inexperienced doctors. To improve accuracy, a computer-assisted diagnosis system can be used for more effective pneumoconiosis diagnosis. Three models (ResNet50, ResNet101, and DenseNet) were used for pneumoconiosis classification based on 1250 chest X-ray images. Three experienced and highly qualified physicians read the collected digital radiography images and classified them from category 0 to category III in a double-blinded manner. Readings on which the 3 physicians agreed were considered the relative gold standard. The 3 models were then trained and tested on these images, and their performance was evaluated using multi-class classification metrics. We used kappa values and accuracy to evaluate the consistency and reliability of the optimal model against the clinical classification. The results showed that ResNet101 was the optimal model among the 3 convolutional neural networks. The AUC of ResNet101 was 1.0, 0.9, 0.89, and 0.94 for detecting pneumoconiosis categories 0, I, II, and III, respectively. The micro-average and macro-average AUC values were 0.93 and 0.94, respectively. Compared with the relative gold-standard clinical classification, the accuracy and kappa values of ResNet101 were 0.72 and 0.7111 for four-class classification and 0.98 and 0.955 for dichotomous classification, respectively. This study developed a deep learning-based model for the screening and staging of pneumoconiosis using chest radiographs. The ResNet101 model classified pneumoconiosis better than the radiologists, and the dichotomous classification displayed outstanding performance, indicating the feasibility of deep learning techniques in pneumoconiosis screening.
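The micro- and macro-average AUCs reported here follow the usual one-vs-rest construction for multi-class problems; a small illustrative sketch (function names and toy data are ours, not the study's):

```python
# Sketch of macro-average one-vs-rest AUC for a 4-class problem
# (pneumoconiosis categories 0-III). Illustrative only.

def binary_auc(scores, labels):
    """Rank-based AUC: probability that a positive outscores a negative
    (ties count as half a win)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def macro_auc(score_matrix, labels, n_classes):
    """Unweighted mean of per-class one-vs-rest AUCs (macro-average)."""
    per_class = []
    for c in range(n_classes):
        one_vs_rest = [1 if y == c else 0 for y in labels]
        per_class.append(binary_auc([row[c] for row in score_matrix], one_vs_rest))
    return sum(per_class) / n_classes

# toy softmax-like scores for 4 radiographs, one per category 0-III
scores = [[0.7, 0.1, 0.1, 0.1],
          [0.1, 0.7, 0.1, 0.1],
          [0.1, 0.1, 0.7, 0.1],
          [0.1, 0.1, 0.1, 0.7]]
labels = [0, 1, 2, 3]
m_auc = macro_auc(scores, labels, 4)   # perfectly separable toy data
```

The micro-average instead pools all (sample, class) decisions into one binary problem before computing a single AUC, which weights classes by their prevalence; on balanced toy data like this, the two averages coincide.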
Affiliation(s)
- Yajuan Zhang
- Department of Radiology, Guangzhou Twelfth People’s Hospital, Guangzhou, China
- Bowen Zheng
- Department of Radiology, Nanfang Hospital, Southern Medical University, Guangzhou, China
- Fengxia Zeng
- Department of Radiology, Nanfang Hospital, Southern Medical University, Guangzhou, China
- Xiaoke Cheng
- Department of Radiology, Guangzhou Twelfth People’s Hospital, Guangzhou, China
- Tianqiong Wu
- Department of Radiology, Guangzhou Twelfth People’s Hospital, Guangzhou, China
- Yuli Peng
- Department of Radiology, Guangzhou Twelfth People’s Hospital, Guangzhou, China
- Yonliang Zhang
- Department of Radiology, Guangzhou Twelfth People’s Hospital, Guangzhou, China
- Yuanlin Xie
- Department of Radiology, Sanshui District Institute for Disease Control and Prevention, Foshan, Guangdong, China
- Wei Yi
- Department of Radiology, The Third People’s Hospital of Yunnan Province, Yunnan, China
- Weiguo Chen
- Department of Radiology, Nanfang Hospital, Southern Medical University, Guangzhou, China
- Jiefang Wu
- Department of Radiology, Nanfang Hospital, Southern Medical University, Guangzhou, China
- Long Li
- Department of Radiology, Guangzhou Twelfth People’s Hospital, Guangzhou, China
7. Baniasadi A, Das JP, Prendergast CM, Beizavi Z, Ma HY, Jaber MY, Capaccione KM. Imaging at the nexus: how state of the art imaging techniques can enhance our understanding of cancer and fibrosis. J Transl Med 2024; 22:567. [PMID: 38872212] [PMCID: PMC11177383] [DOI: 10.1186/s12967-024-05379-1]
Abstract
Both cancer and fibrosis are diseases involving dysregulation of cell signaling pathways, resulting in an altered cellular microenvironment that ultimately leads to progression of the condition. The two disease entities share common molecular pathophysiology, and recent research has illuminated how each promotes the other. Multiple imaging techniques have been developed to aid in the early and accurate diagnosis of each disease, and given the commonalities between their pathophysiology, advances in imaging one disease have opened new avenues to study the other. Here, we detail the most up-to-date advances in imaging techniques for each disease and how they have crossed over to improve detection and monitoring of the other. We explore techniques in positron emission tomography (PET), magnetic resonance imaging (MRI), second-generation harmonic imaging (SGHI), ultrasound (US), radiomics, and artificial intelligence (AI). A new diagnostic imaging tool in PET/computed tomography (CT) is the use of radiolabeled fibroblast activation protein inhibitor (FAPI). SGHI uses high-frequency sound waves to penetrate deeper into the tissue, providing a more detailed view of the tumor microenvironment. Artificial intelligence, with the aid of advanced deep learning (DL) algorithms, has been highly effective in training computer systems to diagnose and classify neoplastic lesions in multiple organs. Ultimately, advancing imaging techniques in cancer and fibrosis can lead to significantly more timely and accurate diagnoses of both diseases, resulting in better patient outcomes.
Affiliation(s)
- Alireza Baniasadi
- Department of Radiology, Columbia University Irving Medical Center, 622 W 168Th Street, New York, NY, 10032, USA.
- Jeeban P Das
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, 10065, USA
- Conor M Prendergast
- Department of Radiology, Columbia University Irving Medical Center, 622 W 168th Street, New York, NY, 10032, USA
- Zahra Beizavi
- Department of Radiology, Columbia University Irving Medical Center, 622 W 168th Street, New York, NY, 10032, USA
- Hong Y Ma
- Department of Radiology, Columbia University Irving Medical Center, 622 W 168th Street, New York, NY, 10032, USA
- Kathleen M Capaccione
- Department of Radiology, Columbia University Irving Medical Center, 622 W 168th Street, New York, NY, 10032, USA
8. Too CW, Fong KY, Hang G, Sato T, Nyam CQ, Leong SH, Ng KW, Ng WL, Kawai T. Artificial Intelligence-Guided Segmentation and Path Planning Software for Transthoracic Lung Biopsy. J Vasc Interv Radiol 2024; 35:780-789.e1. [PMID: 38355040] [DOI: 10.1016/j.jvir.2024.02.006]
Abstract
PURPOSE To validate the sensitivity and specificity of a 3-dimensional (3D) convolutional neural network (CNN) artificial intelligence (AI) software for lung lesion detection and to establish concordance between AI-generated needle paths and those used in actual biopsy procedures. MATERIALS AND METHODS This was a retrospective study using computed tomography (CT) scans from 3 hospitals. Inclusion criteria were scans with 1-5 nodules of diameter ≥5 mm; exclusion criteria were poor-quality scans or those with nodules measuring <5 mm in diameter. In the lesion detection phase, 2,147 nodules from 219 scans were used to develop and train the deep learning 3D-CNN to detect lesions. The 3D-CNN was validated with 235 scans (354 lesions) for sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) analysis. In the path planning phase, Bayesian optimization was used to propose possible needle trajectories for lesion biopsy while avoiding vital structures. Software-proposed needle trajectories were compared with actual biopsy path trajectories from intraprocedural CT scans in 150 patients, with a match defined as an angular deviation of <5° between the 2 trajectories. RESULTS The model achieved an overall AUC of 97.4% (95% CI, 96.3%-98.2%) for lesion detection, with a mean sensitivity of 93.5% and a mean specificity of 93.2%. Among the software-proposed needle trajectories, 85.3% were feasible, with 82% matching actual paths and similar performance between supine and prone/oblique patient orientations (P = .311). The mean angular deviation between matching trajectories was 2.30° (SD ± 1.22); the mean path deviation was 2.94 mm (SD ± 1.60). CONCLUSIONS Segmentation, lesion detection, and path planning for CT-guided lung biopsy using AI-guided software showed promising results. Future integration with automated robotic systems may pave the way toward fully automated biopsy procedures.
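The <5° match criterion compares the direction of the proposed and actual needle paths; a minimal sketch of such an angular-deviation check (the vectors and names are illustrative, not taken from the study):

```python
import math

def angular_deviation_deg(v1, v2):
    """Angle in degrees between two 3-D direction vectors,
    computed from the normalized dot product."""
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    cosang = max(-1.0, min(1.0, dot / (n1 * n2)))  # clamp rounding error
    return math.degrees(math.acos(cosang))

planned = (0.0, 0.00, 1.0)   # software-proposed needle direction
actual  = (0.0, 0.05, 1.0)   # direction measured on intraprocedural CT
dev = angular_deviation_deg(planned, actual)
is_match = dev < 5.0         # the study's concordance criterion
```

Here the two directions differ by roughly 2.9°, comparable to the 2.30° mean deviation the study reports for matching trajectories, so this pair would count as a match.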
Affiliation(s)
- Chow Wei Too
- Department of Vascular and Interventional Radiology, Singapore General Hospital, Singapore, Singapore; Division of Radiological Sciences, Singapore General Hospital, Singapore, Singapore; Radiological Sciences Academic Clinical Program, SingHealth Duke-NUS Academic Medical Centre, Singapore, Singapore.
- Khi Yung Fong
- Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Guanqi Hang
- Department of Vascular and Interventional Radiology, Singapore General Hospital, Singapore, Singapore
- Takafumi Sato
- Department of Radiology, Nagoya City University East Medical Center, Nagoya, Japan
- Ka Wei Ng
- NDR Medical Technology, Singapore, Singapore
- Wei Lin Ng
- Department of Biomedical Imaging, Faculty of Medicine, University of Malaya, Kuala Lumpur, Malaysia
- Tatsuya Kawai
- Department of Radiology, Nagoya City University Graduate School of Medical Sciences, Nagoya, Japan
9. Yim D, Khuntia J, Parameswaran V, Meyers A. Preliminary Evidence of the Use of Generative AI in Health Care Clinical Services: Systematic Narrative Review. JMIR Med Inform 2024; 12:e52073. [PMID: 38506918] [PMCID: PMC10993141] [DOI: 10.2196/52073]
Abstract
BACKGROUND Generative artificial intelligence tools and applications (GenAI) are increasingly used in health care. Physicians, specialists, and other providers have started using GenAI primarily as an aid or tool to gather knowledge, provide information, train, or generate suggestive dialogue between physicians and patients or between physicians and patients' families or friends. However, unless the use of GenAI is oriented toward being helpful in clinical service encounters that can improve the accuracy of diagnosis, treatment, and patient outcomes, its expected potential will not be achieved. As adoption continues, it is essential to validate the effectiveness of infusing GenAI as an intelligent technology into service encounters, to understand the gap in actual clinical service use of GenAI. OBJECTIVE This study synthesizes preliminary evidence on how GenAI assists, guides, and automates clinical service rendering and encounters in health care. The review scope was limited to articles published in peer-reviewed medical journals. METHODS We screened and selected 0.38% (161/42,459) of articles published between January 1, 2020, and May 31, 2023, identified from PubMed. We followed the protocols outlined in the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines to select highly relevant studies with at least 1 element on clinical use, evaluation, and validation to provide evidence of GenAI use in clinical services. The articles were classified based on their relevance to clinical service functions or activities, using the descriptive and analytical information presented in the articles. RESULTS Of 161 articles, 141 (87.6%) reported using GenAI to assist services through knowledge access, collation, and filtering. GenAI was used for disease detection (19/161, 11.8%), diagnosis (14/161, 8.7%), and screening processes (12/161, 7.5%) in the areas of radiology (17/161, 10.6%), cardiology (12/161, 7.5%), gastrointestinal medicine (4/161, 2.5%), and diabetes (6/161, 3.7%). The literature synthesis suggests that GenAI is mainly used for diagnostic processes, improvement of diagnostic accuracy, and screening and diagnostic purposes via knowledge access. Although this solves the problem of knowledge access and may improve diagnostic accuracy, it is not yet oriented toward higher value creation in health care. CONCLUSIONS GenAI currently informs, rather than assists or automates, clinical service functions in health care. GenAI has potential in clinical services, but that potential has yet to be actualized. More clinical service-level evidence is needed that GenAI streamlines some functions or provides more automated help than information retrieval alone. To transform health care as purported, more studies must show GenAI applications that automate and guide human-performed services, keeping up with the optimism that forward-thinking health care organizations will take advantage of GenAI.
Affiliation(s)
- Dobin Yim
- Loyola University Maryland, MD, United States
- Jiban Khuntia
- University of Colorado Denver, Denver, CO, United States
- Arlen Meyers
- University of Colorado Denver, Denver, CO, United States
10. Liu C, Wu Z, Wang B, Zhu M. Pulmonary nodule detection in x-ray images by feature augmentation and context aggregation. Phys Med Biol 2024; 69:045002. [PMID: 38237183] [DOI: 10.1088/1361-6560/ad2013]
Abstract
Recent developments in x-ray image based pulmonary nodule detection have achieved remarkable results. However, existing methods are focused on transferring off-the-shelf coarse-grained classification models and fine-grained detection models rather than developing a dedicated framework optimized for nodule detection. In this paper, we propose PN-DetX, which, to our knowledge, is the first dedicated pulmonary nodule detection framework. PN-DetX incorporates feature fusion and self-attention into x-ray based pulmonary nodule detection tasks, achieving improved detection performance. Specifically, PN-DetX adopts a CSPDarknet backbone to extract features and utilizes a feature augmentation module to fuse features from different levels, followed by a context aggregation module to aggregate semantic information. To evaluate the efficacy of our method, we collect a LArge-scale Pulmonary NOdule Detection dataset, LAPNOD, comprising 2954 x-ray images along with expert-annotated ground truths. To our knowledge, this is the first large-scale chest x-ray pulmonary nodule detection dataset. Experiments demonstrate that our method outperforms the baseline by 3.8% mAP and 5.1% AP0.5. The generality of our approach is also evaluated on the publicly available NODE21 dataset. We hope our method serves as an inspiration for future research in the field of pulmonary nodule detection. The dataset and code will be made public.
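The mAP and AP0.5 figures quoted here are standard detection metrics built on intersection-over-union (IoU) between predicted and ground-truth boxes; a minimal IoU sketch illustrating the 0.5 threshold (toy boxes, not the paper's code):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

pred = (10, 10, 50, 50)      # predicted nodule box
gt   = (20, 20, 60, 60)      # ground-truth nodule box
overlap = iou(pred, gt)      # 900 / (1600 + 1600 - 900)
hit_at_05 = overlap >= 0.5   # AP0.5 counts a detection only above this IoU
```

AP0.5 is average precision with matches decided at IoU ≥ 0.5, while mAP typically averages AP over several IoU thresholds; in this toy case the overlap (~0.39) falls short, so the prediction would count as a miss at the 0.5 threshold.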
Affiliation(s)
- Chenglin Liu
- Department of Automation, University of Science and Technology of China, Hefei, People's Republic of China
- Zhi Wu
- School of Cyber Science and Technology, University of Science and Technology of China, Hefei, People's Republic of China
- Binquan Wang
- School of Cyber Science and Technology, University of Science and Technology of China, Hefei, People's Republic of China
- Ming Zhu
- Department of Automation, University of Science and Technology of China, Hefei, People's Republic of China
11. Grenier PA, Brun AL, Mellot F. [The contribution of artificial intelligence (AI) subsequent to the processing of thoracic imaging]. Rev Mal Respir 2024; 41:110-126. [PMID: 38129269] [DOI: 10.1016/j.rmr.2023.12.001]
Abstract
The contribution of artificial intelligence (AI) to medical imaging is currently the object of widespread experimentation. The development of deep learning (DL) methods, particularly convolutional neural networks (CNNs), has led to performance gains often superior to those achieved by conventional machine learning methods. Radiomics is an approach aimed at extracting from images quantitative data, not accessible to the human eye, that express a disease. These data subsequently feed machine learning models and produce diagnostic or prognostic probabilities. The many applications of AI methods in thoracic imaging are currently undergoing evaluation. Chest radiography is a practically ideal field for the development of DL algorithms able to automatically interpret X-rays. Current algorithms can detect up to 14 different abnormalities present either in isolation or in combination. Chest CT is another area offering numerous AI applications. Various algorithms have been specifically trained and validated for the detection and characterization of pulmonary nodules and pulmonary embolism, as well as for the segmentation and quantitative analysis of the extent of diffuse lung diseases (emphysema, infectious pneumonias, interstitial lung disease). In addition, the analysis of medical images can be combined with clinical, biological, and functional data (multi-omics analysis), the objective being to construct predictive approaches to disease prognosis and response to treatment.
Affiliation(s)
- P A Grenier
- Délégation à la recherche clinique et l'innovation, hôpital Foch, Suresnes, France.
| | - A L Brun
- Service de radiologie, hôpital Foch, Suresnes, France
| | - F Mellot
- Service de radiologie, hôpital Foch, Suresnes, France
| |
Collapse
|
12
|
Hamanaka R, Oda M. Can Artificial Intelligence Replace Humans for Detecting Lung Tumors on Radiographs? An Examination of Resected Malignant Lung Tumors. J Pers Med 2024; 14:164. [PMID: 38392597 PMCID: PMC10890665 DOI: 10.3390/jpm14020164] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2023] [Revised: 01/18/2024] [Accepted: 01/29/2024] [Indexed: 02/24/2024] Open
Abstract
OBJECTIVE Although lung cancer screening trials have shown that computed tomography decreases mortality compared with chest radiography, the two are widely used in different kinds of clinical practice. Artificial intelligence can improve outcomes by detecting lung tumors in chest radiographs. Currently, artificial intelligence serves as an aid for physicians interpreting radiograms, but as it evolves, it may become a modality that replaces physicians. Therefore, in this study, we investigated the current state of lung cancer diagnosis by artificial intelligence. METHODS In total, we recruited 174 consecutive patients with malignant pulmonary tumors who underwent surgery after chest radiography that was checked by artificial intelligence before surgery. Artificial intelligence diagnoses were performed using the medical image analysis software EIRL X-ray Lung Nodule version 1.12 (LPIXEL Inc., Tokyo, Japan). RESULTS The artificial intelligence detected pulmonary tumors in 90 cases (51.7% of all patients and 57.7% excluding 18 patients with adenocarcinoma in situ). There was no significant difference in the detection rate of the artificial intelligence among histological types. None of the eighteen cases of adenocarcinoma in situ was detected by either the artificial intelligence or the physicians. In a univariate analysis, the artificial intelligence detected cases with larger histopathological tumor size (p < 0.0001), larger histopathological invasion size (p < 0.0001), and higher maximum standardized uptake values on positron emission tomography-computed tomography (p < 0.0001). In a multivariate analysis, detection by AI was significantly higher in cases with a large histopathological invasive size (p = 0.006). In 156 cases, excluding adenocarcinoma in situ, we examined the rate of artificial intelligence detection by tumor site. Tumors in the lower lung field were less frequently detected (p = 0.019) and tumors in the middle lung field more frequently detected (p = 0.014) than tumors in the upper lung field. CONCLUSIONS Our study showed that artificial intelligence diagnosis of tumor-associated findings, and of areas that overlap with anatomical structures, is not yet satisfactory. While artificial intelligence currently serves to assist physicians in making diagnoses, it may be able to substitute for humans in the future. For now, however, artificial intelligence should be used as an enhancement that aids physicians in the radiologist's role in the workflow.
Affiliation(s)
- Rurika Hamanaka
- Department of Thoracic Surgery, Shin-Yurigaoka General Hospital, 255 Furusawa Asao-ku, Kawasaki 215-0026, Japan
- Makoto Oda
- Department of Thoracic Surgery, Shin-Yurigaoka General Hospital, 255 Furusawa Asao-ku, Kawasaki 215-0026, Japan
|
13
|
Hartoonian S, Hosseini M, Yousefi I, Mahdian M, Ghazizadeh Ahsaie M. Applications of artificial intelligence in dentomaxillofacial imaging-a systematic review. Oral Surg Oral Med Oral Pathol Oral Radiol 2024:S2212-4403(23)01566-3. [PMID: 38637235 DOI: 10.1016/j.oooo.2023.12.790] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2023] [Revised: 12/02/2023] [Accepted: 12/22/2023] [Indexed: 04/20/2024]
Abstract
BACKGROUND Artificial intelligence (AI) technology has been increasingly developed in oral and maxillofacial imaging. The aim of this systematic review was to assess the applications and performance of the developed algorithms in different dentomaxillofacial imaging modalities. STUDY DESIGN A systematic search of the PubMed and Scopus databases was performed. The search strategy combined the following keywords: "Artificial Intelligence," "Machine Learning," "Deep Learning," "Neural Networks," "Head and Neck Imaging," and "Maxillofacial Imaging." Full-text screening and data extraction were conducted independently by two reviewers; any mismatch was resolved by discussion. The risk of bias was assessed by one reviewer and validated by another. RESULTS The search returned a total of 3,392 articles. After careful evaluation of the titles, abstracts, and full texts, 194 articles were included. Most studies focused on AI applications for tooth and implant classification and identification, 3-dimensional cephalometric landmark detection, lesion detection (periapical, jaws, and bone), and osteoporosis detection. CONCLUSION Despite their limitations, the AI models showed promising results. Further studies are needed to explore specific applications and real-world scenarios before these models can be confidently integrated into dental practice.
Affiliation(s)
- Serlie Hartoonian
- School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Matine Hosseini
- School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Iman Yousefi
- School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Mina Mahdian
- Department of Prosthodontics and Digital Technology, Stony Brook University School of Dental Medicine, Stony Brook University, Stony Brook, NY, USA
- Mitra Ghazizadeh Ahsaie
- Department of Oral and Maxillofacial Radiology, School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran.
|
14
|
Kirshenboim Z, Gilat EK, Carl L, Bekker E, Tau N, Klug M, Konen E, Marom EM. Retrospectively assessing evaluation and management of artificial-intelligence detected nodules on uninterpreted chest radiographs in the era of radiologists shortage. Eur J Radiol 2024; 170:111241. [PMID: 38042019 DOI: 10.1016/j.ejrad.2023.111241] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/27/2023] [Revised: 11/17/2023] [Accepted: 11/26/2023] [Indexed: 12/04/2023]
Abstract
PURPOSE High volumes of chest radiographs (CXR) remain uninterpreted due to a severe shortage of radiologists. These CXRs may be informally reported by non-radiologist physicians, or not reviewed at all. Artificial intelligence (AI) software can aid lung nodule detection. Our aim was to assess evaluation and management by non-radiologists of uninterpreted CXRs with AI-detected nodules, compared to retrospective radiology reports. MATERIALS AND METHODS AI-detected nodules on uninterpreted CXRs of adults, performed 30/6/2022-31/1/2023, were evaluated. Excluded were patients with known active malignancy and duplicate CXRs of the same patient. The electronic medical records (EMR) were reviewed, and the clinicians' notes on the CXR and the AI-detected nodule were documented. Dedicated thoracic radiologists retrospectively interpreted all CXRs and, like the clinicians, had access to the AI findings, prior imaging, and the EMR. The radiologists' interpretation served as the ground truth and determined whether the AI-detected nodule was a true lung nodule and whether further workup was required. RESULTS A total of 683 patients met the inclusion criteria. The clinicians commented on 386 (56.5%) CXRs, identified true nodules on 113 CXRs (16.5%), incorrectly reported 31 (4.5%) false nodules as real nodules, and did not mention the AI-detected nodule on 242 (35%) CXRs, of which 68 (10%) patients were retrospectively referred for further workup by the radiologist. For 297 patients (43.5%) there were no comments regarding the CXR in the EMR; of these, 77 nodules (11.3%) were retrospectively referred for further workup by the radiologist. CONCLUSION AI software for lung nodule detection may be insufficient without a formal radiology report and may lead to overdiagnosis or misdiagnosis of nodules.
Affiliation(s)
- Zehavit Kirshenboim
- Division of Diagnostic Radiology, Sheba Medical Center, Ramat Gan, Israel; Faculty of Medicine, Tel Aviv University, Israel.
- Efrat Keren Gilat
- Division of Diagnostic Radiology, Sheba Medical Center, Ramat Gan, Israel; Faculty of Medicine, Tel Aviv University, Israel.
- Lawrence Carl
- Division of Diagnostic Radiology, Sheba Medical Center, Ramat Gan, Israel; Faculty of Medicine, Tel Aviv University, Israel.
- Elena Bekker
- Division of Diagnostic Radiology, Sheba Medical Center, Ramat Gan, Israel; Faculty of Medicine, Tel Aviv University, Israel.
- Noam Tau
- Division of Diagnostic Radiology, Sheba Medical Center, Ramat Gan, Israel; Faculty of Medicine, Tel Aviv University, Israel.
- Maximiliano Klug
- Division of Diagnostic Radiology, Sheba Medical Center, Ramat Gan, Israel; Faculty of Medicine, Tel Aviv University, Israel.
- Eli Konen
- Division of Diagnostic Radiology, Sheba Medical Center, Ramat Gan, Israel; Faculty of Medicine, Tel Aviv University, Israel.
- Edith Michelle Marom
- Division of Diagnostic Radiology, Sheba Medical Center, Ramat Gan, Israel; Faculty of Medicine, Tel Aviv University, Israel.
|
15
|
Gefter WB, Prokop M, Seo JB, Raoof S, Langlotz CP, Hatabu H. Human-AI Symbiosis: A Path Forward to Improve Chest Radiography and the Role of Radiologists in Patient Care. Radiology 2024; 310:e232778. [PMID: 38259206 PMCID: PMC10831473 DOI: 10.1148/radiol.232778] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2023] [Revised: 12/08/2023] [Accepted: 12/18/2023] [Indexed: 01/24/2024]
Affiliation(s)
- Warren B. Gefter
- Department of Radiology, Penn Medicine, University of Pennsylvania, Philadelphia, Pa
- Mathias Prokop
- Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, the Netherlands
- Joon Beom Seo
- Department of Radiology, Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, South Korea
- Suhail Raoof
- Department of Medicine and Radiology, Zucker School of Medicine, Hofstra/Northwell and Lung Institute, Lenox Hill Hospital, New York, NY
- Curtis P. Langlotz
- Department of Radiology and Biomedical Informatics and Center for Artificial Intelligence in Medicine and Imaging, Stanford University, Palo Alto, Calif
- Hiroto Hatabu
- Center for Pulmonary Functional Imaging, Department of Radiology, Brigham and Women’s Hospital and Harvard Medical School, 75 Francis St, Boston, MA 02215
|
16
|
Zhang L, Shao Y, Chen G, Tian S, Zhang Q, Wu J, Bai C, Yang D. An artificial intelligence-assisted diagnostic system for the prediction of benignity and malignancy of pulmonary nodules and its practical value for patients with different clinical characteristics. Front Med (Lausanne) 2023; 10:1286433. [PMID: 38196835 PMCID: PMC10774219 DOI: 10.3389/fmed.2023.1286433] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2023] [Accepted: 12/12/2023] [Indexed: 01/11/2024] Open
Abstract
Objectives This study aimed to explore the value of an artificial intelligence (AI)-assisted diagnostic system in the prediction of pulmonary nodules. Methods The AI system predicted whether nodules were benign or malignant. A total of 260 cases of solitary pulmonary nodules (SPNs) were divided into 173 malignant cases and 87 benign cases based on the surgical pathological diagnosis. A stratified data analysis was applied to compare the diagnostic effectiveness of the AI system between subgroups with different clinical characteristics. Results The accuracy of the AI system in judging benignity and malignancy of the nodules was 75.77% (p < 0.05). We created an ROC curve by calculating the true positive rate (TPR) and the false positive rate (FPR) at different threshold values; the AUC was 0.755. Results of the stratified analysis were as follows. (1) By nodule position: the AUC was 0.677, 0.758, 0.744, 0.982, and 0.725, respectively, for nodules in the left upper lobe, left lower lobe, right upper lobe, right middle lobe, and right lower lobe. (2) By nodule size: the AUC was 0.778, 0.771, and 0.686, respectively, for nodules measuring 5-10, 10-20, and 20-30 mm in diameter. (3) The predictive accuracy was higher for subsolid pulmonary nodules than for solid ones (80.54 vs. 66.67%). Conclusion The AI system can be applied to assist in the prediction of benign and malignant pulmonary nodules. It can provide a valuable reference, especially for the diagnosis of subsolid nodules and of small nodules measuring 5-10 mm in diameter.
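As context for the ROC analysis described in the abstract above, the curve is traced by sweeping a decision threshold over the model's malignancy probabilities and plotting TPR against FPR; the AUC is the area under that trace. A minimal, self-contained sketch (the scores and labels are made-up toy data, not the study's data) might look like:

```python
# Illustrative sketch: building an ROC curve and its AUC from
# predicted malignancy probabilities. Toy data only.

def roc_auc(labels, scores):
    # Sort cases by descending score and sweep the threshold,
    # stepping up in TPR for each positive and in FPR for each negative.
    pairs = sorted(zip(scores, labels), reverse=True)
    pos = sum(labels)
    neg = len(labels) - pos
    tpr, fpr, points = 0.0, 0.0, [(0.0, 0.0)]
    for _, label in pairs:
        if label == 1:
            tpr += 1 / pos
        else:
            fpr += 1 / neg
        points.append((fpr, tpr))
    # Trapezoidal integration of TPR over FPR gives the AUC.
    auc = sum((x2 - x1) * (y1 + y2) / 2
              for (x1, y1), (x2, y2) in zip(points, points[1:]))
    return points, auc

labels = [1, 1, 0, 1, 0, 0, 1, 0]                     # 1 = malignant on pathology
scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.35, 0.2]   # AI probabilities
points, auc = roc_auc(labels, scores)
print(round(auc, 3))  # → 0.75
```

Stratified AUCs such as those reported per lobe or per size band are obtained by running the same computation on each subgroup separately.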
Affiliation(s)
- Lichuan Zhang
- Department of Respiratory Medicine, Affiliated Zhongshan Hospital of Dalian University, Dalian, China
- Yue Shao
- Department of Respiratory Medicine, Affiliated Zhongshan Hospital of Dalian University, Dalian, China
- Guangmei Chen
- Department of Respiratory Medicine, Affiliated Zhongshan Hospital of Dalian University, Dalian, China
- Simiao Tian
- Department of Respiratory Medicine, Affiliated Zhongshan Hospital of Dalian University, Dalian, China
- Qing Zhang
- Department of Respiratory Medicine, Affiliated Zhongshan Hospital of Dalian University, Dalian, China
- Jianlin Wu
- Department of Respiratory Medicine, Affiliated Zhongshan Hospital of Dalian University, Dalian, China
- Chunxue Bai
- Department of Pulmonary and Critical Care Medicine, Zhongshan Hospital Fudan University, Shanghai, China
- Department of Pulmonary and Critical Care Medicine, Zhongshan Hospital (Xiamen), Fudan University, Xiamen, China
- Shanghai Respiratory Research Institution, Shanghai, China
- Dawei Yang
- Department of Pulmonary and Critical Care Medicine, Zhongshan Hospital Fudan University, Shanghai, China
- Department of Pulmonary and Critical Care Medicine, Zhongshan Hospital (Xiamen), Fudan University, Xiamen, China
- Shanghai Respiratory Research Institution, Shanghai, China
|
17
|
Zhang C, Xu J, Tang R, Yang J, Wang W, Yu X, Shi S. Novel research and future prospects of artificial intelligence in cancer diagnosis and treatment. J Hematol Oncol 2023; 16:114. [PMID: 38012673 PMCID: PMC10680201 DOI: 10.1186/s13045-023-01514-5] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2023] [Accepted: 11/20/2023] [Indexed: 11/29/2023] Open
Abstract
Research into the potential benefits of artificial intelligence for comprehending the intricate biology of cancer has grown as a result of the widespread use of deep learning and machine learning in the healthcare sector and the availability of highly specialized cancer datasets. Here, we review new artificial intelligence approaches and how they are being used in oncology. We describe how artificial intelligence might be used in the detection, prognosis, and administration of cancer treatments and introduce the use of the latest large language models such as ChatGPT in oncology clinics. We highlight artificial intelligence applications for omics data types, and we offer perspectives on how the various data types might be combined to create decision-support tools. We also evaluate the present constraints and challenges to applying artificial intelligence in precision oncology. Finally, we discuss how current challenges may be surmounted to make artificial intelligence useful in clinical settings in the future.
Affiliation(s)
- Chaoyi Zhang
- Department of Pancreatic Surgery, Fudan University Shanghai Cancer Center, No. 270 Dong'An Road, Shanghai, 200032, People's Republic of China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, People's Republic of China
- Shanghai Pancreatic Cancer Institute, No. 399 Lingling Road, Shanghai, 200032, People's Republic of China
- Pancreatic Cancer Institute, Fudan University, Shanghai, 200032, People's Republic of China
- Jin Xu
- Department of Pancreatic Surgery, Fudan University Shanghai Cancer Center, No. 270 Dong'An Road, Shanghai, 200032, People's Republic of China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, People's Republic of China
- Shanghai Pancreatic Cancer Institute, No. 399 Lingling Road, Shanghai, 200032, People's Republic of China
- Pancreatic Cancer Institute, Fudan University, Shanghai, 200032, People's Republic of China
- Rong Tang
- Department of Pancreatic Surgery, Fudan University Shanghai Cancer Center, No. 270 Dong'An Road, Shanghai, 200032, People's Republic of China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, People's Republic of China
- Shanghai Pancreatic Cancer Institute, No. 399 Lingling Road, Shanghai, 200032, People's Republic of China
- Pancreatic Cancer Institute, Fudan University, Shanghai, 200032, People's Republic of China
- Jianhui Yang
- Department of Pancreatic Surgery, Fudan University Shanghai Cancer Center, No. 270 Dong'An Road, Shanghai, 200032, People's Republic of China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, People's Republic of China
- Shanghai Pancreatic Cancer Institute, No. 399 Lingling Road, Shanghai, 200032, People's Republic of China
- Pancreatic Cancer Institute, Fudan University, Shanghai, 200032, People's Republic of China
- Wei Wang
- Department of Pancreatic Surgery, Fudan University Shanghai Cancer Center, No. 270 Dong'An Road, Shanghai, 200032, People's Republic of China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, People's Republic of China
- Shanghai Pancreatic Cancer Institute, No. 399 Lingling Road, Shanghai, 200032, People's Republic of China
- Pancreatic Cancer Institute, Fudan University, Shanghai, 200032, People's Republic of China
- Xianjun Yu
- Department of Pancreatic Surgery, Fudan University Shanghai Cancer Center, No. 270 Dong'An Road, Shanghai, 200032, People's Republic of China.
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, People's Republic of China.
- Shanghai Pancreatic Cancer Institute, No. 399 Lingling Road, Shanghai, 200032, People's Republic of China.
- Pancreatic Cancer Institute, Fudan University, Shanghai, 200032, People's Republic of China.
- Si Shi
- Department of Pancreatic Surgery, Fudan University Shanghai Cancer Center, No. 270 Dong'An Road, Shanghai, 200032, People's Republic of China.
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, People's Republic of China.
- Shanghai Pancreatic Cancer Institute, No. 399 Lingling Road, Shanghai, 200032, People's Republic of China.
- Pancreatic Cancer Institute, Fudan University, Shanghai, 200032, People's Republic of China.
|
18
|
Hwang SH, Shin HJ, Kim EK, Lee EH, Lee M. Clinical outcomes and actual consequence of lung nodules incidentally detected on chest radiographs by artificial intelligence. Sci Rep 2023; 13:19732. [PMID: 37957283 PMCID: PMC10643548 DOI: 10.1038/s41598-023-47194-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2023] [Accepted: 11/10/2023] [Indexed: 11/15/2023] Open
Abstract
This study evaluated how often clinically significant lung nodules were detected unexpectedly on chest radiographs (CXR) by artificial intelligence (AI)-based detection software, and whether co-existing findings can aid in the differential diagnosis of lung nodules. Patients (> 18 years old) with AI-detected lung nodules at their first visit from March 2021 to February 2022, except for those in the pulmonology or thoracic surgery departments, were retrospectively included. Three radiologists categorized nodules into malignancy, active inflammation, post-inflammatory sequelae, or "other" groups. Characteristics of the nodules and abnormality scores of co-existing lung lesions were compared. Approximately 1% of patients (152/14,563) had unexpected lung nodules. Among 73 patients with follow-up exams, 69.9% had true positive nodules. Increased abnormality scores for nodules were significantly associated with malignancy (odds ratio [OR] 1.076, P = 0.001). Increased abnormality scores for consolidation (OR 1.033, P = 0.040) and pleural effusion (OR 1.025, P = 0.041) were significantly correlated with active inflammation-type nodules. Abnormality scores for fibrosis (OR 1.036, P = 0.013) and nodules (OR 0.940, P = 0.001) were significantly associated with post-inflammatory sequelae categorization. AI-based lesion-detection software for CXRs in daily practice can help identify clinically significant incidental lung nodules, and referring to accompanying lung lesions may help classify the nodule.
Affiliation(s)
- Shin Hye Hwang
- Department of Radiology, Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yongin Severance Hospital, Yonsei University College of Medicine, 363, Dongbaekjukjeon-daero, Giheung-gu, Yongin-si, Gyeonggi-do, 16995, Republic of Korea
- Hyun Joo Shin
- Department of Radiology, Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yongin Severance Hospital, Yonsei University College of Medicine, 363, Dongbaekjukjeon-daero, Giheung-gu, Yongin-si, Gyeonggi-do, 16995, Republic of Korea
- Center for Digital Health, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin-si, Gyeonggi-do, Republic of Korea
- Eun-Kyung Kim
- Department of Radiology, Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yongin Severance Hospital, Yonsei University College of Medicine, 363, Dongbaekjukjeon-daero, Giheung-gu, Yongin-si, Gyeonggi-do, 16995, Republic of Korea
- Eun Hye Lee
- Center for Digital Health, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin-si, Gyeonggi-do, Republic of Korea
- Division of Pulmonology, Allergy and Critical Care Medicine, Department of Internal Medicine, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin-si, Gyeonggi-do, Republic of Korea
- Minwook Lee
- Department of Radiology, Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yongin Severance Hospital, Yonsei University College of Medicine, 363, Dongbaekjukjeon-daero, Giheung-gu, Yongin-si, Gyeonggi-do, 16995, Republic of Korea.
|
19
|
Maiter A, Hocking K, Matthews S, Taylor J, Sharkey M, Metherall P, Alabed S, Dwivedi K, Shahin Y, Anderson E, Holt S, Rowbotham C, Kamil MA, Hoggard N, Balasubramanian SP, Swift A, Johns CS. Evaluating the performance of artificial intelligence software for lung nodule detection on chest radiographs in a retrospective real-world UK population. BMJ Open 2023; 13:e077348. [PMID: 37940155 PMCID: PMC10632826 DOI: 10.1136/bmjopen-2023-077348] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/07/2023] [Accepted: 10/16/2023] [Indexed: 11/10/2023] Open
Abstract
OBJECTIVES Early identification of lung cancer on chest radiographs improves patient outcomes. Artificial intelligence (AI) tools may increase diagnostic accuracy and streamline this pathway. This study evaluated the performance of commercially available AI-based software trained to identify cancerous lung nodules on chest radiographs. DESIGN This retrospective study included primary care chest radiographs acquired in a UK centre. The software evaluated each radiograph independently and outputs were compared with two reference standards: (1) the radiologist report and (2) the diagnosis of cancer by multidisciplinary team decision. Failure analysis was performed by interrogating the software marker locations on radiographs. PARTICIPANTS 5722 consecutive chest radiographs were included from 5592 patients (median age 59 years, 53.8% women, 1.6% prevalence of cancer). RESULTS Compared with radiologist reports for nodule detection, the software demonstrated sensitivity 54.5% (95% CI 44.2% to 64.4%), specificity 83.2% (82.2% to 84.1%), positive predictive value (PPV) 5.5% (4.6% to 6.6%) and negative predictive value (NPV) 99.0% (98.8% to 99.2%). Compared with cancer diagnosis, the software demonstrated sensitivity 60.9% (50.1% to 70.9%), specificity 83.3% (82.3% to 84.2%), PPV 5.6% (4.8% to 6.6%) and NPV 99.2% (99.0% to 99.4%). Normal or variant anatomy was misidentified as an abnormality in 69.9% of the 943 false positive cases. CONCLUSIONS The software demonstrated considerable underperformance in this real-world patient cohort. Failure analysis suggested a lack of generalisability in the training and testing datasets as a potential factor. The low PPV carries the risk of over-investigation and limits the translation of the software to clinical practice. Our findings highlight the importance of training and testing software in representative datasets, with broader implications for the implementation of AI tools in imaging.
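For reference, the four metrics reported in the abstract above follow directly from confusion-matrix counts. A minimal sketch (with hypothetical counts chosen to mimic a low-prevalence screening setting, not the study's data) illustrates why low prevalence keeps the PPV low even at reasonable specificity:

```python
# Illustrative sketch of screening metrics (sensitivity, specificity,
# PPV, NPV) computed from confusion-matrix counts. The counts below
# are hypothetical and are not the study's data.

def screening_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),  # fraction of true nodules detected
        "specificity": tn / (tn + fp),  # fraction of normals correctly cleared
        "ppv": tp / (tp + fp),          # chance a flagged film has a true nodule
        "npv": tn / (tn + fn),          # chance a cleared film is truly clear
    }

# At ~1.6% prevalence, even fair specificity produces far more false
# positives than true positives, so the PPV stays low.
m = screening_metrics(tp=55, fp=945, fn=45, tn=4677)
print({k: round(v, 3) for k, v in m.items()})
```

With these toy counts the sensitivity is 55/100 = 0.55 and the PPV only 55/1000 = 0.055, while the NPV remains above 0.99, mirroring the pattern of figures reported in the study.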
Collapse
Affiliation(s)
- Ahmed Maiter
- School of Medicine and Population Health, The University of Sheffield, Sheffield, UK
- Radiology, Sheffield Teaching Hospitals NHS Foundation Trust, Sheffield, UK
- Katherine Hocking
- Radiology, Sheffield Teaching Hospitals NHS Foundation Trust, Sheffield, UK
- Suzanne Matthews
- Radiology, Sheffield Teaching Hospitals NHS Foundation Trust, Sheffield, UK
- Medical Imaging and Medical Physics, Sheffield Teaching Hospitals NHS Foundation Trust, Sheffield, UK
- Jonathan Taylor
- Medical Imaging and Medical Physics, Sheffield Teaching Hospitals NHS Foundation Trust, Sheffield, UK
- Michael Sharkey
- Medical Imaging and Medical Physics, Sheffield Teaching Hospitals NHS Foundation Trust, Sheffield, UK
- Peter Metherall
- Medical Imaging and Medical Physics, Sheffield Teaching Hospitals NHS Foundation Trust, Sheffield, UK
- Samer Alabed
- School of Medicine and Population Health, The University of Sheffield, Sheffield, UK
- Radiology, Sheffield Teaching Hospitals NHS Foundation Trust, Sheffield, UK
- Krit Dwivedi
- School of Medicine and Population Health, The University of Sheffield, Sheffield, UK
- Radiology, Sheffield Teaching Hospitals NHS Foundation Trust, Sheffield, UK
- Yousef Shahin
- School of Medicine and Population Health, The University of Sheffield, Sheffield, UK
- Radiology, Sheffield Teaching Hospitals NHS Foundation Trust, Sheffield, UK
- Elizabeth Anderson
- Radiology, Sheffield Teaching Hospitals NHS Foundation Trust, Sheffield, UK
- Sarah Holt
- Radiology, Sheffield Teaching Hospitals NHS Foundation Trust, Sheffield, UK
- Mohamed A Kamil
- Radiology, Sheffield Teaching Hospitals NHS Foundation Trust, Sheffield, UK
- Nigel Hoggard
- School of Medicine and Population Health, The University of Sheffield, Sheffield, UK
- Radiology, Sheffield Teaching Hospitals NHS Foundation Trust, Sheffield, UK
- NIHR Sheffield Biomedical Research Centre, Sheffield, UK
- Saba P Balasubramanian
- Medical Imaging and Medical Physics, Sheffield Teaching Hospitals NHS Foundation Trust, Sheffield, UK
- Surgical Directorate, Sheffield Teaching Hospitals NHS Foundation Trust, Sheffield, UK
- Andrew Swift
- School of Medicine and Population Health, The University of Sheffield, Sheffield, UK
- Radiology, Sheffield Teaching Hospitals NHS Foundation Trust, Sheffield, UK
- NIHR Sheffield Biomedical Research Centre, Sheffield, UK
20
Chassagnon G, Billet N, Rutten C, Toussaint T, Cassius de Linval Q, Collin M, Lemouchi L, Homps M, Hedjoudje M, Ventre J, Gregory J, Canniff E, Regnard NE, Bennani S, Revel MP. Learning from the machine: AI assistance is not an effective learning tool for resident education in chest x-ray interpretation. Eur Radiol 2023; 33:8241-8250. [PMID: 37572190 DOI: 10.1007/s00330-023-10043-1] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2023] [Revised: 05/29/2023] [Accepted: 06/20/2023] [Indexed: 08/14/2023]
Abstract
OBJECTIVES To assess whether a computer-aided detection (CADe) system could serve as a learning tool for radiology residents in chest X-ray (CXR) interpretation. METHODS Eight radiology residents were asked to interpret 500 CXRs for the detection of five abnormalities, namely pneumothorax, pleural effusion, alveolar syndrome, lung nodule, and mediastinal mass. After interpreting 150 CXRs, the residents were divided into 2 groups of equivalent performance and experience. Subsequently, group 1 interpreted 200 CXRs from the "intervention dataset" using a CADe as a second reader, while group 2 served as a control by interpreting the same CXRs without the use of CADe. Finally, the 2 groups interpreted another 150 CXRs without the use of CADe. The sensitivity, specificity, and accuracy before, during, and after the intervention were compared. RESULTS Before the intervention, the median individual sensitivity, specificity, and accuracy of the eight radiology residents were 43% (range: 35-57%), 90% (range: 82-96%), and 81% (range: 76-84%), respectively. With the use of CADe, residents from group 1 had a significantly higher overall sensitivity (53% [n = 431/816] vs 43% [n = 349/816], p < 0.001), specificity (94% [n = 3206/3428] vs 90% [n = 3127/3477], p < 0.001), and accuracy (86% [n = 3637/4244] vs 81% [n = 3476/4293], p < 0.001), compared to the control group. After the intervention, there were no significant differences between group 1 and group 2 regarding the overall sensitivity (44% [n = 309/696] vs 46% [n = 317/696], p = 0.666), specificity (90% [n = 2294/2541] vs 90% [n = 2285/2542], p = 0.642), or accuracy (80% [n = 2603/3237] vs 80% [n = 2602/3238], p = 0.955). CONCLUSIONS Although it improved radiology residents' performance in interpreting CXRs, the CADe system alone did not appear to be an effective learning tool and should not replace teaching.
CLINICAL RELEVANCE STATEMENT Although the use of artificial intelligence improves radiology residents' performance in chest X-ray interpretation, artificial intelligence cannot be used alone as a learning tool and should not replace dedicated teaching. KEY POINTS • With CADe as a second reader, residents had a significantly higher sensitivity (53% vs 43%, p < 0.001), specificity (94% vs 90%, p < 0.001), and accuracy (86% vs 81%, p < 0.001), compared to residents without CADe. • After removing access to the CADe system, residents' sensitivity (44% vs 46%, p = 0.666), specificity (90% vs 90%, p = 0.642), and accuracy (80% vs 80%, p = 0.955) returned to the level of the group without CADe.
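The group comparisons above (e.g. overall sensitivity 431/816 vs 349/816, p < 0.001) are consistent with a standard two-proportion z-test. A stdlib-only sketch, not the authors' actual analysis code:

```python
import math

def two_proportion_z(k1, n1, k2, n2):
    """Two-sided two-proportion z-test with pooled standard error."""
    p1, p2 = k1 / n1, k2 / n2
    p = (k1 + k2) / (n1 + n2)                    # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))   # two-sided normal tail
    return z, p_value

# Overall sensitivity with CADe (group 1) vs without (control), from the abstract:
z, p = two_proportion_z(431, 816, 349, 816)
print(round(z, 2), p < 0.001)  # → 4.06 True
```

The resulting p-value is well below 0.001, matching the reported significance level for the sensitivity difference during the intervention.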
Affiliation(s)
- Guillaume Chassagnon
- Radiology Department, Hôpital Cochin, AP-HP, 27 Rue du Faubourg Saint-Jacques, 75014, Paris, France
- Université de Paris, 27 Rue du Faubourg Saint-Jacques, 85 Boulevard Saint-Germain, 75006, Paris, France
- Nicolas Billet
- Radiology Department, Hôpital Cochin, AP-HP, 27 Rue du Faubourg Saint-Jacques, 75014, Paris, France
- Caroline Rutten
- Radiology Department, Hôpital Cochin, AP-HP, 27 Rue du Faubourg Saint-Jacques, 75014, Paris, France
- Thibault Toussaint
- Radiology Department, Hôpital Cochin, AP-HP, 27 Rue du Faubourg Saint-Jacques, 75014, Paris, France
- Mégane Collin
- Radiology Department, Hôpital Cochin, AP-HP, 27 Rue du Faubourg Saint-Jacques, 75014, Paris, France
- Leila Lemouchi
- Radiology Department, Hôpital Cochin, AP-HP, 27 Rue du Faubourg Saint-Jacques, 75014, Paris, France
- Margaux Homps
- Radiology Department, Hôpital Cochin, AP-HP, 27 Rue du Faubourg Saint-Jacques, 75014, Paris, France
- Mohamed Hedjoudje
- Radiology Department, Hôpital Cochin, AP-HP, 27 Rue du Faubourg Saint-Jacques, 75014, Paris, France
- Jules Gregory
- Université de Paris, 27 Rue du Faubourg Saint-Jacques, 85 Boulevard Saint-Germain, 75006, Paris, France
- Radiology Department, FHU MOSAIC, Hôpital Beaujon, 100 Bd du Général Leclerc, 92110, Clichy, France
- Emma Canniff
- Radiology Department, Hôpital Cochin, AP-HP, 27 Rue du Faubourg Saint-Jacques, 75014, Paris, France
- Nor-Eddine Regnard
- Gleamer, 117 Quai de Valmy, 75010, Paris, France
- Réseau d'Imagerie Sud Francilien, 254 Ter Avenue Henri Barbusse, 91210, Draveil, France
- Souhail Bennani
- Radiology Department, Hôpital Cochin, AP-HP, 27 Rue du Faubourg Saint-Jacques, 75014, Paris, France
- Gleamer, 117 Quai de Valmy, 75010, Paris, France
- Marie-Pierre Revel
- Radiology Department, Hôpital Cochin, AP-HP, 27 Rue du Faubourg Saint-Jacques, 75014, Paris, France
- Université de Paris, 27 Rue du Faubourg Saint-Jacques, 85 Boulevard Saint-Germain, 75006, Paris, France
21
Martin MD, Henry TS, Berry MF, Johnson GB, Kelly AM, Ko JP, Kuzniewski CT, Lee E, Maldonado F, Morris MF, Munden RF, Raptis CA, Shim K, Sirajuddin A, Small W, Tong BC, Wu CC, Donnelly EF. ACR Appropriateness Criteria® Incidentally Detected Indeterminate Pulmonary Nodule. J Am Coll Radiol 2023; 20:S455-S470. [PMID: 38040464 DOI: 10.1016/j.jacr.2023.08.024] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/15/2023] [Accepted: 08/22/2023] [Indexed: 12/03/2023]
Abstract
Incidental pulmonary nodules are common. Although the majority are benign, most are indeterminate for malignancy when first encountered, making their management challenging. CT remains the primary imaging modality to characterize and follow up incidental lung nodules. This document reviews the available literature on various imaging modalities and summarizes the management of indeterminate pulmonary nodules detected incidentally. The American College of Radiology Appropriateness Criteria are evidence-based guidelines for specific clinical conditions that are reviewed annually by a multidisciplinary expert panel. The guideline development and revision process supports the systematic analysis of the medical literature from peer-reviewed journals. Established methodology principles such as Grading of Recommendations Assessment, Development, and Evaluation (GRADE) are adapted to evaluate the evidence. The RAND/UCLA Appropriateness Method User Manual provides the methodology to determine the appropriateness of imaging and treatment procedures for specific clinical scenarios. In those instances where peer-reviewed literature is lacking or equivocal, experts may be the primary evidentiary source available to formulate a recommendation.
Affiliation(s)
- Maria D Martin
- University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin
- Mark F Berry
- Stanford University Medical Center, Stanford, California; Society of Thoracic Surgeons
- Geoffrey B Johnson
- Mayo Clinic, Rochester, Minnesota; Commission on Nuclear Medicine and Molecular Imaging
- Jane P Ko
- New York University Langone Health, New York, New York; IF Committee
- Elizabeth Lee
- University of Michigan Health System, Ann Arbor, Michigan
- Fabien Maldonado
- Vanderbilt University Medical Center, Nashville, Tennessee; American College of Chest Physicians
- Reginald F Munden
- Medical University of South Carolina, Charleston, South Carolina; IF Committee
- Kyungran Shim
- John H. Stroger, Jr. Hospital of Cook County, Chicago, Illinois; American College of Physicians
- William Small
- Loyola University Chicago, Stritch School of Medicine, Department of Radiation Oncology, Cardinal Bernardin Cancer Center, Maywood, Illinois; Commission on Radiation Oncology
- Betty C Tong
- Duke University School of Medicine, Durham, North Carolina; Society of Thoracic Surgeons
- Carol C Wu
- The University of Texas MD Anderson Cancer Center, Houston, Texas
- Edwin F Donnelly
- Specialty Chair, Ohio State University Wexner Medical Center, Columbus, Ohio
22
Lei C, Qu M, Sun H, Huang J, Huang J, Song X, Zhai G, Zhou H. Facial expression of patients with Graves' orbitopathy. J Endocrinol Invest 2023; 46:2055-2066. [PMID: 37005981 DOI: 10.1007/s40618-023-02054-y] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/18/2023] [Accepted: 02/27/2023] [Indexed: 04/04/2023]
Abstract
PURPOSE Patients with Graves' orbitopathy (GO) have characteristic facial expressions that differ from those of healthy individuals due to the combination of somatic and psychiatric symptoms. However, the facial expressions of GO patients have not yet been described and analyzed systematically. Thus, the present study aimed to present the facial expressions of GO patients and explore their applications in clinical practice. METHODS Facial images and clinical data of 943 GO patients were included, and 126 patients answered quality of life (GO-QOL) questionnaires. Each patient was labeled with one facial expression, and a portrait was drawn for every facial expression. Logistic and linear regression were performed to analyze the correlation between facial expression and clinical indicators, including QOL, disease activity, and severity. The VGG-19 network model was utilized to discriminate facial expressions automatically. RESULTS Two groups, i.e., non-negative emotion (neutral, happy) and negative emotion (disgust, anger, fear, sadness, surprise), and seven expressions of GO patients were systematically analyzed. Facial expression was statistically associated with GO activity (P = 0.002), severity (P < 0.001), QOL visual functioning subscale scores (P = 0.001), and QOL appearance subscale scores (P = 0.012). The deep learning model achieved satisfactory results (accuracy 0.851, sensitivity 0.899, precision 0.899, specificity 0.720, F1 score 0.899, and AUC 0.847). CONCLUSIONS As a novel clinical sign, facial expression holds the potential to be incorporated into the GO assessment system in the future. The discrimination model may assist clinicians in real-life patient care.
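The reported F1 score of 0.899 is internally consistent with precision and sensitivity (recall) both being 0.899, since F1 is their harmonic mean. A one-line check (illustrative only, not the authors' code):

```python
def f1_score(precision, recall):
    """F1 is the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Reported deep-learning results: precision 0.899, sensitivity (recall) 0.899
f1 = f1_score(0.899, 0.899)
print(round(f1, 3))  # → 0.899
```

When precision and recall coincide, F1 equals that common value, which is exactly the pattern in the reported metrics.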
Affiliation(s)
- C Lei
- Department of Ophthalmology, Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Shanghai Key Laboratory of Orbital Diseases and Ocular Oncology, Shanghai, China
- M Qu
- Department of Ophthalmology, Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Shanghai Key Laboratory of Orbital Diseases and Ocular Oncology, Shanghai, China
- H Sun
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- J Huang
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- J Huang
- Department of Ophthalmology, Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Shanghai Key Laboratory of Orbital Diseases and Ocular Oncology, Shanghai, China
- X Song
- Department of Ophthalmology, Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Shanghai Key Laboratory of Orbital Diseases and Ocular Oncology, Shanghai, China
- G Zhai
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- H Zhou
- Department of Ophthalmology, Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Shanghai Key Laboratory of Orbital Diseases and Ocular Oncology, Shanghai, China
23
Manzano C, Fuentes-Martín Á, Zuil M, Gil Barturen M, González J, Cilleruelo-Ramos Á. [Questions and Answers in Lung Cancer]. OPEN RESPIRATORY ARCHIVES 2023; 5:100264. [PMID: 37727151 PMCID: PMC10505677 DOI: 10.1016/j.opresp.2023.100264] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/26/2023] [Accepted: 08/08/2023] [Indexed: 09/21/2023] Open
Abstract
Over the past 2 decades, scientific evidence has strongly supported the use of low-radiation-dose chest computed tomography (CT) as a screening technique for lung cancer. This approach has resulted in a significant reduction in mortality rates by enabling the detection of early-stage lung cancer amenable to potentially curative treatments. Regarding diagnosis, there are also novel methods under study, such as liquid biopsy, identification of the pulmonary microbiome, and the use of artificial intelligence techniques, which will play a key role in the near future. At present, there is a growing trend towards less invasive surgical procedures, such as segmentectomy, as an alternative to lobectomy. This approach is supported by 2 recent clinical trials conducted on peripheral tumors measuring less than 2 cm. Although these approaches have demonstrated comparable survival rates, there remains controversy due to uncertainties surrounding recurrence rates and functional capacity preservation. With regard to adjuvant therapy, immunotherapy, either as a monotherapy or in conjunction with chemotherapy, has shown encouraging results in resectable stages of locally advanced lung cancer, demonstrating complete pathologic responses and improved overall survival. After surgical treatment, despite the lack of solid evidence for long-term follow-up of these patients, clinical practice recommends periodic CT scans during the early years. In conclusion, there have been significant advances in lung cancer that have improved diagnostic techniques using new technologies and screening programs. Furthermore, the treatment of lung cancer is increasingly personalized, resulting in improved patient survival.
Affiliation(s)
- Carlos Manzano
- Translational Research in Respiratory Medicine, University Hospital Arnau de Vilanova and Santa Maria, IRBLleida, Lérida, España
- Álvaro Fuentes-Martín
- Servicio de Cirugía Torácica, Hospital Clínico Universitario de Valladolid, Universidad de Valladolid, Valladolid, España
- Maria Zuil
- Translational Research in Respiratory Medicine, University Hospital Arnau de Vilanova and Santa Maria, IRBLleida, Lérida, España
- Mariana Gil Barturen
- Servicio de Cirugía Torácica, Hospital Universitario Puerta de Hierro, Majadahonda (Madrid), España
- Jessica González
- Translational Research in Respiratory Medicine, University Hospital Arnau de Vilanova and Santa Maria, IRBLleida, Lérida, España
- CIBER of Respiratory Diseases (CIBERES), Institute of Health Carlos III, Madrid, España
- Ángel Cilleruelo-Ramos
- Servicio de Cirugía Torácica, Hospital Clínico Universitario de Valladolid, Universidad de Valladolid, Valladolid, España
24
Behrendt F, Bengs M, Bhattacharya D, Krüger J, Opfer R, Schlaefer A. A systematic approach to deep learning-based nodule detection in chest radiographs. Sci Rep 2023; 13:10120. [PMID: 37344565 DOI: 10.1038/s41598-023-37270-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2022] [Accepted: 06/19/2023] [Indexed: 06/23/2023] Open
Abstract
Lung cancer is a serious disease responsible for millions of deaths every year. Early stages of lung cancer can manifest as pulmonary nodules. To assist radiologists in reducing the number of overlooked nodules and to increase detection accuracy in general, automatic detection algorithms have been proposed. Deep learning methods in particular are promising. However, obtaining clinically relevant results remains challenging. While a variety of approaches have been proposed for general-purpose object detection, these are typically evaluated on benchmark data sets. Achieving competitive performance for specific real-world problems like lung nodule detection typically requires careful analysis of the problem at hand and the selection and tuning of suitable deep learning models. We present a systematic comparison of state-of-the-art object detection algorithms for the task of lung nodule detection. In this regard, we address the critical aspect of class imbalance and demonstrate a data augmentation approach as well as transfer learning to boost performance. We illustrate how this analysis and a combination of multiple architectures result in state-of-the-art performance for lung nodule detection, which is demonstrated by the proposed model winning the detection track of the NODE21 competition. The code for our approach is available at https://github.com/FinnBehrendt/node21-submit.
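The abstract does not give implementation details, but one common way to address the class imbalance it mentions is inverse-frequency weighted sampling, so that minority-class (nodule) examples are drawn as often as majority-class ones. A minimal stdlib sketch with made-up labels, purely illustrative:

```python
from collections import Counter
import random

def balanced_weights(labels):
    """Inverse-frequency sample weights so each class contributes equally per epoch."""
    counts = Counter(labels)
    return [1.0 / counts[y] for y in labels]

# Toy imbalanced label list (illustrative only): 90 negatives, 10 positives
labels = [0] * 90 + [1] * 10
weights = balanced_weights(labels)

random.seed(0)
draws = random.choices(range(len(labels)), weights=weights, k=1000)
pos_share = sum(labels[i] for i in draws) / 1000
print(0.4 < pos_share < 0.6)  # → True (positives now ~50% of sampled batches)
```

In a training framework the same weights would typically feed a weighted random sampler rather than `random.choices`.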
Affiliation(s)
- Finn Behrendt
- Institute of Medical Technology and Intelligent Systems, Hamburg University of Technology, 21073, Hamburg, Germany
- Marcel Bengs
- Institute of Medical Technology and Intelligent Systems, Hamburg University of Technology, 21073, Hamburg, Germany
- Debayan Bhattacharya
- Institute of Medical Technology and Intelligent Systems, Hamburg University of Technology, 21073, Hamburg, Germany
- Alexander Schlaefer
- Institute of Medical Technology and Intelligent Systems, Hamburg University of Technology, 21073, Hamburg, Germany
25
Kim C, Yang Z, Park SH, Hwang SH, Oh YW, Kang EY, Yong HS. Multicentre external validation of a commercial artificial intelligence software to analyse chest radiographs in health screening environments with low disease prevalence. Eur Radiol 2023; 33:3501-3509. [PMID: 36624227 DOI: 10.1007/s00330-022-09315-z] [Citation(s) in RCA: 9] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2022] [Revised: 10/13/2022] [Accepted: 11/22/2022] [Indexed: 01/11/2023]
Abstract
OBJECTIVES To externally validate the performance of a commercial AI software program for interpreting CXRs in a large, consecutive, real-world cohort from primary healthcare centres. METHODS A total of 3047 CXRs were collected from two primary healthcare centres, characterised by low disease prevalence, between January and December 2018. All CXRs were labelled as normal or abnormal according to CT findings. Four radiology residents read all CXRs twice, with and without AI assistance. The performances of the AI and of the readers with and without AI assistance were measured in terms of area under the receiver operating characteristic curve (AUROC), sensitivity, and specificity. RESULTS The prevalence of clinically significant lesions was 2.2% (68 of 3047). The AUROC, sensitivity, and specificity of the AI were 0.648 (95% confidence interval [CI] 0.630-0.665), 35.3% (CI, 24.7-47.8), and 94.2% (CI, 93.3-95.0), respectively. The AI detected 12 of 41 pneumonias, 3 of 5 tuberculosis cases, and 9 of 22 tumours. AI-undetected lesions tended to be smaller than true-positive lesions. The readers' AUROCs ranged from 0.534 to 0.676 without AI and from 0.571 to 0.688 with AI (all p values < 0.05). For all readers, the mean reading time was 2.96-10.27 s longer with AI assistance (all p values < 0.05). CONCLUSIONS The performance of commercial AI in these high-volume, low-prevalence settings was poorer than expected, although it modestly boosted the performance of less-experienced readers. The technical prowess of AI demonstrated in experimental settings and approved by regulatory bodies may not directly translate to real-world practice, especially where the demand for AI assistance is highest. KEY POINTS • This study shows the limited applicability of commercial AI software for detecting abnormalities in CXRs in a health screening population. • When using AI software in a specific clinical setting that differs from the training setting, it is necessary to adjust the threshold or perform additional training with data that reflect this environment well. • Prospective test accuracy studies, randomised controlled trials, or cohort studies are needed to examine AI software before it is implemented in real clinical practice.
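The AUROC values quoted above can be read as the probability that a randomly chosen abnormal CXR receives a higher AI score than a randomly chosen normal one (the Mann-Whitney interpretation). A minimal rank-free computation on toy scores, not the study's data:

```python
def auroc(scores_pos, scores_neg):
    """AUROC as the probability a positive outscores a negative (ties count 1/2)."""
    wins = sum((p > n) + 0.5 * (p == n) for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Toy scores (illustrative only): three abnormal and three normal CXRs
a = auroc([0.9, 0.8, 0.4], [0.7, 0.3, 0.2])
print(round(a, 3))  # → 0.889
```

An AUROC of 0.648, as reported for the AI here, means the model ranks an abnormal image above a normal one only about 65% of the time, barely above the chance level of 0.5.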
Affiliation(s)
- Cherry Kim
- Department of Radiology, Ansan Hospital, Korea University College of Medicine, 123, Jeokgeum-ro, Danwon-gu, Ansan-si, Gyeonggi, 15355, South Korea
- Zepa Yang
- Biomedical Research Center, Guro Hospital, Korea University College of Medicine, Seoul, 08308, South Korea
- Seong Ho Park
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, 05505, South Korea
- Sung Ho Hwang
- Department of Radiology, Anam Hospital, Korea University College of Medicine, Seoul, 02841, South Korea
- Yu-Whan Oh
- Department of Radiology, Anam Hospital, Korea University College of Medicine, Seoul, 02841, South Korea
- Eun-Young Kang
- Department of Radiology, Guro Hospital, Korea University College of Medicine, 33-41, Gurodong-ro 28-gil, Guro-gu, Seoul, 08308, South Korea
- Hwan Seok Yong
- Department of Radiology, Guro Hospital, Korea University College of Medicine, 33-41, Gurodong-ro 28-gil, Guro-gu, Seoul, 08308, South Korea
26
Vasilev Y, Vladzymyrskyy A, Omelyanskaya O, Blokhin I, Kirpichev Y, Arzamasov K. AI-Based CXR First Reading: Current Limitations to Ensure Practical Value. Diagnostics (Basel) 2023; 13:diagnostics13081430. [PMID: 37189531 DOI: 10.3390/diagnostics13081430] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2023] [Revised: 04/04/2023] [Accepted: 04/13/2023] [Indexed: 05/17/2023] Open
Abstract
We performed a multicenter external evaluation of the practical and clinical efficacy of a commercial AI algorithm for chest X-ray (CXR) analysis (Lunit INSIGHT CXR). A retrospective evaluation was performed with a multi-reader study. For a prospective evaluation, the AI model was run on CXR studies and the results were compared to the reports of 226 radiologists. In the multi-reader study, the area under the curve (AUC), sensitivity, and specificity of the AI were 0.94 (CI95%: 0.87-1.0), 0.9 (CI95%: 0.79-1.0), and 0.89 (CI95%: 0.79-0.98); the AUC, sensitivity, and specificity of the radiologists were 0.97 (CI95%: 0.94-1.0), 0.9 (CI95%: 0.79-1.0), and 0.95 (CI95%: 0.89-1.0). In most regions of the ROC curve, the AI performed slightly worse than, or at the same level as, an average human reader. The McNemar test showed no statistically significant differences between the AI and the radiologists. In the prospective study with 4752 cases, the AUC, sensitivity, and specificity of the AI were 0.84 (CI95%: 0.82-0.86), 0.77 (CI95%: 0.73-0.80), and 0.81 (CI95%: 0.80-0.82). The lower accuracy values obtained during the prospective validation were mainly associated with false-positive findings considered by experts to be clinically insignificant and with the false-negative omission of human-reported "opacity", "nodule", and "calcification". In this large-scale prospective validation of the commercial AI algorithm in clinical practice, lower sensitivity and specificity values were obtained compared to the prior retrospective evaluation of data from the same population.
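The McNemar test mentioned above compares two paired classifiers using only their discordant cases (cases where exactly one of the two is correct). A stdlib sketch of the exact version, run on hypothetical discordant counts since the abstract does not report the actual cells:

```python
from math import comb

def mcnemar_exact(b, c):
    """Exact two-sided McNemar test on discordant pair counts b and c."""
    n = b + c
    k = min(b, c)
    # Two-sided binomial tail under H0: discordances split 50/50
    p = 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, p)

# Hypothetical counts (NOT from the paper): AI-only-correct vs reader-only-correct
p_val = mcnemar_exact(6, 10)
print(round(p_val, 3))  # → 0.454
```

A p-value this far above 0.05 would, as in the multi-reader study, give no evidence of a performance difference between the paired readers.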
Affiliation(s)
- Yuriy Vasilev
- State Budget-Funded Health Care Institution of the City of Moscow "Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies of the Moscow Health Care Department", Petrovka Street, 24, Building 1, 127051 Moscow, Russia
- Anton Vladzymyrskyy
- State Budget-Funded Health Care Institution of the City of Moscow "Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies of the Moscow Health Care Department", Petrovka Street, 24, Building 1, 127051 Moscow, Russia
- Department of Information and Internet Technologies, I.M. Sechenov First Moscow State Medical University of the Ministry of Health of the Russian Federation (Sechenov University), Trubetskaya Street, 8, Building 2, 119991 Moscow, Russia
- Olga Omelyanskaya
- State Budget-Funded Health Care Institution of the City of Moscow "Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies of the Moscow Health Care Department", Petrovka Street, 24, Building 1, 127051 Moscow, Russia
- Ivan Blokhin
- State Budget-Funded Health Care Institution of the City of Moscow "Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies of the Moscow Health Care Department", Petrovka Street, 24, Building 1, 127051 Moscow, Russia
- Yury Kirpichev
- State Budget-Funded Health Care Institution of the City of Moscow "Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies of the Moscow Health Care Department", Petrovka Street, 24, Building 1, 127051 Moscow, Russia
- Kirill Arzamasov
- State Budget-Funded Health Care Institution of the City of Moscow "Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies of the Moscow Health Care Department", Petrovka Street, 24, Building 1, 127051 Moscow, Russia
27
Li X, Du M, Zuo S, Zhou M, Peng Q, Chen Z, Zhou J, He Q. Deep convolutional neural networks using an active learning strategy for cervical cancer screening and diagnosis. FRONTIERS IN BIOINFORMATICS 2023; 3:1101667. [PMID: 36969799 PMCID: PMC10034408 DOI: 10.3389/fbinf.2023.1101667] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2022] [Accepted: 02/13/2023] [Indexed: 03/12/2023] Open
Abstract
Cervical cancer (CC) is the fourth most common malignant tumor among women worldwide. Constructing a high-accuracy deep convolutional neural network (DCNN) for cervical cancer screening and diagnosis is important for the successful prevention of cervical cancer. In this work, we proposed a robust DCNN for cervical cancer screening using whole-slide images (WSI) of ThinPrep cytologic test (TCT) slides from 211 cervical cancer patients and 189 normal patients. We used an active learning strategy to improve the efficiency and accuracy of image labeling. The sensitivity, specificity, and accuracy of the best model were 96.21%, 98.95%, and 97.5%, respectively, for CC patient identification. Our results also demonstrated that the active learning strategy was superior to the traditional supervised learning strategy in reducing cost and improving image labeling quality. The related data and source code are freely available at https://github.com/hqyone/cancer_rcnn.
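The abstract does not specify the query strategy, but uncertainty sampling is a common active-learning choice: prioritize for labeling the slides whose predicted probability is closest to 0.5. A toy sketch with a hypothetical stand-in for the model's scorer:

```python
def uncertainty_sample(candidates, predict_proba, batch_size=8):
    """Select the candidates whose predicted probability is closest to 0.5."""
    return sorted(candidates, key=lambda x: abs(predict_proba(x) - 0.5))[:batch_size]

# Hypothetical per-image malignancy probabilities (NOT the paper's model output):
proba = {i: i / 100 for i in range(100)}
batch = uncertainty_sample(list(proba), proba.get, batch_size=8)
print(sorted(batch))  # → [46, 47, 48, 49, 50, 51, 52, 53]
```

Labeling effort is then concentrated on the most ambiguous images, which is what makes active learning cheaper than labeling the whole pool.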
Affiliation(s)
- Junhua Zhou
- Quanyuan He
- *Correspondence: Quanyuan He; Junhua Zhou
28
Niehoff JH, Kalaitzidis J, Kroeger JR, Schoenbeck D, Borggrefe J, Michael AE. Evaluation of the clinical performance of an AI-based application for the automated analysis of chest X-rays. Sci Rep 2023; 13:3680. [PMID: 36872333 PMCID: PMC9985819 DOI: 10.1038/s41598-023-30521-2] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2022] [Accepted: 02/24/2023] [Indexed: 03/07/2023] Open
Abstract
The AI-Rad Companion Chest X-ray (AI-Rad, Siemens Healthineers) is an artificial-intelligence-based application for the analysis of chest X-rays. The purpose of the present study is to evaluate the performance of the AI-Rad. In total, 499 radiographs were retrospectively included. Radiographs were independently evaluated by radiologists and the AI-Rad. Findings indicated by the AI-Rad and findings described in the written report (WR) were compared to the findings of a ground-truth reading (consensus decision of two radiologists after assessing additional radiographs and CT scans). The AI-Rad can offer superior sensitivity for the detection of lung lesions (0.83 versus 0.52), consolidations (0.88 versus 0.78), and atelectasis (0.54 versus 0.43) compared to the WR. However, the superior sensitivity is accompanied by higher false-detection rates. The sensitivity of the AI-Rad for the detection of pleural effusions is lower compared to the WR (0.74 versus 0.88). The negative predictive values (NPVs) of the AI-Rad for the detection of all pre-defined findings are high and comparable to the WR. The seemingly advantageous high sensitivity of the AI-Rad is partially offset by the disadvantage of a high false-detection rate. At the current stage of development, therefore, the high NPVs may be the greatest benefit of the AI-Rad, giving radiologists the possibility to double-check their own negative search for pathologies and thus boosting confidence in their reports.
Affiliation(s)
- Julius Henning Niehoff
- Department of Radiology, Neuroradiology and Nuclear Medicine, Johannes Wesling University Hospital, Ruhr University Bochum, Bochum, Germany
- Jana Kalaitzidis
- Department of Radiology, Neuroradiology and Nuclear Medicine, Johannes Wesling University Hospital, Ruhr University Bochum, Bochum, Germany
- Jan Robert Kroeger
- Department of Radiology, Neuroradiology and Nuclear Medicine, Johannes Wesling University Hospital, Ruhr University Bochum, Bochum, Germany
- Denise Schoenbeck
- Department of Radiology, Neuroradiology and Nuclear Medicine, Johannes Wesling University Hospital, Ruhr University Bochum, Bochum, Germany
- Jan Borggrefe
- Department of Radiology, Neuroradiology and Nuclear Medicine, Johannes Wesling University Hospital, Ruhr University Bochum, Bochum, Germany
- Arwed Elias Michael
- Department of Radiology, Neuroradiology and Nuclear Medicine, Johannes Wesling University Hospital, Ruhr University Bochum, Bochum, Germany
29
Implementation of artificial intelligence in thoracic imaging-a what, how, and why guide from the European Society of Thoracic Imaging (ESTI). Eur Radiol 2023:10.1007/s00330-023-09409-2. [PMID: 36729173 PMCID: PMC9892666 DOI: 10.1007/s00330-023-09409-2] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2022] [Revised: 11/29/2022] [Accepted: 12/27/2022] [Indexed: 02/03/2023]
Abstract
This statement from the European Society of Thoracic Imaging (ESTI) explains and summarises the essentials for understanding and implementing artificial intelligence (AI) in clinical practice in thoracic radiology departments. The document discusses the current scientific evidence for AI in thoracic imaging, its potential clinical utility, implementation and costs, training requirements and validation, its effect on the training of new radiologists, post-implementation issues, and medico-legal and ethical issues. All of these issues must be addressed and overcome for AI to be implemented clinically in thoracic radiology. KEY POINTS: • Assessing the datasets used for training and validation of an AI system is essential. • A departmental strategy and business plan that includes continuing quality assurance of the AI system and a sustainable financial plan is important for successful implementation. • Awareness of the negative effect on the training of new radiologists is vital.
30
Huang S, Yang J, Shen N, Xu Q, Zhao Q. Artificial intelligence in lung cancer diagnosis and prognosis: Current application and future perspective. Semin Cancer Biol 2023; 89:30-37. [PMID: 36682439 DOI: 10.1016/j.semcancer.2023.01.006] [Citation(s) in RCA: 33] [Impact Index Per Article: 33.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2022] [Revised: 01/18/2023] [Accepted: 01/18/2023] [Indexed: 01/22/2023]
Abstract
Lung cancer is one of the malignant tumors with the highest incidence and mortality in the world, and its overall five-year survival rate is lower than that of many other leading cancers. Early diagnosis and prognosis of lung cancer are essential to improving patient survival. With artificial intelligence (AI) approaches now widely applied to lung cancer, early diagnosis and prediction have achieved excellent performance in recent years. This review summarizes the application of various types of AI algorithms in lung cancer, including natural language processing (NLP), machine learning and deep learning, and reinforcement learning. In addition, we provide evidence regarding the application of AI in lung cancer diagnosis and clinical prognosis. This review aims to elucidate the value of AI in lung cancer diagnosis and prognosis as a novel aid to screening and decision-making for the precise treatment of lung cancer patients.
Affiliation(s)
- Shigao Huang
  Department of Radiation Oncology, The First Affiliated Hospital, Air Force Medical University, Xi'an, Shaanxi, China
- Jie Yang
  Chongqing Industry & Trade Polytechnic, Chongqing, China
- Na Shen
  Hong Kong Shue Yan University, Hong Kong, China
- Qingsong Xu
  Faculty of Science and Technology, University of Macau, Taipa, Macau SAR, China
- Qi Zhao
  Cancer Center, Institute of Translational Medicine, Faculty of Health Sciences, University of Macau, Taipa, Macau SAR, China
  MoE Frontiers Science Center for Precision Oncology, University of Macau, Taipa, Macau SAR, China
31
Kim H, Lee KH, Han K, Lee JW, Kim JY, Im DJ, Hong YJ, Choi BW, Hur J. Development and Validation of a Deep Learning-Based Synthetic Bone-Suppressed Model for Pulmonary Nodule Detection in Chest Radiographs. JAMA Netw Open 2023; 6:e2253820. [PMID: 36719681 PMCID: PMC9890286 DOI: 10.1001/jamanetworkopen.2022.53820] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/24/2022] [Accepted: 12/01/2022] [Indexed: 02/01/2023] Open
Abstract
Importance Dual-energy chest radiography exhibits better sensitivity than single-energy chest radiography, partly due to its ability to remove overlying anatomical structures. Objectives To develop and validate a deep learning-based synthetic bone-suppressed (DLBS) nodule-detection algorithm for pulmonary nodule detection on chest radiographs. Design, Setting, and Participants This decision analytical modeling study used data from 3 centers collected between November 2015 and July 2019 from 1449 patients. The DLBS nodule-detection algorithm was trained using single-center data (institute 1) of 998 chest radiographs and validated using 2 external data sets (institute 2, 246 patients; institute 3, 205 patients). Statistical analysis was performed from March to December 2021. Exposures DLBS nodule-detection algorithm. Main Outcomes and Measures The nodule-detection performance of the DLBS model was compared with that of a convolutional neural network nodule-detection algorithm (original model). Reader performance testing was conducted with 3 thoracic radiologists, with and without assistance from the DLBS algorithm. Sensitivity and false-positive markings per image (FPPI) were compared. Results Training data consisted of 998 patients (539 men [54.0%]; mean [SD] age, 54.2 [9.82] years), and the 2 external validation data sets consisted of 246 patients (133 men [54.1%]; mean [SD] age, 55.3 [8.7] years) and 205 patients (105 men [51.2%]; mean [SD] age, 51.8 [9.1] years). On the external validation data set of institute 2, the bone-suppressed model showed higher sensitivity for nodule detection than the original model (91.5% [109 of 119] vs 79.8% [95 of 119]; P < .001), and its overall mean FPPI was reduced (0.07 [17 of 246] vs 0.09 [23 of 246]; P < .001). In the observer performance testing with the data of institute 3, the mean sensitivity of the 3 radiologists was 77.5% (95% CI, 69.9%-85.2%), whereas that of the radiologists assisted by the DLBS model was 92.1% (95% CI, 86.3%-97.3%; P < .001). The 3 radiologists also had fewer FPPI when assisted by the DLBS model (0.071 [95% CI, 0.041-0.111] vs 0.151 [95% CI, 0.111-0.210]; P < .001). Conclusions and Relevance This decision analytical modeling study found that the DLBS model was more sensitive in detecting pulmonary nodules on chest radiographs than the original model. These findings suggest that the DLBS model could help radiologists detect lung nodules on chest radiographs without the need for specialized equipment or an increased radiation dose.
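The two headline metrics in this abstract, per-nodule sensitivity and false-positive markings per image (FPPI), are simple ratios. The sketch below recomputes them from the counts the abstract itself reports (e.g. 109 of 119 nodules detected, 17 false marks over 246 images); it is an illustration of the metric definitions, not the study's code:

```python
# Recomputing the abstract's two headline metrics from its reported counts.

def sensitivity(detected: int, total: int) -> float:
    """Fraction of true nodules the reader detected."""
    return detected / total

def fppi(false_marks: int, n_images: int) -> float:
    """False-positive markings per image across the test set."""
    return false_marks / n_images

# Bone-suppressed model at institute 2: 109/119 nodules, 17 false marks / 246 images.
bs_sens = sensitivity(109, 119)   # ~0.916
bs_fppi = fppi(17, 246)           # ~0.069, reported as 0.07
# Original model: 95/119 nodules, 23 false marks / 246 images.
orig_sens = sensitivity(95, 119)  # ~0.798
orig_fppi = fppi(23, 246)         # ~0.093, reported as 0.09
```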
Affiliation(s)
- Hwiyoung Kim
  Department of Radiology and Research Institute of Radiological Science and Center for Clinical Image Data Science, Severance Hospital, Yonsei University College of Medicine, Seoul, Korea
- Kye Ho Lee
  Department of Radiology and Research Institute of Radiological Science and Center for Clinical Image Data Science, Severance Hospital, Yonsei University College of Medicine, Seoul, Korea
  Department of Radiology, Dankook University Hospital, Cheonan, Chungnam Province, Republic of Korea
- Kyunghwa Han
  Department of Radiology and Research Institute of Radiological Science and Center for Clinical Image Data Science, Severance Hospital, Yonsei University College of Medicine, Seoul, Korea
- Ji Won Lee
  Department of Radiology, Pusan National University Hospital, Pusan National University School of Medicine, Busan, Korea
  Medical Research Institute, Busan, Korea
- Jin Young Kim
  Department of Radiology, Dongsan Medical Center, Keimyung University College of Medicine, Daegu, Korea
- Dong Jin Im
  Department of Radiology and Research Institute of Radiological Science and Center for Clinical Image Data Science, Severance Hospital, Yonsei University College of Medicine, Seoul, Korea
- Yoo Jin Hong
  Department of Radiology and Research Institute of Radiological Science and Center for Clinical Image Data Science, Severance Hospital, Yonsei University College of Medicine, Seoul, Korea
- Byoung Wook Choi
  Department of Radiology and Research Institute of Radiological Science and Center for Clinical Image Data Science, Severance Hospital, Yonsei University College of Medicine, Seoul, Korea
- Jin Hur
  Department of Radiology and Research Institute of Radiological Science and Center for Clinical Image Data Science, Severance Hospital, Yonsei University College of Medicine, Seoul, Korea
32
Kwak SH, Kim EK, Kim MH, Lee EH, Shin HJ. Incidentally found resectable lung cancer with the usage of artificial intelligence on chest radiographs. PLoS One 2023; 18:e0281690. [PMID: 36897865 PMCID: PMC10004566 DOI: 10.1371/journal.pone.0281690] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2022] [Accepted: 01/29/2023] [Indexed: 03/11/2023] Open
Abstract
PURPOSE Detection of early lung cancer on chest radiographs remains challenging. We aimed to highlight the benefit of using artificial intelligence (AI) with chest radiographs, with regard to its role in the unexpected detection of resectable early lung cancer. MATERIALS AND METHODS Patients with pathologically proven resectable lung cancer from March 2020 to February 2022 were retrospectively analyzed; among them, we included patients whose resectable lung cancer was detected incidentally. Because commercially available AI-based lesion-detection software is integrated into all chest radiographs at our hospital, we reviewed the clinical process of detecting lung cancer using AI on chest radiographs. RESULTS Among the 75 patients with pathologically proven resectable lung cancer, 13 (17.3%) had incidentally discovered lung cancer with a median size of 2.6 cm. Eight patients underwent chest radiography for the evaluation of extrapulmonary diseases, while five underwent radiography in preparation for an operation or procedure involving other body parts. All lesions were detected as nodules by the AI-based software, and the median abnormality score for the nodules was 78%. Eight patients (61.5%) consulted a pulmonologist promptly, on the same day the chest radiograph was taken and before the radiologist's official report was issued. The total and invasive sizes of the part-solid nodules were 2.3-3.3 cm and 0.75-2.2 cm, respectively. CONCLUSION This study demonstrates actual cases of unexpectedly detected resectable early lung cancer using AI-based lesion-detection software. Our results suggest that AI is beneficial for the incidental detection of early lung cancer on chest radiographs.
Affiliation(s)
- Se Hyun Kwak
  Division of Pulmonology, Department of Internal Medicine, Allergy and Critical Care Medicine, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin-si, Gyeonggi-do, Republic of Korea
- Eun-Kyung Kim
  Department of Radiology, Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin-si, Gyeonggi-do, Republic of Korea
- Myung Hyun Kim
  Department of Radiology, Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin-si, Gyeonggi-do, Republic of Korea
- Eun Hye Lee
  Division of Pulmonology, Department of Internal Medicine, Allergy and Critical Care Medicine, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin-si, Gyeonggi-do, Republic of Korea
  Center for Digital Health, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin-si, Gyeonggi-do, Republic of Korea
  * E-mail: (EHL); (HJS)
- Hyun Joo Shin
  Department of Radiology, Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin-si, Gyeonggi-do, Republic of Korea
  Center for Digital Health, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin-si, Gyeonggi-do, Republic of Korea
  * E-mail: (EHL); (HJS)
33
Osarogiagbon RU, Yang PC, Sequist LV. Expanding the Reach and Grasp of Lung Cancer Screening. Am Soc Clin Oncol Educ Book 2023; 43:e389958. [PMID: 37098234 DOI: 10.1200/edbk_389958] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/27/2023]
Abstract
Low-dose computed tomographic (LDCT) lung cancer screening reduces lung cancer-specific and all-cause mortality among high-risk individuals, but implementation has been challenging. Despite health insurance coverage for lung cancer screening in the United States since 2015, fewer than 10% of eligible persons have participated; striking geographic, racial, and socioeconomic disparities are already evident, especially in the populations at greatest risk of lung cancer and therefore most likely to benefit from screening; and adherence to subsequent testing is significantly lower than that reported in clinical trials, potentially reducing the realized benefit. Lung cancer screening is a covered health care benefit in very few countries. Obtaining the full population-level benefit of lung cancer screening will require improved participation of already-eligible persons (the grasp of screening) and improved eligibility criteria that more closely match the full spectrum of persons at risk (the reach of screening), irrespective of smoking history. We used the socioecological framework of health care to systematically review implementation barriers to lung cancer screening and discuss multilevel solutions. We also discuss guideline-concordant management of incidentally detected lung nodules as a complementary approach to early lung cancer detection that can extend the reach and strengthen the grasp of screening. Furthermore, we discuss ongoing efforts in Asia to explore the possibility of LDCT screening in populations in whom lung cancer risk is relatively independent of smoking. Finally, we summarize innovative technological solutions, including biomarker selection and artificial intelligence strategies, to improve the safety, effectiveness, and cost-effectiveness of lung cancer screening in diverse populations.
Affiliation(s)
- Raymond U Osarogiagbon
  Thoracic Oncology Research Group, Multidisciplinary Thoracic Oncology Program, Baptist Cancer Center, Memphis, TN
- Pan-Chyr Yang
  Department of Internal Medicine, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei, Taiwan
  Institute of Biomedical Sciences, Academia Sinica, Taipei, Taiwan
  Genomics Research Center, Academia Sinica, Taipei, Taiwan
- Lecia V Sequist
  Massachusetts General Hospital and Harvard Medical School, Boston, MA
34
Choe J, Lee SM, Hwang HJ, Lee SM, Yun J, Kim N, Seo JB. Artificial Intelligence in Lung Imaging. Semin Respir Crit Care Med 2022; 43:946-960. [PMID: 36174647 DOI: 10.1055/s-0042-1755571] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
Abstract
Recently, interest and advances in artificial intelligence (AI) including deep learning for medical images have surged. As imaging plays a major role in the assessment of pulmonary diseases, various AI algorithms have been developed for chest imaging. Some of these have been approved by governments and are now commercially available in the marketplace. In the field of chest radiology, there are various tasks and purposes that are suitable for AI: initial evaluation/triage of certain diseases, detection and diagnosis, quantitative assessment of disease severity and monitoring, and prediction for decision support. While AI is a powerful technology that can be applied to medical imaging and is expected to improve our current clinical practice, some obstacles must be addressed for the successful implementation of AI in workflows. Understanding and becoming familiar with the current status and potential clinical applications of AI in chest imaging, as well as remaining challenges, would be essential for radiologists and clinicians in the era of AI. This review introduces the potential clinical applications of AI in chest imaging and also discusses the challenges for the implementation of AI in daily clinical practice and future directions in chest imaging.
Affiliation(s)
- Jooae Choe
  Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Korea
- Sang Min Lee
  Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Korea
- Hye Jeon Hwang
  Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Korea
- Sang Min Lee
  Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Korea
- Jihye Yun
  Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Korea
- Namkug Kim
  Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Korea
  Department of Convergence Medicine, Biomedical Engineering Research Center, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Korea
- Joon Beom Seo
  Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Korea
35
Artificial Intelligence (AI) for Lung Nodules, From the AJR Special Series on AI Applications. AJR Am J Roentgenol 2022; 219:703-712. [PMID: 35544377 DOI: 10.2214/ajr.22.27487] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
Abstract
Interest in artificial intelligence (AI) applications for lung nodules continues to grow among radiologists, particularly with the expanding eligibility criteria and clinical utilization of lung cancer screening CT. AI has been heavily investigated for detecting and characterizing lung nodules and for guiding prognostic assessment. AI tools have also been used for image postprocessing (e.g., rib suppression on radiography or vessel suppression on CT) and for noninterpretive aspects of reporting and workflow, including management of nodule follow-up. Despite growing interest in and rapid development of AI tools and FDA approval of AI tools for pulmonary nodule evaluation, integration into clinical practice has been limited. Challenges to clinical adoption have included concerns about generalizability, regulatory issues, technical hurdles in implementation, and human skepticism. Further validation of AI tools for clinical use and demonstration of benefit in terms of patient-oriented outcomes also are needed. This article provides an overview of potential applications of AI tools in the imaging evaluation of lung nodules and discusses the challenges faced by practices interested in clinical implementation of such tools.
36
Yoo H, Kim EY, Kim H, Choi YR, Kim MY, Hwang SH, Kim YJ, Cho YJ, Jin KN. Artificial Intelligence-Based Identification of Normal Chest Radiographs: A Simulation Study in a Multicenter Health Screening Cohort. Korean J Radiol 2022; 23:1009-1018. [PMID: 36175002 PMCID: PMC9523233 DOI: 10.3348/kjr.2022.0189] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2022] [Revised: 08/11/2022] [Accepted: 08/12/2022] [Indexed: 01/17/2023] Open
Abstract
Objective This study aimed to investigate the feasibility of using artificial intelligence (AI) to identify normal chest radiographs (CXRs) and remove them from the worklist of radiologists in a health-screening environment. Materials and Methods This retrospective simulation study was conducted using the CXRs of 5887 adults (mean age ± standard deviation, 55.4 ± 11.8 years; male, 4329) from three health screening centers in South Korea using a commercial AI (Lunit INSIGHT CXR3, version 3.5.8.8). Three board-certified thoracic radiologists reviewed the CXR images for referable thoracic abnormalities and grouped the images into those with visible referable abnormalities (identified as abnormal by at least one reader) and those with clearly visible referable abnormalities (identified as abnormal by at least two readers). With AI-based simulated exclusion of normal CXR images, the percentages of normal images sorted out and of abnormal images erroneously removed were analyzed. Additionally, in a random subsample of 480 patients, the ability to identify visible referable abnormalities was compared among AI-unassisted reading (all images read by human readers without AI), AI-assisted reading (all images read by human readers with AI assistance as a concurrent reader), and reading with AI triage (human reading of only those images rendered abnormal by AI). Results Of the 5887 CXR images, 405 (6.9%) and 227 (3.9%) contained visible and clearly visible abnormalities, respectively. With AI-based triage, 42.9% (2354/5482) of normal CXR images were removed, at the cost of erroneously removing 3.5% (14/405) and 1.8% (4/227) of CXR images with visible and clearly visible abnormalities, respectively. In the diagnostic performance study, AI triage removed 41.6% (188/452) of normal images from the worklist without missing visible abnormalities and increased the specificity for some readers without decreasing sensitivity. Conclusion This study suggests the feasibility of sorting out and removing normal CXRs using AI with a tailored cut-off, increasing efficiency and reducing the workload of radiologists.
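The triage step this study simulates amounts to thresholding the AI abnormality score and dropping everything below the cut-off from the worklist, then measuring workload saved against abnormal exams erroneously removed. A minimal sketch with hypothetical scores and cut-off (the vendor's actual scoring is not specified here):

```python
# Hypothetical sketch of AI-based worklist triage: exams scoring below a
# tailored cut-off are treated as normal and removed from human reading.
from typing import List, Tuple

def triage(exams: List[Tuple[float, bool]], cutoff: float):
    """exams: (ai_score, truly_abnormal). Returns (kept, removed)."""
    kept = [e for e in exams if e[0] >= cutoff]
    removed = [e for e in exams if e[0] < cutoff]
    return kept, removed

# (score, truly_abnormal) pairs, made up for illustration.
exams = [(0.05, False), (0.10, False), (0.92, True),
         (0.30, False), (0.85, True), (0.12, False)]
kept, removed = triage(exams, cutoff=0.15)

# Workload saved: fraction of truly normal exams sorted out of the worklist.
normals = [e for e in exams if not e[1]]
saved = sum(1 for e in removed if not e[1]) / len(normals)
# Safety cost: abnormal exams erroneously removed at this cut-off.
missed = [e for e in removed if e[1]]
```

Raising the cut-off saves more workload but increases the risk that `missed` becomes non-empty, which is exactly the trade-off the study quantifies (42.9% of normals removed vs 3.5% of visibly abnormal images lost).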
Affiliation(s)
- Hyunsuk Yoo
  Lunit Inc, Seoul, Korea
  Department of Radiology, Seoul National University College of Medicine, Seoul National University Hospital, Seoul, Korea
- Eun Young Kim
  Department of Radiology, Gil Medical Center, Gachon University College of Medicine, Incheon, Korea
- Hyungjin Kim
  Department of Radiology, Seoul National University College of Medicine, Seoul National University Hospital, Seoul, Korea
- Ye Ra Choi
  Department of Radiology, Seoul National University-Seoul Metropolitan Government Boramae Medical Center, Seoul, Korea
- Moon Young Kim
  Department of Radiology, Seoul National University-Seoul Metropolitan Government Boramae Medical Center, Seoul, Korea
- Sung Ho Hwang
  Department of Radiology, Korea University Anam Hospital, Seoul, Korea
- Young Joong Kim
  Department of Radiology, Konyang University Hospital, Konyang University College of Medicine, Daejeon, Korea
- Young Jun Cho
  Department of Radiology, Konyang University Hospital, Konyang University College of Medicine, Daejeon, Korea
- Kwang Nam Jin
  Department of Radiology, Seoul National University-Seoul Metropolitan Government Boramae Medical Center, Seoul, Korea
37
Liu P, Lu L, Chen Y, Huo T, Xue M, Wang H, Fang Y, Xie Y, Xie M, Ye Z. Artificial intelligence to detect the femoral intertrochanteric fracture: The arrival of the intelligent-medicine era. Front Bioeng Biotechnol 2022; 10:927926. [PMID: 36147533 PMCID: PMC9486191 DOI: 10.3389/fbioe.2022.927926] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/25/2022] [Accepted: 08/04/2022] [Indexed: 12/09/2022] Open
Abstract
Objective: To explore a new artificial intelligence (AI)-aided method to assist in the clinical diagnosis of femoral intertrochanteric fracture (FIF), and to compare its performance with the human level to confirm the effectiveness and feasibility of the AI algorithm. Methods: 700 X-rays of FIF were collected and labeled by two senior orthopedic physicians to build the database: 643 for the training set and 57 for the test set. A Faster R-CNN algorithm was trained to detect FIF on X-rays. The performance of the AI algorithm (accuracy, sensitivity, missed diagnosis rate, specificity, misdiagnosis rate, and time consumption) was calculated and compared with that of orthopedic attending physicians. Results: Compared with orthopedic attending physicians, the Faster R-CNN algorithm performed better in accuracy (0.88 vs. 0.84 ± 0.04), specificity (0.87 vs. 0.71 ± 0.08), misdiagnosis rate (0.13 vs. 0.29 ± 0.08), and time consumption (5 min vs. 18.20 ± 1.92 min). As for the sensitivity and missed diagnosis rate, there was no statistical difference between the AI and the orthopedic attending physicians (0.89 vs. 0.87 ± 0.03 and 0.11 vs. 0.13 ± 0.03). Conclusion: The AI diagnostic algorithm is an available and effective method for the clinical diagnosis of FIF and could serve as a satisfactory clinical assistant for orthopedic physicians.
Affiliation(s)
- Pengran Liu
  Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Lin Lu
  Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Yufei Chen
  Department of Orthopedics, The Second Affiliated Hospital of Xiangya School of Medicine, Central South University, Changsha, China
- Tongtong Huo
  Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Mingdi Xue
  Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Honglin Wang
  Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Ying Fang
  Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Yi Xie
  Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Mao Xie
  Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Zhewei Ye
  Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
38
Performance of a Chest Radiography AI Algorithm for Detection of Missed or Mislabeled Findings: A Multicenter Study. Diagnostics (Basel) 2022; 12:diagnostics12092086. [PMID: 36140488 PMCID: PMC9497851 DOI: 10.3390/diagnostics12092086] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2022] [Revised: 08/23/2022] [Accepted: 08/25/2022] [Indexed: 12/02/2022] Open
Abstract
Purpose: We assessed whether an artificial intelligence (AI) algorithm was able to detect missed or mislabeled chest radiograph (CXR) findings in radiology reports. Methods: We queried a multi-institutional radiology report search database of 13 million reports to identify all CXR reports with addendums from 1999–2021. Of the 3469 CXR reports with an addendum, a thoracic radiologist excluded reports whose addenda were created for typographic errors, wrong report template, missing sections, or uninterpreted signoffs. The remaining reports (279 patients) contained addenda with errors related to side discrepancies or missed findings such as pulmonary nodules, consolidation, pleural effusions, pneumothorax, and rib fractures. All CXRs were processed with the AI algorithm. Descriptive statistics were performed to determine the sensitivity, specificity, and accuracy of the AI in detecting missed or mislabeled findings. Results: The AI had high sensitivity (96%), specificity (100%), and accuracy (96%) for detecting all missed and mislabeled CXR findings. The corresponding finding-specific statistics (sensitivity, specificity, accuracy) were nodules (96%, 100%, 96%), pneumothorax (84%, 100%, 85%), pleural effusion (100%, 17%, 67%), consolidation (98%, 100%, 98%), and rib fractures (87%, 100%, 94%). Conclusions: The CXR AI could accurately detect mislabeled and missed findings. Clinical Relevance: The CXR AI can reduce the frequency of errors in the detection and side-labeling of radiographic findings.
39
Lee SY, Ha S, Jeon MG, Li H, Choi H, Kim HP, Choi YR, I H, Jeong YJ, Park YH, Ahn H, Hong SH, Koo HJ, Lee CW, Kim MJ, Kim YJ, Kim KW, Choi JM. Localization-adjusted diagnostic performance and assistance effect of a computer-aided detection system for pneumothorax and consolidation. NPJ Digit Med 2022; 5:107. [PMID: 35908091 PMCID: PMC9339006 DOI: 10.1038/s41746-022-00658-x] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2022] [Accepted: 07/11/2022] [Indexed: 11/24/2022] Open
Abstract
While many deep-learning-based computer-aided detection (CAD) systems have been developed and commercialized for abnormality detection on chest radiographs (CXRs), their ability to localize a target abnormality is rarely reported. Localization accuracy matters for model interpretability, which is crucial in clinical settings; moreover, diagnostic performance is likely to vary depending on the threshold that defines an accurate localization. In a multi-center, stand-alone clinical trial using temporal and external validation datasets of 1050 CXRs, we evaluated the localization accuracy, localization-adjusted discrimination, and calibration of a commercially available deep-learning-based CAD system for detecting consolidation and pneumothorax. For consolidation, the CAD achieved an image-level AUROC (95% CI) of 0.960 (0.945, 0.975), sensitivity of 0.933 (0.899, 0.959), specificity of 0.948 (0.930, 0.963), Dice coefficient of 0.691 (0.664, 0.718), and moderate calibration; for pneumothorax, an image-level AUROC of 0.978 (0.965, 0.991), sensitivity of 0.956 (0.923, 0.978), specificity of 0.996 (0.989, 0.999), Dice coefficient of 0.798 (0.770, 0.826), and moderate calibration. Diagnostic performance varied substantially when localization accuracy was accounted for but remained high at the minimum threshold of clinical relevance. In a separate trial of diagnostic impact using 461 CXRs, the causal effect of CAD assistance on clinicians' diagnostic performance was estimated. After adjusting for age, sex, dataset, and abnormality type, the CAD improved clinicians' diagnostic performance on average (OR [95% CI] = 1.73 [1.30, 2.32]; p < 0.001), although the effect varied substantially with clinical background. The CAD system thus has high stand-alone diagnostic performance and may beneficially impact clinicians' diagnostic performance when used in clinical settings.
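The localization metric this abstract reports, the Dice coefficient, is defined for a predicted mask A and a ground-truth mask B as 2|A∩B| / (|A| + |B|). A toy sketch on 1-D binary masks, illustrative only and not the CAD's implementation:

```python
# Dice coefficient between two equal-length binary masks (toy 1-D example;
# segmentation masks in practice are 2-D, but the formula is identical).

def dice(pred, truth) -> float:
    """2 * |intersection| / (|pred| + |truth|); 1.0 if both masks are empty."""
    assert len(pred) == len(truth)
    inter = sum(p & t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2 * inter / total if total else 1.0

pred  = [0, 1, 1, 1, 0, 0]   # predicted abnormality region
truth = [0, 0, 1, 1, 1, 0]   # annotated ground-truth region
score = dice(pred, truth)    # 2*2 / (3+3) = 2/3
```

A Dice of 0.691 (the consolidation figure above) therefore means roughly two-thirds overlap between predicted and annotated regions, averaged over the test set.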
Affiliation(s)
- Sun Yeop Lee
  Department of Medical Artificial Intelligence, Deepnoid, Inc., Seoul, Republic of Korea
- Sangwoo Ha
  Department of Medical Artificial Intelligence, Deepnoid, Inc., Seoul, Republic of Korea
- Min Gyeong Jeon
  Department of Medical Artificial Intelligence, Deepnoid, Inc., Seoul, Republic of Korea
- Hao Li
  Department of Medical Artificial Intelligence, Deepnoid, Inc., Seoul, Republic of Korea
- Hyunju Choi
  Department of Medical Artificial Intelligence, Deepnoid, Inc., Seoul, Republic of Korea
- Hwa Pyung Kim
  Department of Medical Artificial Intelligence, Deepnoid, Inc., Seoul, Republic of Korea
- Ye Ra Choi
  Department of Radiology, Seoul Metropolitan Government-Seoul National University Boramae Medical Center, Seoul, Republic of Korea
  Department of Radiology, Seoul National University College of Medicine, Seoul, Republic of Korea
- Hoseok I
  Department of Thoracic and Cardiovascular Surgery, Pusan National University School of Medicine, Busan, Republic of Korea
  Convergence Medical Institute of Technology, Biomedical Research Institute, Pusan National University Hospital, Busan, Republic of Korea
- Yeon Joo Jeong
  Department of Radiology and Biomedical Research Institute, Pusan National University Hospital, Busan, Republic of Korea
- Yoon Ha Park
  Department of Internal Medicine, Jawol Health Center, Incheon, Republic of Korea
- Hyemin Ahn
  Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Sang Hyup Hong
  Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Hyun Jung Koo
  Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Choong Wook Lee
  Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Min Jae Kim
  Department of Infectious Disease, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Yeon Joo Kim
  Department of Respiratory Allergy Medicine, Nowon Eulji Medical Center, Seoul, Republic of Korea
- Kyung Won Kim
  Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Jong Mun Choi
  Department of Medical Artificial Intelligence, Deepnoid, Inc., Seoul, Republic of Korea
Collapse
|
40
|
Gandomkar Z, Khong PL, Punch A, Lewis S. Using Occlusion-Based Saliency Maps to Explain an Artificial Intelligence Tool in Lung Cancer Screening: Agreement Between Radiologists, Labels, and Visual Prompts. J Digit Imaging 2022; 35:1164-1175. [PMID: 35484439 PMCID: PMC9582174 DOI: 10.1007/s10278-022-00631-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2021] [Revised: 03/03/2022] [Accepted: 04/04/2022] [Indexed: 11/29/2022] Open
Abstract
Occlusion-based saliency maps (OBSMs) are one approach to interpreting the decision-making process of an artificial intelligence (AI) system. This study explores the agreement among text responses from a cohort of radiologists describing diagnostically relevant areas on low-dose CT (LDCT) images. It also explores whether radiologists' descriptions of cases misclassified by the AI provide a rationale for ruling out the AI's output. OBSMs indicating the importance of different pixels to the final decision made by an AI tool were generated for 10 benign cases (3 misclassified by the AI as malignant) and 10 malignant cases (2 misclassified by the AI as benign). Thirty-six radiologists were asked to use radiological vocabulary typical of reporting LDCT scans to describe the mapped regions of interest (ROIs). The radiologists' annotations were then grouped using a clustering-based technique. Topics were extracted from the annotations, and for each ROI the percentage of annotations containing each topic was found. Radiologists annotated 17 and 24 unique ROIs on benign and malignant cases, respectively. Agreement on the main label (e.g., "vessel," "nodule") among radiologists was seen in only 12% of all areas (5/41 ROIs). Topic analyses identified six descriptors commonly associated with a lower malignancy likelihood. Eight common topics related to a higher malignancy likelihood were also determined. Occlusion-based saliency maps were used to explain an AI decision-making process to radiologists, who in turn provided insight into the level of agreement between the AI's decision and the radiological lexicon.
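The occlusion idea behind OBSMs can be sketched as follows: slide a masking patch over the image and record how much the model's output score drops for each masked region. This is a generic sketch; the patch size, stride, zero-fill value, and the toy score function are assumptions, not details from the study:

```python
import numpy as np

def occlusion_saliency(image, score_fn, patch=8, stride=8):
    """Slide an occluding patch over the image; saliency for a region is the
    drop in the model's output score when that region is masked out."""
    base = score_fn(image)
    h, w = image.shape
    saliency = np.zeros_like(image, dtype=float)
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = 0.0  # zero-fill the patch
            saliency[y:y + patch, x:x + patch] = base - score_fn(occluded)
    return saliency

# Toy "model": the score is the mean intensity of the top-left quadrant, so
# occluding that quadrant should yield the largest saliency values.
score_fn = lambda img: float(img[:8, :8].mean())
sal = occlusion_saliency(np.ones((16, 16)), score_fn)
print(sal[:8, :8].max() > sal[8:, 8:].max())  # True
```

Regions with the largest score drop are the ones highlighted for radiologists to describe.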
Affiliation(s)
- Ziba Gandomkar, Discipline of Medical Imaging Science, Faculty of Medicine and Health, University of Sydney, Sydney, NSW, Australia
- Pek Lan Khong, Clinical Imaging Research Center (CIRC), Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Amanda Punch, Discipline of Medical Imaging Science, Faculty of Medicine and Health, University of Sydney, Sydney, NSW, Australia
- Sarah Lewis, Discipline of Medical Imaging Science, Faculty of Medicine and Health, University of Sydney, Sydney, NSW, Australia
|
41
|
Nielsen AH, Fredberg U. Earlier diagnosis of lung cancer. Cancer Treat Res Commun 2022; 31:100561. [PMID: 35489228 DOI: 10.1016/j.ctarc.2022.100561] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2022] [Revised: 04/01/2022] [Accepted: 04/06/2022] [Indexed: 06/14/2023]
Abstract
The purpose of this article is to review options for more rapid diagnosis of lung cancer at an earlier stage, thereby improving survival. These options include screening, allowing general practitioners to refer patients directly to low-dose computed tomography scan instead of a chest X-ray and the abolition of the "visitation filter", i.e. hospital doctors' ability to reject referrals from general practitioners without prior discussion with the referring doctor.
|
42
|
Ye M, Tong L, Zheng X, Wang H, Zhou H, Zhu X, Zhou C, Zhao P, Wang Y, Wang Q, Bai L, Cai Z, Kong FMS, Wang Y, Li Y, Feng M, Ye X, Yang D, Liu Z, Zhang Q, Wang Z, Han S, Sun L, Zhao N, Yu Z, Zhang J, Zhang X, Katz RL, Sun J, Bai C. A Classifier for Improving Early Lung Cancer Diagnosis Incorporating Artificial Intelligence and Liquid Biopsy. Front Oncol 2022; 12:853801. [PMID: 35311112 PMCID: PMC8924612 DOI: 10.3389/fonc.2022.853801] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2022] [Accepted: 02/07/2022] [Indexed: 12/19/2022] Open
Abstract
Lung cancer is the leading cause of cancer-related deaths worldwide and in China. Screening for lung cancer by low dose computed tomography (LDCT) can reduce mortality but has resulted in a dramatic rise in the incidence of indeterminate pulmonary nodules, which presents a major diagnostic challenge for clinicians regarding their underlying pathology and can lead to overdiagnosis. To address the significant gap in evaluating pulmonary nodules, we conducted a prospective study to develop a prediction model for individuals at intermediate to high risk of developing lung cancer. Univariate and multivariate logistic analyses were applied to the training cohort (n = 560) to develop an early lung cancer prediction model. The results indicated that a model integrating clinical characteristics (age and smoking history), radiological characteristics of pulmonary nodules (nodule diameter, nodule count, upper lobe location, malignant sign at the nodule edge, subsolid status), artificial intelligence analysis of LDCT data, and liquid biopsy achieved the best diagnostic performance in the training cohort (sensitivity 89.53%, specificity 81.31%, area under the curve [AUC] = 0.880). In the independent validation cohort (n = 168), this model had an AUC of 0.895, which was greater than that of the Mayo Clinic Model (AUC = 0.772) and Veterans' Affairs Model (AUC = 0.740). These results were significantly better for predicting the presence of cancer than radiological features and artificial intelligence risk scores alone. Applying this classifier prospectively may lead to improved early lung cancer diagnosis and early treatment for patients with malignant nodules while sparing patients with benign entities from unnecessary and potentially harmful surgery. Clinical Trial Registration Number ChiCTR1900026233, URL: http://www.chictr.org.cn/showproj.aspx?proj=43370.
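The model-building step described above, a multivariate logistic regression over clinical, radiological, AI, and liquid-biopsy features scored by AUC, can be sketched generically. All data, feature meanings, and coefficients below are synthetic stand-ins, not the study's cohort:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 560  # same size as the training cohort; content entirely synthetic
# Columns standing in for e.g. age, nodule diameter, AI risk score, biopsy marker
X = rng.normal(size=(n, 4))
true_logit = 1.5 * X[:, 0] + 1.0 * X[:, 3]  # malignancy driven by two features
y = (true_logit + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Fit the multivariate logistic model and score discrimination by AUC
model = LogisticRegression().fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(round(auc, 3))
```

In the study, candidate predictors were first screened with univariate analyses before entering the multivariate model, and the final AUC was confirmed on an independent validation cohort rather than in-sample as above.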
Affiliation(s)
- Maosong Ye, Department of Pulmonary and Critical Care Medicine, Zhongshan Hospital, Fudan University, Shanghai, China
- Lin Tong, Department of Pulmonary and Critical Care Medicine, Zhongshan Hospital, Fudan University, Shanghai, China; Shanghai Respiratory Research Institute, Shanghai, China
- Xiaoxuan Zheng, Department of Respiratory Endoscopy, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China; Department of Respiratory and Critical Care Medicine, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
- Hui Wang, Xinxiang Medical University, Xinxiang, China; Department of Respiratory and Critical Care Medicine, Henan Provincial People's Hospital, People's Hospital of Zhengzhou University, Zhengzhou, China
- Haining Zhou, Department of Thoracic Surgery, Respiratory Center of Suining Central Hospital, Suining, China
- Xiaoli Zhu, Department of Pulmonary and Critical Care Medicine, Zhongda Hospital, Southeast University, Nanjing, China
- Chengzhi Zhou, State Key Laboratory of Respiratory Disease, National Clinical Research Center of Respiratory Disease, Guangzhou Institute of Respiratory Health, First Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
- Peige Zhao, Department of Respiratory and Critical Care Medicine, Affiliated Hospital of Qingdao University, Qingdao, China
- Yan Wang, Department of Respiratory and Critical Care Medicine, Liaocheng People's Hospital, Liaocheng, China
- Qi Wang, Department of Respiratory Medicine, The Second Affiliated Hospital of Dalian Medical University, Dalian, China
- Li Bai, Department of Respiratory Disease, Xinqiao Hospital, Army Medical University, Chongqing, China
- Zhigang Cai, The First Department of Pulmonary and Critical Care Medicine, The Second Hospital of Hebei Medical University, Shijiazhuang, China
- Feng-Ming Spring Kong, Clinical Oncology Center, The University of Hong Kong-Shenzhen Hospital, Shenzhen, China
- Yuehong Wang, Department of Respiratory Medicine, The First Affiliated Hospital, College of Medicine, Zhejiang University, Hangzhou, China
- Yafei Li, Department of Epidemiology, College of Preventive Medicine, Army Medical University, Chongqing, China
- Mingxiang Feng, Division of Thoracic Surgery, Zhongshan Hospital, Fudan University, Shanghai, China
- Xin Ye, Joint Research Center of Liquid Biopsy in Guangdong, Hong Kong, and Macao, Zhuhai, China; Zhuhai Sanmed Biotech Ltd., Zhuhai, China
- Dawei Yang, Department of Pulmonary and Critical Care Medicine, Zhongshan Hospital, Fudan University, Shanghai, China
- Zilong Liu, Department of Pulmonary and Critical Care Medicine, Zhongshan Hospital, Fudan University, Shanghai, China
- Quncheng Zhang, Department of Respiratory and Critical Care Medicine, Henan Provincial People's Hospital, People's Hospital of Zhengzhou University, Zhengzhou, China
- Ziqi Wang, Department of Respiratory and Critical Care Medicine, Henan Provincial People's Hospital, People's Hospital of Zhengzhou University, Zhengzhou, China
- Shuhua Han, Department of Pulmonary and Critical Care Medicine, Zhongda Hospital, Southeast University, Nanjing, China
- Lihong Sun, Department of Respiratory and Critical Care Medicine, Liaocheng People's Hospital, Liaocheng, China
- Ningning Zhao, Department of Respiratory and Critical Care Medicine, Liaocheng People's Hospital, Liaocheng, China
- Zubin Yu, Department of Thoracic Surgery, Xinqiao Hospital, Army Medical University, Chongqing, China
- Juncheng Zhang, Joint Research Center of Liquid Biopsy in Guangdong, Hong Kong, and Macao, Zhuhai, China; Zhuhai Sanmed Biotech Ltd., Zhuhai, China
- Xiaoju Zhang, Department of Respiratory and Critical Care Medicine, Henan Provincial People's Hospital, People's Hospital of Zhengzhou University, Zhengzhou, China
- Ruth L Katz, Chaim Sheba Hospital, Tel Aviv University, Ramat Gan, Israel
- Jiayuan Sun, Department of Respiratory Endoscopy, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China; Department of Respiratory and Critical Care Medicine, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
- Chunxue Bai, Department of Pulmonary and Critical Care Medicine, Zhongshan Hospital, Fudan University, Shanghai, China
|
43
|
Deep learning-based algorithm for lung cancer detection on chest radiographs using the segmentation method. Sci Rep 2022; 12:727. [PMID: 35031654 PMCID: PMC8760245 DOI: 10.1038/s41598-021-04667-w] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2021] [Accepted: 12/29/2021] [Indexed: 12/24/2022] Open
Abstract
We developed and validated a deep learning (DL)-based model using the segmentation method and assessed its ability to detect lung cancer on chest radiographs. Chest radiographs for use as a training dataset and a test dataset were collected separately from January 2006 to June 2018 at our hospital. The training dataset was used to train and validate the DL-based model with five-fold cross-validation. The model's sensitivity and mean false positive indications per image (mFPI) were assessed with the independent test dataset. The training dataset included 629 radiographs with 652 nodules/masses and the test dataset included 151 radiographs with 159 nodules/masses. The DL-based model had a sensitivity of 0.73 with 0.13 mFPI in the test dataset. Sensitivity was lower for lung cancers that overlapped blind spots such as the pulmonary apices, pulmonary hila, chest wall, heart, and sub-diaphragmatic space (0.50–0.64) than for those in non-overlapping locations (0.87). The Dice coefficient for the 159 malignant lesions averaged 0.52. The DL-based model was able to detect lung cancers on chest radiographs with a low mFPI.
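Sensitivity and mean false positives per image (mFPI), the two operating metrics above, can be computed from per-image matches. A simplified sketch in which detections are matched by identifier rather than by spatial overlap (an assumption for brevity; real pipelines match boxes or masks by overlap):

```python
def sensitivity_and_mfpi(preds, truths):
    """preds/truths: one set of lesion identifiers per image. A prediction is a
    true positive if it appears in that image's truth set, else a false positive.
    Sensitivity = TP / all true lesions; mFPI = FP / number of images."""
    tp = fp = n_true = 0
    for p, t in zip(preds, truths):
        tp += len(p & t)      # detections matching a true lesion
        fp += len(p - t)      # detections with no matching lesion
        n_true += len(t)
    return tp / n_true, fp / len(preds)

# 3 images, 4 true lesions total; the model finds 3 of them plus 1 false alarm
preds  = [{"a"}, {"b", "x"}, {"c"}]
truths = [{"a"}, {"b"},      {"c", "d"}]
sens, mfpi = sensitivity_and_mfpi(preds, truths)
print(sens, round(mfpi, 2))  # 0.75 0.33
```

Note that sensitivity is normalized by lesions while mFPI is normalized by images, which is why the two can move independently.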
|
44
|
Artificial Intelligence to Diagnose Tibial Plateau Fractures: An Intelligent Assistant for Orthopedic Physicians. Curr Med Sci 2022; 41:1158-1164. [PMID: 34971441 PMCID: PMC8718992 DOI: 10.1007/s11596-021-2501-4] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2021] [Accepted: 11/18/2021] [Indexed: 01/03/2023]
Abstract
Objective To explore a new artificial intelligence (AI)-aided method to assist in the clinical diagnosis of tibial plateau fractures (TPFs) and to assess its validity and feasibility. Methods A total of 542 X-rays of TPFs were collected as a reference database. An AI algorithm (RetinaNet) was trained to analyze the X-rays and detect TPFs. The algorithm's ability was assessed using metrics such as detection accuracy and analysis time, and its performance was compared with that of orthopedic physicians. Results The AI algorithm achieved a detection accuracy of 0.91 for the identification of TPFs, similar to the performance of orthopedic physicians (0.92±0.03). The average analysis time of the AI was 0.56 s, 16 times faster than human performance (8.44±3.26 s). Conclusion The AI algorithm is a valid and efficient method for the clinical diagnosis of TPFs. It can be a useful assistant for orthopedic physicians, streamlining the clinical workflow and helping to safeguard patient health and safety.
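The two reported comparisons, detection accuracy and per-image analysis time, can be sketched generically; the stand-in "model" below is a toy threshold rule, not RetinaNet:

```python
import time

def detection_accuracy(preds, truths):
    """Fraction of images where the predicted label matches the ground truth."""
    return sum(p == t for p, t in zip(preds, truths)) / len(truths)

def mean_seconds_per_image(model_fn, images):
    """Average wall-clock inference time per image."""
    t0 = time.perf_counter()
    for img in images:
        model_fn(img)
    return (time.perf_counter() - t0) / len(images)

model = lambda x: x > 0.5  # toy stand-in: "fracture" if the image score exceeds 0.5
images = [0.9, 0.2, 0.7, 0.1]
truths = [True, False, True, True]
print(detection_accuracy([model(i) for i in images], truths))  # 0.75
```

The study's human baseline was measured the same way: accuracy and time per radiograph, averaged across readers.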
|
45
|
Current and emerging artificial intelligence applications in chest imaging: a pediatric perspective. Pediatr Radiol 2022; 52:2120-2130. [PMID: 34471961 PMCID: PMC8409695 DOI: 10.1007/s00247-021-05146-0] [Citation(s) in RCA: 22] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/31/2021] [Revised: 05/22/2021] [Accepted: 06/28/2021] [Indexed: 12/19/2022]
Abstract
Artificial intelligence (AI) applications for chest radiography and chest CT are among the most developed applications in radiology. More than 40 certified AI products are available for chest radiography or chest CT. These AI products cover a wide range of abnormalities, including pneumonia, pneumothorax and lung cancer. Most applications are aimed at detecting disease, complemented by products that characterize or quantify tissue. At present, none of the thoracic AI products is specifically designed for the pediatric population. However, some products developed to detect tuberculosis in adults are also applicable to children. Software is under development to detect early changes of cystic fibrosis on chest CT, which could be an interesting application for pediatric radiology. In this review, we give an overview of current AI products in thoracic radiology and cover recent literature about AI in chest radiography, with a focus on pediatric radiology. We also discuss possible pediatric applications.
|
46
|
Homayounieh F, Digumarthy S, Ebrahimian S, Rueckel J, Hoppe BF, Sabel BO, Conjeti S, Ridder K, Sistermanns M, Wang L, Preuhs A, Ghesu F, Mansoor A, Moghbel M, Botwin A, Singh R, Cartmell S, Patti J, Huemmer C, Fieselmann A, Joerger C, Mirshahzadeh N, Muse V, Kalra M. An Artificial Intelligence-Based Chest X-ray Model on Human Nodule Detection Accuracy From a Multicenter Study. JAMA Netw Open 2021; 4:e2141096. [PMID: 34964851 PMCID: PMC8717119 DOI: 10.1001/jamanetworkopen.2021.41096] [Citation(s) in RCA: 37] [Impact Index Per Article: 12.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/20/2022] Open
Abstract
IMPORTANCE Most early lung cancers present as pulmonary nodules on imaging, but these can be easily missed on chest radiographs. OBJECTIVE To assess whether a novel artificial intelligence (AI) algorithm can help detect pulmonary nodules on radiographs at different levels of detection difficulty. DESIGN, SETTING, AND PARTICIPANTS This diagnostic study included 100 posteroanterior chest radiograph images taken between 2000 and 2010 of adult patients from an ambulatory health care center in Germany and a lung image database in the US. Included images were selected to represent nodules with different levels of detection difficulty (from easy to difficult) and comprised both normal and abnormal controls. EXPOSURES All images were processed with a novel AI algorithm, the AI Rad Companion Chest X-ray. Two thoracic radiologists established the ground truth and 9 test radiologists from Germany and the US independently reviewed all images in 2 sessions (unaided and AI-aided mode) with at least a 1-month washout period. MAIN OUTCOMES AND MEASURES Each test radiologist recorded the presence of 5 findings (pulmonary nodules, atelectasis, consolidation, pneumothorax, and pleural effusion) and their level of confidence for detecting the individual finding on a scale of 1 to 10 (1 representing lowest confidence; 10, highest confidence). The analyzed metrics for nodules included sensitivity, specificity, accuracy, and area under the receiver operating characteristic curve (AUC). RESULTS Images from 100 patients were included, with a mean (SD) age of 55 (20) years and including 64 men and 36 women. Mean detection accuracy across the 9 radiologists improved by 6.4% (95% CI, 2.3% to 10.6%) with AI-aided interpretation compared with unaided interpretation. Partial AUCs within the effective interval range of 0 to 0.2 false positive rate improved by 5.6% (95% CI, -1.4% to 12.0%) with AI-aided interpretation.
Junior radiologists saw greater improvement in sensitivity for nodule detection with AI-aided interpretation as compared with their senior counterparts (12%; 95% CI, 4% to 19% vs 9%; 95% CI, 1% to 17%) while senior radiologists experienced similar improvement in specificity (4%; 95% CI, -2% to 9%) as compared with junior radiologists (4%; 95% CI, -3% to 5%). CONCLUSIONS AND RELEVANCE In this diagnostic study, an AI algorithm was associated with improved detection of pulmonary nodules on chest radiographs compared with unaided interpretation for different levels of detection difficulty and for readers with different experience.
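The partial AUC restricted to a low false-positive range, as reported above, is available in scikit-learn via the `max_fpr` argument of `roc_auc_score`, which returns the McClish-standardized value (mapped onto [0.5, 1]). A toy example with invented reader scores:

```python
from sklearn.metrics import roc_auc_score

# Toy reader scores for 5 control (0) and 5 nodule (1) images (invented numbers)
y_true = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
scores = [0.10, 0.20, 0.30, 0.35, 0.60, 0.40, 0.50, 0.70, 0.80, 0.90]

full_auc = roc_auc_score(y_true, scores)
# Partial AUC restricted to FPR in [0, 0.2], McClish-standardized
partial_auc = roc_auc_score(y_true, scores, max_fpr=0.2)
print(round(full_auc, 2), round(partial_auc, 3))  # 0.92 0.778
```

Restricting to a low false-positive range reflects the operating region that matters clinically, where readers tolerate few false alarms.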
Affiliation(s)
- Fatemeh Homayounieh, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts
- Subba Digumarthy, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts
- Shadi Ebrahimian, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts
- Johannes Rueckel, Department of Radiology, University Hospital, Ludwig Maximilian University of Munich, Munich, Germany
- Boj Friedrich Hoppe, Department of Radiology, University Hospital, Ludwig Maximilian University of Munich, Munich, Germany
- Bastian Oliver Sabel, Department of Radiology, University Hospital, Ludwig Maximilian University of Munich, Munich, Germany
- Karsten Ridder, Medizinisches Versorgungszentrum Professor Uhlenbrock & Partner
- Mateen Moghbel, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts
- Ariel Botwin, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts
- Ramandeep Singh, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts
- Samuel Cartmell, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts
- John Patti, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts
- Victorine Muse, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts
- Mannudeep Kalra, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts
|
47
|
Torres FS, Akbar S, Raman S, Yasufuku K, Schmidt C, Hosny A, Baldauf-Lenschen F, Leighl NB. End-to-End Non-Small-Cell Lung Cancer Prognostication Using Deep Learning Applied to Pretreatment Computed Tomography. JCO Clin Cancer Inform 2021; 5:1141-1150. [PMID: 34797702 DOI: 10.1200/cci.21.00096] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022] Open
Abstract
PURPOSE Clinical TNM staging is a key prognostic factor for patients with lung cancer and is used to inform treatment and monitoring. Computed tomography (CT) plays a central role in defining the stage of disease. Deep learning applied to pretreatment CTs may offer additional, individualized prognostic information to facilitate more precise mortality risk prediction and stratification. METHODS We developed a fully automated imaging-based prognostication technique (IPRO) using deep learning to predict 1-year, 2-year, and 5-year mortality from pretreatment CTs of patients with stage I-IV lung cancer. Using six publicly available data sets from The Cancer Imaging Archive, we performed a retrospective five-fold cross-validation using pretreatment CTs of 1,689 patients, of whom 1,110 were diagnosed with non-small-cell lung cancer and had available TNM staging information. We compared the association of IPRO and TNM staging with patients' survival status and assessed an Ensemble risk score that combines IPRO and TNM staging. Finally, we evaluated IPRO's ability to stratify patients within TNM stages using hazard ratios (HRs) and Kaplan-Meier curves. RESULTS IPRO showed similar prognostic power (concordance index [C-index] 1-year: 0.72, 2-year: 0.70, 5-year: 0.68) compared with that of TNM staging (C-index 1-year: 0.71, 2-year: 0.71, 5-year: 0.70) in predicting 1-year, 2-year, and 5-year mortality. The Ensemble risk score yielded superior performance across all time points (C-index 1-year: 0.77, 2-year: 0.77, 5-year: 0.76). IPRO stratified patients within TNM stages, discriminating between highest- and lowest-risk quintiles in stages I (HR: 8.60), II (HR: 5.03), III (HR: 3.18), and IV (HR: 1.91). CONCLUSION Deep learning applied to pretreatment CT combined with TNM staging enhances prognostication and risk stratification in patients with lung cancer.
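The concordance index (C-index) used above measures how well a risk score ranks survival times while accounting for censoring. A minimal implementation of Harrell's C (ties in time are ignored for brevity, and the patient data are invented):

```python
def concordance_index(times, events, risk_scores):
    """Harrell's C: among comparable pairs (the earlier time is an observed
    event), the fraction where the higher risk score had the shorter survival.
    Tied risk scores count as half-concordant."""
    concordant = tied = comparable = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1
                elif risk_scores[i] == risk_scores[j]:
                    tied += 1
    return (concordant + 0.5 * tied) / comparable

# 4 patients: a perfect model assigns higher risk to shorter survival
times  = [2, 4, 6, 8]          # follow-up in months
events = [1, 1, 1, 0]          # 1 = death observed, 0 = censored
risk   = [0.9, 0.8, 0.4, 0.1]  # model's predicted mortality risk
print(concordance_index(times, events, risk))  # 1.0
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking; the paper's IPRO, TNM, and Ensemble values of roughly 0.68 to 0.77 sit between those extremes.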
Affiliation(s)
- Felipe Soares Torres, Joint Department of Medical Imaging, Toronto General Hospital, Department of Medical Imaging, University of Toronto, Toronto, ON, Canada
- Srinivas Raman, Princess Margaret Cancer Centre, Department of Radiation Oncology, University of Toronto, Toronto, ON, Canada
- Kazuhiro Yasufuku, Division of Thoracic Surgery, University Health Network and University of Toronto, Toronto, ON, Canada
- Ahmed Hosny, Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA; Department of Radiation Oncology, Dana Farber Cancer Institute and Brigham and Women's Hospital, Boston, MA
- Natasha B Leighl, Department of Medical Oncology and Hematology, Princess Margaret Cancer Centre, University Health Network, Toronto, ON, Canada; Department of Medicine, University of Toronto, Toronto, ON, Canada
|
48
|
Zhang Y, Liu M, Hu S, Shen Y, Lan J, Jiang B, de Bock GH, Vliegenthart R, Chen X, Xie X. Development and multicenter validation of chest X-ray radiography interpretations based on natural language processing. COMMUNICATIONS MEDICINE 2021; 1:43. [PMID: 35602222 PMCID: PMC9053275 DOI: 10.1038/s43856-021-00043-x] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2021] [Accepted: 09/23/2021] [Indexed: 01/01/2023] Open
Abstract
Background: Artificial intelligence can assist in interpreting chest X-ray radiography (CXR) data, but large datasets require efficient image annotation. The purpose of this study is to extract CXR labels from diagnostic reports using natural language processing, train convolutional neural networks (CNNs), and evaluate the classification performance of the CNNs using CXR data from multiple centers. Methods: We collected the CXR images and corresponding radiology reports of 74,082 subjects as the training dataset. The linguistic entities and relationships in the unstructured radiology reports were extracted by a bidirectional encoder representations from transformers (BERT) model, and a knowledge graph was constructed to represent the associations between image labels of abnormal signs and the report text of the CXRs. A 25-label classification system was then built to train and test the CNN models with weakly supervised labeling. Results: In three external test cohorts of 5,996 symptomatic patients, 2,130 screening examinees, and 1,804 community clinic patients, the mean AUC for identifying 25 abnormal signs by CNN reached 0.866 ± 0.110, 0.891 ± 0.147, and 0.796 ± 0.157, respectively. In symptomatic patients, the CNN showed no significant difference from local radiologists in identifying 21 signs (p > 0.05) but was poorer for 4 signs (p < 0.05). In screening examinees, the CNN showed no significant difference for 17 signs (p > 0.05) but was poorer at classifying nodules (p = 0.013). In community clinic patients, the CNN showed no significant difference for 12 signs (p > 0.05) and performed better for 6 signs (p < 0.001). Conclusion: We constructed and validated an effective CXR interpretation system based on natural language processing.
Plain-language summary: Chest X-rays are accompanied by a report from the radiologist, which contains valuable diagnostic information in text format. Extracting and interpreting information from these reports, such as keywords, is time-consuming, but artificial intelligence (AI) can help. Here, we use a type of AI known as natural language processing to extract information about abnormal signs seen on chest X-rays from the corresponding reports. We develop and test natural language processing models using data from multiple hospitals and clinics, and show that our models achieve performance similar to interpretation by the radiologists themselves. Our findings suggest that AI might help radiologists speed up the interpretation of chest X-ray reports, which could be useful not only in patient triage and diagnosis but also in cataloguing and searching radiology datasets.
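The report-to-label step can be illustrated with a far simpler stand-in than the study's BERT-plus-knowledge-graph pipeline: keyword rules with crude sentence-level negation handling. The label names, phrases, and example report below are illustrative assumptions only:

```python
import re

# Rule base mapping abnormal-sign labels to trigger phrases (illustrative only)
RULES = {
    "nodule": re.compile(r"\bnodul", re.I),
    "pneumothorax": re.compile(r"\bpneumothorax\b", re.I),
    "effusion": re.compile(r"\beffusion\b", re.I),
}
NEGATION = re.compile(r"\bno\b|\bwithout\b|\bnegative for\b", re.I)

def weak_labels(report: str) -> dict:
    """One weak binary label per abnormal sign, skipping negated sentences."""
    labels = {name: 0 for name in RULES}
    for sentence in report.split("."):
        if NEGATION.search(sentence):
            continue  # crude negation handling; real systems parse structure
        for name, pattern in RULES.items():
            if pattern.search(sentence):
                labels[name] = 1
    return labels

report = "Small nodular opacity in the right upper lobe. No pleural effusion."
print(weak_labels(report))  # {'nodule': 1, 'pneumothorax': 0, 'effusion': 0}
```

Labels produced this way are "weak" because they inherit the extractor's errors, which is why the CNNs trained on them are described as weakly supervised.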
|
49
|
Tsukioka T, Izumi N, Komatsu H, Inoue H, Matsuda Y, Ito R, Kimura T, Nishiyama N. Surgical Outcomes in Patients With Centrally Located Non-small Cell Lung Cancer. In Vivo 2021; 35:2815-2820. [PMID: 34410973 DOI: 10.21873/invivo.12568] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2021] [Revised: 06/02/2021] [Accepted: 06/03/2021] [Indexed: 12/26/2022]
Abstract
BACKGROUND/AIM Identification of prognostic factors is helpful in selecting optimal treatment for centrally-located non-small cell lung cancer (NSCLC). This study aimed to detect prognostic factors in patients with centrally-located NSCLC. PATIENTS AND METHODS NSCLCs in the hilar area requiring pneumonectomy or sleeve lobectomy for complete removal are defined as centrally-located NSCLCs. We retrospectively investigated the clinical courses of 45 patients with such lesions. RESULTS Sleeve lobectomies were performed on 33 patients and pneumonectomies on 12. Three and five-year survival rates were 72% and 62%, respectively. Presence of comorbidities (p=0.013), severe symptoms (p=0.001), high white cell count (p=0.001), and pathological T3-4 stage (p=0.004) were identified as independent predictors of poor prognosis. Operative procedures did not correlate with outcomes (p=0.722). CONCLUSION Presence of comorbidities, severe symptoms, high white cell counts, and pathological T stage are independent predictors of poor prognosis. These data can contribute in selecting appropriate treatments for such lesions.
Affiliation(s)
- Takuma Tsukioka, Department of Thoracic Surgery, Osaka City University, Osaka, Japan
- Nobuhiro Izumi, Department of Thoracic Surgery, Osaka City University, Osaka, Japan
- Hiroaki Komatsu, Department of Thoracic Surgery, Osaka City University, Osaka, Japan
- Hidetoshi Inoue, Department of Thoracic Surgery, Osaka City University, Osaka, Japan
- Yumi Matsuda, Department of Thoracic Surgery, Osaka City University, Osaka, Japan
- Ryuichi Ito, Department of Thoracic Surgery, Osaka City University, Osaka, Japan
- Takuya Kimura, Department of Thoracic Surgery, Osaka City University, Osaka, Japan
|
50
|
Çallı E, Sogancioglu E, van Ginneken B, van Leeuwen KG, Murphy K. Deep learning for chest X-ray analysis: A survey. Med Image Anal 2021; 72:102125. [PMID: 34171622 DOI: 10.1016/j.media.2021.102125] [Citation(s) in RCA: 98] [Impact Index Per Article: 32.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2021] [Revised: 05/17/2021] [Accepted: 05/27/2021] [Indexed: 12/14/2022]
Abstract
Recent advances in deep learning have led to a promising performance in many medical image analysis tasks. As the most commonly performed radiological exam, chest radiographs are a particularly important modality for which a variety of applications have been researched. The release of multiple, large, publicly available chest X-ray datasets in recent years has encouraged research interest and boosted the number of publications. In this paper, we review all studies using deep learning on chest radiographs published before March 2021, categorizing works by task: image-level prediction (classification and regression), segmentation, localization, image generation and domain adaptation. Detailed descriptions of all publicly available datasets are included and commercial systems in the field are described. A comprehensive discussion of the current state of the art is provided, including caveats on the use of public datasets, the requirements of clinically useful systems and gaps in the current literature.
Affiliation(s)
- Erdi Çallı, Radboud University Medical Center, Institute for Health Sciences, Department of Medical Imaging, Nijmegen, the Netherlands
- Ecem Sogancioglu, Radboud University Medical Center, Institute for Health Sciences, Department of Medical Imaging, Nijmegen, the Netherlands
- Bram van Ginneken, Radboud University Medical Center, Institute for Health Sciences, Department of Medical Imaging, Nijmegen, the Netherlands
- Kicky G van Leeuwen, Radboud University Medical Center, Institute for Health Sciences, Department of Medical Imaging, Nijmegen, the Netherlands
- Keelin Murphy, Radboud University Medical Center, Institute for Health Sciences, Department of Medical Imaging, Nijmegen, the Netherlands
|