1
Singh T, Mishra S, Kalra R, Satakshi, Kumar M, Kim T. COVID-19 severity detection using chest X-ray segmentation and deep learning. Sci Rep 2024; 14:19846. [PMID: 39191941] [PMCID: PMC11349901] [DOI: 10.1038/s41598-024-70801-z]
Abstract
COVID-19 has resulted in a significant global impact on health, the economy, education, and daily life. The disease can range from mild to severe, with individuals over 65 or those with underlying medical conditions being more susceptible to severe illness. Early testing and isolation are vital due to the virus's variable incubation period. Chest radiographs (CXR) have gained importance as a diagnostic tool due to their efficiency and reduced radiation exposure compared to CT scans. However, the sensitivity of CXR in detecting COVID-19 may be lower. This paper introduces a deep learning framework for accurate COVID-19 classification and severity prediction using CXR images. U-Net is used for lung segmentation, achieving a precision of 0.9924. Classification is performed using a convolution-capsule network, with high true positive rates of 86% for COVID-19, 93% for pneumonia, and 85% for normal cases. Severity assessment employs ResNet50, VGG-16, and DenseNet201, with DenseNet201 showing superior accuracy. Empirical results, validated with 95% confidence intervals, confirm the framework's reliability and robustness. This integration of advanced deep learning techniques with radiological imaging enhances early detection and severity assessment, improving patient management and resource allocation in clinical settings.
Affiliation(s)
- Tinku Singh
- School of Information and Communication Engineering, Chungbuk National University, Cheongju, South Korea
- Suryanshi Mishra
- Department of Mathematics & Statistics, SHUATS, Prayagraj, Uttar Pradesh, India
- Riya Kalra
- Indian Institute of Information Technology Allahabad, Prayagraj, Uttar Pradesh, India
- Satakshi
- Department of Mathematics & Statistics, SHUATS, Prayagraj, Uttar Pradesh, India
- Manish Kumar
- Indian Institute of Information Technology Allahabad, Prayagraj, Uttar Pradesh, India
- Taehong Kim
- School of Information and Communication Engineering, Chungbuk National University, Cheongju, South Korea
2
Cysneiros A, Galvão T, Domingues N, Jorge P, Bento L, Martin-Loeches I. ARDS Mortality Prediction Model Using Evolving Clinical Data and Chest Radiograph Analysis. Biomedicines 2024; 12:439. [PMID: 38398041] [PMCID: PMC10886631] [DOI: 10.3390/biomedicines12020439]
Abstract
INTRODUCTION Within primary ARDS, SARS-CoV-2-associated ARDS (C-ARDS) emerged in late 2019, reaching its peak during the subsequent two years. Recent efforts in ARDS research have concentrated on phenotyping this heterogeneous syndrome to enhance comprehension of its pathophysiology. METHODS AND RESULTS A retrospective study was conducted on C-ARDS patients from April 2020 to February 2021, encompassing 110 participants with a mean age of 63.2 ± 11.92 years (range 26-83). Of these, 61.2% (68) were male, 25% (17) experienced severe ARDS, and the mortality rate was 47.3% (52). Ventilation settings, arterial blood gases, and chest X-rays (CXR) were evaluated on the first day of invasive mechanical ventilation and between days two and three. CXR images were scrutinized using a convolutional neural network (CNN). A binary logistic regression model for predicting C-ARDS mortality was developed based on the most influential variables: age, PaO2/FiO2 ratio (P/F) on days one and three, and CNN-extracted CXR features. Initial performance assessment on test data (23 of the 110 patients) revealed an area under the receiver operating characteristic (ROC) curve of 0.862 with a 95% confidence interval of 0.654-0.969. CONCLUSION Integrating data available in all intensive care units enables the prediction of C-ARDS mortality by utilizing evolving P/F ratios and CXR. This approach can assist in tailoring treatment plans and initiating early discussions on escalating care and extracorporeal life support. Machine learning algorithms for imaging classification can uncover otherwise inaccessible patterns, potentially evolving into another form of ARDS phenotyping. The combination of these algorithms' features and clinical variables demonstrates superior performance compared to either element alone.
Affiliation(s)
- Ana Cysneiros
- Nova Medical School, Universidade de Lisboa, 1649-004 Lisbon, Portugal
- Unidade de Urgência Médica, Hospital de São José, Centro Hospitalar Universitário Lisboa Central, 1169-050 Lisbon, Portugal
- Tiago Galvão
- Instituto Politécnico de Lisboa/Instituto Superior de Engenharia de Lisboa, 1959-007 Lisbon, Portugal
- Nuno Domingues
- Instituto Politécnico de Lisboa/Instituto Superior de Engenharia de Lisboa, 1959-007 Lisbon, Portugal
- Pedro Jorge
- Instituto Politécnico de Lisboa/Instituto Superior de Engenharia de Lisboa, 1959-007 Lisbon, Portugal
- Luis Bento
- Nova Medical School, Universidade de Lisboa, 1649-004 Lisbon, Portugal
- Unidade de Urgência Médica, Hospital de São José, Centro Hospitalar Universitário Lisboa Central, 1169-050 Lisbon, Portugal
3
Sobiecki A, Hadjiiski LM, Chan HP, Samala RK, Zhou C, Stojanovska J, Agarwal PP. Detection of Severe Lung Infection on Chest Radiographs of COVID-19 Patients: Robustness of AI Models across Multi-Institutional Data. Diagnostics (Basel) 2024; 14:341. [PMID: 38337857] [PMCID: PMC10855789] [DOI: 10.3390/diagnostics14030341]
Abstract
The diagnosis of severe COVID-19 lung infection is important because it carries a higher risk for the patient and requires prompt treatment with oxygen therapy and hospitalization, while those with less severe lung infection often stay under observation. Severe infections are also more likely to leave long-standing residual changes in the lungs and may need follow-up imaging. We have developed deep learning neural network models for classifying severe vs. non-severe lung infections in COVID-19 patients on chest radiographs (CXR). A deep learning U-Net model was developed to segment the lungs. Inception-v1 and Inception-v4 models were trained for the classification of severe vs. non-severe COVID-19 infection. Four CXR datasets from multi-country and multi-institutional sources were used to develop and evaluate the models. The combined dataset consisted of 5748 cases and 6193 CXR images with physicians' severity ratings as the reference standard. The area under the receiver operating characteristic curve (AUC) was used to evaluate model performance. We studied the reproducibility of classification performance using different combinations of training and validation data sets. We also evaluated the generalizability of the trained deep learning models using both independent internal and external test sets. On the independent test sets, the Inception-v1-based models achieved AUCs ranging from 0.81 ± 0.02 to 0.84 ± 0.0, while the Inception-v4 models achieved AUCs ranging from 0.85 ± 0.06 to 0.89 ± 0.01. These results demonstrate the promise of using deep learning models to differentiate COVID-19 patients with severe from those with non-severe lung infection on chest radiographs.
Affiliation(s)
- André Sobiecki
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
- Lubomir M. Hadjiiski
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
- Heang-Ping Chan
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
- Ravi K. Samala
- Office of Science and Engineering Laboratories, Center for Devices and Radiological Health, U.S. Food and Drug Administration, Silver Spring, MD 20993, USA
- Chuan Zhou
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
- Prachi P. Agarwal
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
4
Shin H, Kim T, Park J, Raj H, Jabbar MS, Abebaw ZD, Lee J, Van CC, Kim H, Shin D. Pulmonary abnormality screening on chest x-rays from different machine specifications: a generalized AI-based image manipulation pipeline. Eur Radiol Exp 2023; 7:68. [PMID: 37940797] [PMCID: PMC10632317] [DOI: 10.1186/s41747-023-00386-1]
Abstract
BACKGROUND Chest x-ray is commonly used for pulmonary abnormality screening. However, since the image characteristics of x-rays depend strongly on the machine specifications, an artificial intelligence (AI) model developed for specific equipment usually fails when clinically applied to various machines. To overcome this problem, we propose an image manipulation pipeline. METHODS A total of 15,010 chest x-rays from systems with different generators/detectors were retrospectively collected from five institutions from May 2020 to February 2021. We developed an AI model to classify pulmonary abnormalities using x-rays from a single system. Then, we externally tested its performance on chest x-rays from various machine specifications. We compared the area under the receiver operating characteristic curve (AUC) of AI models developed using conventional image processing pipelines (histogram equalization [HE], contrast-limited adaptive histogram equalization [CLAHE], and unsharp masking [UM] with common data augmentations) with that of the proposed manipulation pipeline (XM-pipeline). RESULTS The XM-pipeline model showed the highest performance for all the datasets of different machine specifications, such as chest x-rays acquired from a computed radiography system (n = 356, AUC 0.944 for the XM-pipeline versus 0.917 for HE, 0.705 for CLAHE, and 0.544 for UM; p < 0.001 for all) and from a mobile x-ray generator (n = 204, AUC 0.949 for the XM-pipeline versus 0.933 for HE [p = 0.042], 0.932 for CLAHE [p = 0.009], and 0.925 for UM [p = 0.001]). CONCLUSIONS Applying the XM-pipeline to AI training increased the diagnostic performance of the AI model on chest x-rays of different machine configurations. RELEVANCE STATEMENT The proposed training pipeline would successfully promote wide application of the AI model for abnormality screening when chest x-rays are acquired using various x-ray machines.
KEY POINTS • AI models developed using x-rays from a specific machine suffer from poor generalization. • We propose a new image processing pipeline to address the generalization problem. • The AI models were tested using multicenter external x-ray datasets from various machines. • AI with our pipeline achieved higher diagnostic performance than conventional methods.
Affiliation(s)
- Heejun Shin
- Artificial Intelligence Engineering Division, RadiSen Co., Ltd, Seoul, Korea
- Taehee Kim
- Artificial Intelligence Engineering Division, RadiSen Co., Ltd, Seoul, Korea
- Juhyung Park
- Laboratory for Imaging Science and Technology, Department of Electrical and Computer Engineering, Seoul National University, Seoul, Korea
- Hruthvik Raj
- Artificial Intelligence Engineering Division, RadiSen Co., Ltd, Seoul, Korea
- Jongho Lee
- Laboratory for Imaging Science and Technology, Department of Electrical and Computer Engineering, Seoul National University, Seoul, Korea
- Cong Cung Van
- Department of Radiology, National Lung Hospital, Hanoi, Vietnam
- Hyungjin Kim
- Department of Radiology, Seoul National University Hospital, Seoul, Korea
- Dongmyung Shin
- Artificial Intelligence Engineering Division, RadiSen Co., Ltd, Seoul, Korea
5
Dang LM, Nadeem M, Nguyen TN, Park HY, Lee ON, Song HK, Moon H. VPBR: An Automatic and Low-Cost Vision-Based Biophysical Properties Recognition Pipeline for Pumpkin. Plants (Basel) 2023; 12:2647. [PMID: 37514261] [PMCID: PMC10386610] [DOI: 10.3390/plants12142647]
Abstract
Pumpkins are a nutritious fruit enjoyed globally for their rich, earthy flavor. The biophysical properties of pumpkins play an important role in determining their yield. However, manual in-field techniques for monitoring these properties can be time-consuming and labor-intensive. To address this, this research introduces a novel approach that uses high-resolution pumpkin images to train a mathematical model that automates the measurement of each pumpkin's biophysical properties. Color correction was performed on the dataset using a color-checker panel to minimize the impact of varying light conditions on the RGB images. A segmentation model was then trained to recognize two fundamental components of each pumpkin: the fruit and the vine. Measurements of various biophysical properties, including fruit length, fruit width, stem length, stem width, and fruit peel color, were computed and compared with manual measurements. The experimental results on 10 different pumpkin samples revealed that the framework obtained a small mean absolute percentage error (MAPE) of 2.5% compared to the manual method, highlighting the potential of this approach as a faster and more efficient alternative to conventional techniques for monitoring the biophysical properties of pumpkins.
Affiliation(s)
- L. Minh Dang
- Department of Information and Communication Engineering, and Convergence Engineering for Intelligent Drone, Sejong University, Seoul 05006, Republic of Korea
- Muhammad Nadeem
- Department of Computer Science and Engineering, Sejong University, Seoul 05006, Republic of Korea
- Tan N. Nguyen
- Department of Architectural Engineering, Sejong University, Seoul 05006, Republic of Korea
- Han Yong Park
- Department of Bioresource Engineering, Sejong University, Seoul 05006, Republic of Korea
- O New Lee
- Department of Bioresource Engineering, Sejong University, Seoul 05006, Republic of Korea
- Hyoung-Kyu Song
- Department of Information and Communication Engineering, and Convergence Engineering for Intelligent Drone, Sejong University, Seoul 05006, Republic of Korea
- Hyeonjoon Moon
- Department of Computer Science and Engineering, Sejong University, Seoul 05006, Republic of Korea
6
Yoon MS, Kwon G, Oh J, Ryu J, Lim J, Kang BK, Lee J, Han DK. Effect of Contrast Level and Image Format on a Deep Learning Algorithm for the Detection of Pneumothorax with Chest Radiography. J Digit Imaging 2023; 36:1237-1247. [PMID: 36698035] [PMCID: PMC10287877] [DOI: 10.1007/s10278-022-00772-y]
Abstract
Owing to the black-box nature of deep learning models, it is uncertain how changes in contrast level and image format affect performance. We aimed to investigate the effect of contrast level and image format on the effectiveness of deep learning for diagnosing pneumothorax on chest radiographs. We collected 3316 images (1016 pneumothorax and 2300 normal images); all images were set to the standard contrast level (100%) and stored in the Digital Imaging and Communications in Medicine (DICOM) and Joint Photographic Experts Group (JPEG) formats. Data were randomly separated into 80% training and 20% test sets, and the contrast of images in the test set was changed to 5 levels (50%, 75%, 100%, 125%, and 150%). We trained the model to detect pneumothorax using ResNet-50 with 100%-level images and tested it with the 5-level images in the two formats. When comparing overall performance between contrast levels in the two formats, the area under the receiver operating characteristic curve (AUC) differed significantly (all p < 0.001) except between 125% and 150% in JPEG format (p = 0.382). When comparing the two formats at the same contrast levels, AUC differed significantly (all p < 0.001) except at 50% and 100% (p = 0.079 and p = 0.082, respectively). The contrast level and format of medical images can influence the performance of deep learning models. Training with various contrast levels and image formats, together with further image processing, is required to improve and maintain performance.
Affiliation(s)
- Myeong Seong Yoon
- Department of Emergency Medicine, College of Medicine, Hanyang University, 222 Wangsimni-Ro, Seongdong-Gu, Seoul, 04763, Republic of Korea
- Machine Learning Research Center for Medical Data, Hanyang University, 222 Wangsimni-Ro, Seongdong-Gu, Seoul, 04763, Republic of Korea
- Department of Radiological Science, Eulji University, 553 Sanseong-daero, Seongnam-si, Gyeonggi Do, 13135, Republic of Korea
- Gitaek Kwon
- Department of Computer Science, Hanyang University, 222 Wangsimni-Ro, Seongdong-Gu, Seoul, 04763, Republic of Korea
- VUNO, Inc, 479 Gangnam-daero, Seocho-gu, Seoul, 06541, Republic of Korea
- Jaehoon Oh
- Department of Emergency Medicine, College of Medicine, Hanyang University, 222 Wangsimni-Ro, Seongdong-Gu, Seoul, 04763, Republic of Korea
- Machine Learning Research Center for Medical Data, Hanyang University, 222 Wangsimni-Ro, Seongdong-Gu, Seoul, 04763, Republic of Korea
- Jongbin Ryu
- Department of Software and Computer Engineering, Ajou University, 206 World cup-ro, Suwon-si, Gyeonggi Do, 16499, Republic of Korea
- Jongwoo Lim
- Department of Computer Science, Hanyang University, 222 Wangsimni-Ro, Seongdong-Gu, Seoul, 04763, Republic of Korea
- Machine Learning Research Center for Medical Data, Hanyang University, 222 Wangsimni-Ro, Seongdong-Gu, Seoul, 04763, Republic of Korea
- Bo-Kyeong Kang
- Machine Learning Research Center for Medical Data, Hanyang University, 222 Wangsimni-Ro, Seongdong-Gu, Seoul, 04763, Republic of Korea
- Department of Radiology, College of Medicine, Hanyang University, 222 Wangsimni-Ro, Seongdong-Gu, Seoul, 04763, Republic of Korea
- Juncheol Lee
- Department of Emergency Medicine, College of Medicine, Hanyang University, 222 Wangsimni-Ro, Seongdong-Gu, Seoul, 04763, Republic of Korea
- Dong-Kyoon Han
- Department of Radiological Science, Eulji University, 553 Sanseong-daero, Seongnam-si, Gyeonggi Do, 13135, Republic of Korea
7
Safdar MF, Nowak RM, Pałka P. A Denoising and Fourier Transformation-Based Spectrograms in ECG Classification Using Convolutional Neural Network. Sensors (Basel) 2022; 22:9576. [PMID: 36559944] [PMCID: PMC9780813] [DOI: 10.3390/s22249576]
Abstract
Non-invasive electrocardiogram (ECG) signals are useful for assessing heart condition and helpful in diagnosing cardiac diseases. However, traditional interpretation through medical consultation requires effort, knowledge, and time owing to the large amount and complexity of the data. Neural networks have recently been shown to be efficient in interpreting biomedical signals, including ECG and EEG. The novelty of the proposed work is the use of spectrograms instead of raw signals. Spectrograms can easily be reduced by eliminating frequencies that carry no ECG information, and spectrogram calculation through the short-time Fourier transform (STFT) is time-efficient, allowing the reduced data to be presented in a well-distinguishable form to a convolutional neural network (CNN). Data reduction was performed through frequency filtration with a specific cutoff value. These steps keep the CNN architecture simple while achieving high accuracy, and the proposed approach reduces memory usage and computational power by avoiding complex CNN models. The large, publicly available PTB-XL dataset was utilized, and two datasets were prepared for binary classification: spectrograms and raw signals. The proposed approach achieved the highest accuracy of 99.06%, which suggests that spectrograms are better suited than raw signals for ECG classification. Furthermore, up- and down-sampling of the signals was performed at various sampling rates and the corresponding accuracies were measured.
Affiliation(s)
- Muhammad Farhan Safdar
- Institute of Computer Science, Faculty of Electronics and Information Technology, Warsaw University of Technology, 00-665 Warsaw, Poland
8
Moon S, Lee JH, Choi H, Lee SY, Lee J. Deep learning approaches to predict 10-2 visual field from wide-field swept-source optical coherence tomography en face images in glaucoma. Sci Rep 2022; 12:21041. [PMID: 36471039] [PMCID: PMC9722778] [DOI: 10.1038/s41598-022-25660-x]
Abstract
Close monitoring of central visual field (VF) defects with 10-2 VF helps prevent blindness in glaucoma. We aimed to develop a deep learning model to predict 10-2 VF from wide-field swept-source optical coherence tomography (SS-OCT) images. Macular ganglion cell/inner plexiform layer thickness maps with either wide-field en face images (en face model) or retinal nerve fiber layer thickness maps (RNFLT model) were extracted, combined, and preprocessed. Inception-ResNet-V2 was trained to predict 10-2 VF from the combined images. Estimation performance was evaluated using the mean absolute error (MAE) between actual and predicted threshold values, and the two models were compared with different input data. The training dataset comprised paired 10-2 VF and SS-OCT images of 3,025 eyes of 1,612 participants, and the test dataset comprised 337 eyes of 186 participants. Global prediction errors (point-wise MAE) were 3.10 and 3.17 dB for the en face and RNFLT models, respectively. The en face model performed better than the RNFLT model in the superonasal and inferonasal sectors (P = 0.011 and P = 0.030). Prediction errors were smaller in the inferior versus superior hemifields for both models. The deep learning model effectively predicted 10-2 VF from wide-field SS-OCT images and might help clinicians efficiently individualize the frequency of 10-2 VF in clinical practice.
Affiliation(s)
- Sangwoo Moon
- Department of Ophthalmology, Pusan National University College of Medicine, Busan 49241, Korea
- Biomedical Research Institute, Pusan National University Hospital, Busan 49241, Korea
- Jae Hyeok Lee
- Department of Medical AI, Deepnoid Inc, Seoul 08376, Korea
- Hyunju Choi
- Department of Medical AI, Deepnoid Inc, Seoul 08376, Korea
- Sun Yeop Lee
- Department of Medical AI, Deepnoid Inc, Seoul 08376, Korea
- Jiwoong Lee
- Department of Ophthalmology, Pusan National University College of Medicine, Busan 49241, Korea
- Biomedical Research Institute, Pusan National University Hospital, Busan 49241, Korea
9
Kang M, An TJ, Han D, Seo W, Cho K, Kim S, Myong JP, Han SW. Development of a multipotent diagnostic tool for chest X-rays by multi-object detection method. Sci Rep 2022; 12:19130. [PMID: 36352008] [PMCID: PMC9646869] [DOI: 10.1038/s41598-022-21841-w]
Abstract
Computer-aided diagnosis (CAD) for chest X-rays was developed more than 50 years ago, yet there are still unmet needs for its versatile use in medical fields. We planned this study to develop a multipotent CAD model suitable for general use, including in primary care, by using computed tomography (CT) scans with a one-to-one matched chest X-ray dataset. The data were extracted and preprocessed by pulmonology experts, who used bounding boxes to locate lesions of interest. For detecting multiple lesions, multi-object detection with Faster R-CNN and with RetinaNet was adopted and compared. A total of twelve diagnostic labels were defined: pleural effusion, atelectasis, pulmonary nodule, cardiomegaly, consolidation, emphysema, pneumothorax, chemo-port, bronchial wall thickening, reticular opacity, pleural thickening, and bronchiectasis. The Faster R-CNN model showed higher overall sensitivity than RetinaNet, whereas the specificity values showed the opposite trend. Some labels, such as cardiomegaly and chemo-port, showed excellent sensitivity (100.0% for both). Other labels, such as bronchial wall thickening, reticular opacity, and pleural thickening, yielded unique findings that could be described within the chest area. To our knowledge, this is the first study to develop an object detection model for chest X-rays based on the chest area defined by one-to-one matched CT scans, preprocessed and conducted by a group of experts in pulmonology. Our model is a potential tool for examining the whole chest area for multiple diagnoses from a simple X-ray of the kind routinely taken in most clinics and hospitals on a daily basis.
Affiliation(s)
- Minji Kang
- School of Industrial and Management Engineering, Korea University, Anam-ro 145, Seongbuk-gu, Seoul, 02841, Korea
- Tai Joon An
- Division of Pulmonary and Critical Care Medicine, Department of Internal Medicine, Yeouido St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul, Korea
- Wan Seo
- Division of Pulmonary and Critical Care Medicine, Department of Internal Medicine, Yeouido St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul, Korea
- Kangwon Cho
- Division of Pulmonary, Allergy, and Critical Care Medicine, Department of Internal Medicine, Changwon Fatima Hospital, Changwon, Korea
- Shinbum Kim
- Division of Pulmonary, Allergy, and Critical Care Medicine, Department of Internal Medicine, Andong Sungso Hospital, Andong, Korea
- Jun-Pyo Myong
- Department of Occupational and Environmental Medicine, Seoul St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Banpodae-ro 222, Seocho-gu, Seoul, 06591, Korea
- Sung Won Han
- School of Industrial and Management Engineering, Korea University, Anam-ro 145, Seongbuk-gu, Seoul, 02841, Korea
10
Chiu HY, Peng RHT, Lin YC, Wang TW, Yang YX, Chen YY, Wu MH, Shiao TH, Chao HS, Chen YM, Wu YT. Artificial Intelligence for Early Detection of Chest Nodules in X-ray Images. Biomedicines 2022; 10:2839. [PMID: 36359360] [PMCID: PMC9687210] [DOI: 10.3390/biomedicines10112839]
Abstract
Early detection increases overall survival among patients with lung cancer. This study formulated a machine learning method that processes chest X-rays (CXRs) to detect lung cancer early. After preprocessing our dataset with monochrome and brightness correction, we applied different preprocessing methods to enhance image contrast and then used U-Net to perform lung segmentation. We used 559 CXRs, each with a single lung nodule labeled by experts, to train a You Only Look Once version 4 (YOLOv4) deep-learning architecture to detect lung nodules. In a testing dataset of 100 CXRs from patients at Taipei Veterans General Hospital and 154 CXRs from the Japanese Society of Radiological Technology dataset, the AI model using a combination of different preprocessing methods performed best, with a sensitivity of 79% and 3.04 false positives per image. We then tested the AI on 383 sets of CXRs obtained within the 5 years prior to lung cancer diagnosis. The median time from detection to diagnosis was 46 (3-523) days for radiologists assisted by AI, longer than the 8 (0-263) days for radiologists alone. The AI model can assist radiologists in the early detection of lung nodules.
Affiliation(s)
- Hwa-Yen Chiu
- Department of Chest Medicine, Taipei Veterans General Hospital, Taipei 112, Taiwan
- Institute of Biophotonics, National Yang Ming Chiao Tung University, Taipei 112, Taiwan
- Division of Internal Medicine, Hsinchu Branch, Taipei Veterans General Hospital, Hsinchu 310, Taiwan
- School of Medicine, National Yang Ming Chiao Tung University, Taipei 112, Taiwan
- Rita Huan-Ting Peng
- Institute of Biophotonics, National Yang Ming Chiao Tung University, Taipei 112, Taiwan
- Yi-Chian Lin
- Institute of Biophotonics, National Yang Ming Chiao Tung University, Taipei 112, Taiwan
- Ting-Wei Wang
- Institute of Biophotonics, National Yang Ming Chiao Tung University, Taipei 112, Taiwan
- Ya-Xuan Yang
- Institute of Biophotonics, National Yang Ming Chiao Tung University, Taipei 112, Taiwan
- Ying-Ying Chen
- Department of Chest Medicine, Taipei Veterans General Hospital, Taipei 112, Taiwan
- Department of Critical Care Medicine, Taiwan Adventist Hospital, Taipei 105, Taiwan
- Mei-Han Wu
- School of Medicine, National Yang Ming Chiao Tung University, Taipei 112, Taiwan
- Department of Medical Imaging, Cheng Hsin General Hospital, Taipei 112, Taiwan
- Department of Radiology, Taipei Veterans General Hospital, Taipei 112, Taiwan
- Tsu-Hui Shiao
- Department of Chest Medicine, Taipei Veterans General Hospital, Taipei 112, Taiwan
- School of Medicine, National Yang Ming Chiao Tung University, Taipei 112, Taiwan
- Heng-Sheng Chao
- Department of Chest Medicine, Taipei Veterans General Hospital, Taipei 112, Taiwan
- Institute of Biomedical Informatics, National Yang Ming Chiao Tung University, Taipei 112, Taiwan
- Yuh-Min Chen
- Department of Chest Medicine, Taipei Veterans General Hospital, Taipei 112, Taiwan
- School of Medicine, National Yang Ming Chiao Tung University, Taipei 112, Taiwan
- Yu-Te Wu
- Institute of Biophotonics, National Yang Ming Chiao Tung University, Taipei 112, Taiwan
- Brain Research Center, National Yang Ming Chiao Tung University, Taipei 112, Taiwan
11
An efficient lung disease classification from X-ray images using hybrid Mask-RCNN and BiDLSTM. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.104340]
12
A Novel Lightweight Approach to COVID-19 Diagnostics Based on Chest X-ray Images. J Clin Med 2022; 11:jcm11195501. [PMID: 36233368] [PMCID: PMC9571927] [DOI: 10.3390/jcm11195501]
Abstract
Background: This paper presents a novel lightweight approach based on machine learning methods supporting COVID-19 diagnostics based on X-ray images. The presented schema offers effective and quick diagnosis of COVID-19. Methods: Real data (X-ray images) from hospital patients were used in this study. All labels, namely those that were COVID-19 positive and negative, were confirmed by a PCR test. Feature extraction was performed using a convolutional neural network, and the subsequent classification of samples used Random Forest, XGBoost, LightGBM and CatBoost. Results: The LightGBM model was the most effective in classifying patients on the basis of features extracted from X-ray images, with an accuracy of 1.00, a precision of 1.00, a recall of 1.00 and an F1-score of 1.00. Conclusion: The proposed schema can potentially be used as a support for radiologists to improve the diagnostic process. The presented approach is efficient and fast. Moreover, it is not excessively complex computationally.