1.
Hresko DJ, Drotar P, Ngo QC, Kumar DK. Enhanced Domain Adaptation for Foot Ulcer Segmentation Through Mixing Self-Trained Weak Labels. Journal of Imaging Informatics in Medicine 2025; 38:455-466. [PMID: 39020158] [PMCID: PMC11810871] [DOI: 10.1007/s10278-024-01193-9]
Abstract
Wound management requires the measurement of wound parameters such as shape and area. However, computerized wound analysis is hampered by inexact segmentation of wound images due to limited or inaccurate labels. A common scenario is that the source domain provides an abundance of labeled data while the target domain provides only limited labels. To overcome this, we propose a novel approach that combines self-training and mixup augmentation. The neural network is trained on the source domain and generates weak labels on the target domain via self-training. In the second stage, the generated labels are mixed with labels from the source domain to retrain the neural network and enhance generalization across diverse datasets. The efficacy of our approach was evaluated using the DFUC 2022, FUSeg, and RMIT datasets, demonstrating substantial improvements in segmentation accuracy and robustness across different data distributions. Specifically, in single-domain experiments, segmentation on the DFUC 2022 dataset achieved a Dice score of 0.711, while the FUSeg dataset achieved 0.859. For domain adaptation, when these datasets were used as target datasets, the Dice scores were 0.714 for DFUC 2022 and 0.561 for FUSeg.
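The mixing step described in this abstract can be sketched generically. The function below is an illustrative stand-in, not the authors' code: the function name, the flat-list image representation, and the Beta-distributed mixing coefficient (standard in mixup) are all assumptions.

```python
import random

def mixup(image_a, label_a, image_b, label_b, alpha=0.4):
    """Blend a labeled source pair with a weakly labeled target pair,
    mixup-style. Images and labels are flat lists of floats in [0, 1];
    the mixing weight lam is drawn from a Beta(alpha, alpha) distribution,
    as is conventional for mixup augmentation."""
    lam = random.betavariate(alpha, alpha)
    mixed_image = [lam * a + (1 - lam) * b for a, b in zip(image_a, image_b)]
    mixed_label = [lam * a + (1 - lam) * b for a, b in zip(label_a, label_b)]
    return mixed_image, mixed_label, lam
```

In a two-stage pipeline like the one described, `label_b` would be a weak label produced by self-training on the target domain rather than a ground-truth mask.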
Affiliation(s)
- David Jozef Hresko
- IISLab, Technical University of Kosice, Letna 1/9, Kosice, 04200, Kosicky Kraj, Slovakia
- Peter Drotar
- IISLab, Technical University of Kosice, Letna 1/9, Kosice, 04200, Kosicky Kraj, Slovakia
- Quoc Cuong Ngo
- School of Engineering, RMIT University, 80/445 Swanston St, Melbourne, 3000, VIC, Australia
- Dinesh Kant Kumar
- School of Engineering, RMIT University, 80/445 Swanston St, Melbourne, 3000, VIC, Australia
2.
Prathaban K, Hande MP. Transforming Healthcare: Artificial Intelligence (AI) Applications in Medical Imaging and Drug Response Prediction. Genome Integr 2025; 15:e20240002. [PMID: 39845982] [PMCID: PMC11752870] [DOI: 10.14293/genint.15.1.002]
Abstract
Artificial intelligence (AI) offers a broad range of enhancements in medicine. Machine learning and deep learning techniques have shown significant potential in improving diagnosis and treatment outcomes, from assisting clinicians in diagnosing medical images to ascertaining effective drugs for a specific disease. Despite the prospective benefits, adopting AI in clinical settings requires careful consideration, particularly concerning data generalisation and model explainability. This commentary aims to discuss two potential use cases for AI in the field of medicine and the overarching challenges involved in their implementation.
Affiliation(s)
- Karthik Prathaban
- Department of Physiology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- M. Prakash Hande
- Department of Physiology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
3.
Alaoui Abdalaoui Slimani F, Bentourkia M. Improving deep learning U-Net++ by discrete wavelet and attention gate mechanisms for effective pathological lung segmentation in chest X-ray imaging. Phys Eng Sci Med 2024. [PMID: 39495449] [DOI: 10.1007/s13246-024-01489-8]
Abstract
Since its introduction in 2015, the U-Net architecture has played a crucial role in deep learning for medical imaging. Recognized for its ability to accurately discriminate small structures, the U-Net has received more than 2600 citations in the academic literature, which has motivated continuous enhancements to its architecture. In hospitals, chest radiography is the primary diagnostic method for pulmonary disorders; however, accurate lung segmentation in chest X-ray images remains a challenging task, primarily due to the significant variations in lung shapes and the presence of intense opacities caused by various diseases. This article introduces a new approach for the segmentation of lung X-ray images. Traditional max-pooling operations, commonly employed in conventional U-Net++ models, were replaced with the discrete wavelet transform (DWT), offering a more accurate down-sampling technique that potentially captures detailed features of lung structures. Additionally, we used attention gate (AG) mechanisms that enable the model to focus on specific regions of the input image, improving the accuracy of the segmentation process. When compared with current techniques such as Atrous Convolutions, Improved FCN, Improved SegNet, U-Net, and U-Net++, our method (U-Net++-DWT) showed remarkable efficacy, particularly on the Japanese Society of Radiological Technology dataset, achieving an accuracy of 99.1%, specificity of 98.9%, sensitivity of 97.8%, Dice coefficient of 97.2%, and Jaccard index of 96.3%. Its performance on the Montgomery County dataset further demonstrated its consistent effectiveness. Moreover, when applied to additional datasets of Chest X-ray Masks and Labels and COVID-19, our method maintained high performance, achieving up to 99.3% accuracy, underscoring its adaptability and potential for broad applications in medical imaging diagnostics.
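The substitution of max-pooling by a wavelet down-sampler can be illustrated with the Haar low-low (LL) approximation subband. This is a generic sketch under my own naming, not the authors' implementation: each non-overlapping 2x2 block is averaged (the Haar LL coefficients, up to a constant scale factor), halving both spatial dimensions, whereas max-pooling would keep only the local maximum.

```python
def haar_ll(image):
    """One level of a 2-D Haar DWT, keeping only the LL (approximation)
    subband. Each non-overlapping 2x2 block of the nested-list image is
    averaged, so both spatial dimensions are halved -- a smoothing
    alternative to 2x2 max-pooling."""
    h, w = len(image), len(image[0])
    return [
        [
            (image[r][c] + image[r][c + 1]
             + image[r + 1][c] + image[r + 1][c + 1]) / 4.0
            for c in range(0, w - 1, 2)
        ]
        for r in range(0, h - 1, 2)
    ]
```

In a U-Net++-style encoder, the detail subbands (LH, HL, HH) could also be retained and concatenated, but the LL band alone already serves as the down-sampling path.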
Affiliation(s)
- M'hamed Bentourkia
- Department of Nuclear Medicine and Radiobiology, 12th Avenue North, 3001, Sherbrooke, QC, J1H5N4, Canada
4.
Xu Y, Quan R, Xu W, Huang Y, Chen X, Liu F. Advances in Medical Image Segmentation: A Comprehensive Review of Traditional, Deep Learning and Hybrid Approaches. Bioengineering (Basel) 2024; 11:1034. [PMID: 39451409] [PMCID: PMC11505408] [DOI: 10.3390/bioengineering11101034]
Abstract
Medical image segmentation plays a critical role in accurate diagnosis and treatment planning, enabling precise analysis across a wide range of clinical tasks. This review begins by offering a comprehensive overview of traditional segmentation techniques, including thresholding, edge-based methods, region-based approaches, clustering, and graph-based segmentation. While these methods are computationally efficient and interpretable, they often face significant challenges when applied to complex, noisy, or variable medical images. The central focus of this review is the transformative impact of deep learning on medical image segmentation. We delve into prominent deep learning architectures such as Convolutional Neural Networks (CNNs), Fully Convolutional Networks (FCNs), U-Net, Recurrent Neural Networks (RNNs), Generative Adversarial Networks (GANs), and Autoencoders (AEs). Each architecture is analyzed in terms of its structural foundation and its specific application to medical image segmentation, illustrating how these models have enhanced segmentation accuracy across various clinical contexts. Finally, the review examines the integration of deep learning with traditional segmentation methods, addressing the limitations of both approaches. These hybrid strategies offer improved segmentation performance, particularly in challenging scenarios involving weak edges, noise, or inconsistent intensities. By synthesizing recent advancements, this review provides a detailed resource for researchers and practitioners, offering valuable insights into the current landscape and future directions of medical image segmentation.
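As a concrete instance of the traditional thresholding methods this review opens with, here is a plain-Python sketch of Otsu's method (the function name and histogram representation are mine): it picks the intensity threshold that maximizes the between-class variance of the foreground/background split.

```python
def otsu_threshold(pixels, levels=256):
    """Otsu's method on a flat list of integer intensities in
    [0, levels): return the threshold t that maximizes the
    between-class variance of the split {<= t} vs {> t}."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    sum_bg = 0.0
    weight_bg = 0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        weight_bg += hist[t]
        if weight_bg == 0:
            continue  # no background pixels yet
        weight_fg = total - weight_bg
        if weight_fg == 0:
            break  # everything is background; no valid split remains
        sum_bg += t * hist[t]
        mean_bg = sum_bg / weight_bg
        mean_fg = (sum_all - sum_bg) / weight_fg
        var_between = weight_bg * weight_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

On a strongly bimodal image this lands between the two modes, which is exactly the regime where the review notes such methods work well and complex, noisy images are where they break down.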
Affiliation(s)
- Yan Xu
- School of Electrical, Electronic and Mechanical Engineering, University of Bristol, Bristol BS8 1QU, UK
- Rixiang Quan
- School of Electrical, Electronic and Mechanical Engineering, University of Bristol, Bristol BS8 1QU, UK
- Weiting Xu
- School of Electrical, Electronic and Mechanical Engineering, University of Bristol, Bristol BS8 1QU, UK
- Yi Huang
- Bristol Medical School, University of Bristol, Bristol BS8 1UD, UK
- Xiaolong Chen
- Department of Mechanical, Materials and Manufacturing Engineering, University of Nottingham, Nottingham NG7 2RD, UK
- Fengyuan Liu
- School of Electrical, Electronic and Mechanical Engineering, University of Bristol, Bristol BS8 1QU, UK
5.
Rai S, Bhatt JS, Patra SK. An AI-Based Low-Risk Lung Health Image Visualization Framework Using LR-ULDCT. Journal of Imaging Informatics in Medicine 2024; 37:2047-2062. [PMID: 38491236] [PMCID: PMC11522248] [DOI: 10.1007/s10278-024-01062-5]
Abstract
In this article, we propose an AI-based low-risk visualization framework for lung health monitoring using low-resolution ultra-low-dose CT (LR-ULDCT). We present a novel deep cascade processing workflow to achieve diagnostic visualization on LR-ULDCT (<0.3 mSv) on par with 100 mSv high-resolution CT (HRCT). To this end, we build a low-risk and affordable deep cascade network comprising three sequential deep processes: restoration, super-resolution (SR), and segmentation. Given a degraded LR-ULDCT, the first network learns the restoration function, without supervision, from augmented patch-based dictionaries and residuals. The restored version is then super-resolved to the target (sensor) resolution. Here, we combine perceptual and adversarial losses in a novel GAN to establish the closeness between the probability distributions of the generated SR-ULDCT and the restored LR-ULDCT. The SR-ULDCT is then presented to the segmentation network, which first separates the chest portion from the SR-ULDCT, followed by lobe-wise colorization. Finally, we extract five lobes to account for the presence of ground glass opacity (GGO) in the lung. Hence, our AI-based system provides low-risk visualization of the input degraded LR-ULDCT at various stages, i.e., restored LR-ULDCT, restored SR-ULDCT, and segmented SR-ULDCT, and achieves the diagnostic power of HRCT. We perform case studies on real datasets of COVID-19, pneumonia, and pulmonary edema/congestion, comparing our results with the state-of-the-art. Ablation experiments are conducted to better visualize the different operating pipelines. Finally, we present a verification report by fourteen (14) experienced radiologists and pulmonologists.
Affiliation(s)
- Swati Rai
- Indian Institute of Information Technology Vadodara, Vadodara, India
- Jignesh S Bhatt
- Indian Institute of Information Technology Vadodara, Vadodara, India
6.
Zhang C, Zhu S, Yuan Y, Dai S. Comparison Between Endobronchial-Guided Transbronchial Biopsy and Computed Tomography-Guided Transthoracic Lung Biopsy for the Diagnosis of Central Pulmonary Lesions. Clin Respir J 2024; 18:e70015. [PMID: 39314190] [PMCID: PMC11420531] [DOI: 10.1111/crj.70015]
Abstract
BACKGROUND Lung cancer is one of the most common malignant tumors. This study aimed to compare the diagnostic accuracy, complication rates, and predictive values of computed tomography (CT)-guided percutaneous transthoracic needle biopsy (PTNB) and electronic bronchoscopy-guided transbronchial lung biopsy (TBLB) for patients with central pulmonary lesions (CPLs) with a diameter ≥ 3 cm. METHODS We retrospectively included 110 patients with CPLs with a diameter ≥ 3 cm who underwent preoperative PTNB and TBLB examinations and ultimately underwent surgery to remove the CPLs, with pathological results obtained. Detailed information was collected to compare the two groups. Data were processed using SPSS software (Version 26.0; IBM Corp) and compared by t-test or chi-square test; p < 0.05 was considered statistically significant. RESULTS All patients underwent surgical treatment at the department of thoracic surgery and obtained a final pathological diagnosis. The positive predictive value (PPV) was comparable between the two methods, while the negative predictive value (NPV) was significantly higher in the PTNB group than in the TBLB group (p < 0.05). In addition, PTNB was more sensitive and accurate than TBLB (p < 0.05). However, the PTNB group had a higher probability of complications, making TBLB the relatively safer examination method. CONCLUSION PTNB demonstrated higher accuracy and sensitivity than TBLB in the diagnosis of CPLs with a diameter ≥ 3 cm, but its complication rates are relatively high. The two methods exhibit different diagnostic accuracies and should therefore be selected based on the medical circumstances.
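The predictive values compared in this study follow directly from a 2x2 confusion table against the final pathological diagnosis; a minimal sketch (function and key names are mine):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard test-performance metrics from a 2x2 confusion table:
    tp/fp/fn/tn are true-positive, false-positive, false-negative,
    and true-negative counts against the reference diagnosis."""
    return {
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on disease prevalence in the studied cohort, which is why they are reported separately here.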
Affiliation(s)
- Cheng Zhang
- Thoracic Surgery Department, The Affiliated Hospital of Xuzhou Medical University, Xuzhou, Jiangsu, China
- Senlin Zhu
- Thoracic Surgery Department, The Affiliated Hospital of Xuzhou Medical University, Xuzhou, Jiangsu, China
- Yanliang Yuan
- Thoracic Surgery Department, The Affiliated Hospital of Xuzhou Medical University, Xuzhou, Jiangsu, China
- Shenhui Dai
- Cardiothoracic Surgery Department, The First Affiliated Hospital of Anhui University of Science & Technology, Huainan, Anhui, China
7.
Yang Y, Zheng J, Guo P, Wu T, Gao Q, Guo Y, Chen Z, Liu C, Ouyang Z, Chen H, Kang Y. Automatic cardiothoracic ratio calculation based on lung fields abstracted from chest X-ray images without heart segmentation. Front Physiol 2024; 15:1416912. [PMID: 39175612] [PMCID: PMC11338915] [DOI: 10.3389/fphys.2024.1416912]
Abstract
Introduction The cardiothoracic ratio (CTR) based on postero-anterior chest X-ray (P-A CXR) images is one of the most commonly used cardiac measurements and an indicator for initially evaluating cardiac diseases. However, the heart is not readily observable on P-A CXR images compared to the lung fields, so radiologists often manually determine the CTR's right and left heart border points from the lung fields adjacent to the heart. Manual CTR measurement based on P-A CXR images requires experienced radiologists and is time-consuming and laborious. Methods This article therefore proposes a novel, fully automatic CTR calculation method based on lung fields extracted from P-A CXR images using convolutional neural networks (CNNs), overcoming the limitations of, and avoiding the errors introduced by, heart segmentation. First, lung field mask images are extracted from the P-A CXR images with pre-trained CNNs. Second, a novel graphics-based localization method for the heart's right and left border points is proposed, based on the two-dimensional projection morphology of the lung field mask images. Results The mean distance errors along the x-axis of the CTR's four key points in the test sets T1 (21 × 512 × 512 static P-A CXR images) and T2 (13 × 512 × 512 dynamic P-A CXR images), across various pre-trained CNNs, are 4.1161 and 3.2116 pixels, respectively. In addition, the mean CTR errors on the test sets T1 and T2 for the four proposed models are 0.0208 and 0.0180, respectively. Discussion Our proposed model matches the CTR-calculation performance of the previous CardioNet model, dispenses with heart segmentation, and takes less time. It is therefore practical and feasible and may become an effective tool for initially evaluating cardiac diseases.
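A much-simplified sketch of a CTR computed from a lung-field mask alone. The row-wise gap heuristic below is my own illustrative assumption, not the paper's graphics-based localization of the four key border points: the thoracic width is taken as the span between the outermost lung pixels, and the cardiac width as the widest row-wise gap between the two lung fields.

```python
def ctr_from_lung_mask(mask):
    """Crude cardiothoracic ratio from a binary lung-field mask
    (nested lists, 1 = lung pixel). Thoracic width: span between the
    outermost lung pixels over all rows. Cardiac width: the widest
    row-wise run of non-lung pixels between the two lung fields."""
    rows = [r for r in mask if 1 in r]
    first = [r.index(1) for r in rows]                    # leftmost lung pixel per row
    last = [len(r) - 1 - r[::-1].index(1) for r in rows]  # rightmost lung pixel per row
    thoracic = max(last) - min(first) + 1
    cardiac = max(sum(1 for x in r[f:l + 1] if x == 0)
                  for r, f, l in zip(rows, first, last))
    return cardiac / thoracic
```

A real implementation would locate the four key points explicitly (left/right heart and thoracic borders) rather than taking a maximum over rows, but the ratio itself is the same quantity.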
Affiliation(s)
- Yingjian Yang
- Department of Radiological Research and Development, Shenzhen Lanmage Medical Technology Co., Ltd., Shenzhen, Guangdong, China
- Jie Zheng
- Department of Radiological Research and Development, Shenzhen Lanmage Medical Technology Co., Ltd., Shenzhen, Guangdong, China
- Peng Guo
- Department of Radiological Research and Development, Shenzhen Lanmage Medical Technology Co., Ltd., Shenzhen, Guangdong, China
- Tianqi Wu
- Department of Radiological Research and Development, Shenzhen Lanmage Medical Technology Co., Ltd., Shenzhen, Guangdong, China
- Qi Gao
- Neusoft Medical System Co., Ltd., Shenyang, Liaoning, China
- Yingwei Guo
- School of Electrical and Information Engineering, Northeast Petroleum University, Daqing, China
- Ziran Chen
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Chengcheng Liu
- School of Life and Health Management, Shenyang City University, Shenyang, China
- Zhanglei Ouyang
- Department of Radiological Research and Development, Shenzhen Lanmage Medical Technology Co., Ltd., Shenzhen, Guangdong, China
- Huai Chen
- Department of Radiology, The Second Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
- Yan Kang
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- College of Health Science and Environmental Engineering, Shenzhen Technology University, Shenzhen, China
- School of Applied Technology, Shenzhen University, Shenzhen, China
- Engineering Research Centre of Medical Imaging and Intelligent Analysis, Ministry of Education, Shenyang, China
8.
Wang Y, Guo Y, Wang Z, Yu L, Yan Y, Gu Z. Enhancing semantic segmentation in chest X-ray images through image preprocessing: ps-KDE for pixel-wise substitution by kernel density estimation. PLoS One 2024; 19:e0299623. [PMID: 38913621] [PMCID: PMC11195943] [DOI: 10.1371/journal.pone.0299623]
Abstract
BACKGROUND In medical imaging, the integration of deep-learning-based semantic segmentation algorithms with preprocessing techniques can reduce the need for human annotation and advance disease classification. Among established preprocessing techniques, Contrast Limited Adaptive Histogram Equalization (CLAHE) has demonstrated efficacy in improving segmentation algorithms across various modalities, such as X-ray and CT. However, there remains a demand for improved contrast-enhancement methods, considering the heterogeneity of datasets and the varying contrast across different anatomic structures. METHOD This study proposes a novel preprocessing technique, ps-KDE, and investigates its impact on deep learning algorithms that segment major organs in posterior-anterior chest X-rays. ps-KDE augments image contrast by substituting pixel values based on their normalized frequency across all images. We evaluate our approach on a U-Net architecture with a ResNet34 backbone pre-trained on ImageNet. Five separate models are trained to segment the heart, left lung, right lung, left clavicle, and right clavicle. RESULTS The model trained to segment the left lung using ps-KDE achieved a Dice score of 0.780 (SD = 0.13), while the model trained on CLAHE achieved a Dice score of 0.717 (SD = 0.19), p < 0.01. ps-KDE also appears more robust: for the left-lung model, CLAHE-based models misclassified right lungs in select test images. The algorithm for performing ps-KDE is available at https://github.com/wyc79/ps-KDE. DISCUSSION Our results suggest that ps-KDE offers advantages over current preprocessing techniques when segmenting certain lung regions. This could be beneficial in subsequent analyses such as disease classification and risk stratification.
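The pixel-substitution idea can be sketched with a plain histogram standing in for the kernel density estimate; this is a simplification of the paper's ps-KDE, and the choice to normalize by the modal frequency is my assumption for illustration.

```python
from collections import Counter

def frequency_substitute(images):
    """ps-KDE-style remapping sketched with a histogram in place of a KDE:
    every pixel value is replaced by the frequency of that value across
    the whole image set, normalized to [0, 1] by the modal frequency.
    Images are nested lists of hashable (e.g. integer) pixel values."""
    counts = Counter(p for img in images for row in img for p in row)
    peak = max(counts.values())
    return [[[counts[p] / peak for p in row] for row in img]
            for img in images]
```

The effect is that common intensities (e.g. large homogeneous backgrounds) map to high values while rare intensities are suppressed, which redistributes contrast dataset-wide rather than per image as CLAHE does.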
Affiliation(s)
- Yuanchen Wang
- Department of Biomedical Informatics, Harvard Medical School, Boston, Massachusetts, United States of America
- Yujie Guo
- Department of Biomedical Informatics, Harvard Medical School, Boston, Massachusetts, United States of America
- Ziqi Wang
- Department of Biomedical Informatics, Harvard Medical School, Boston, Massachusetts, United States of America
- Linzi Yu
- Department of Biomedical Informatics, Harvard Medical School, Boston, Massachusetts, United States of America
- Yujie Yan
- Department of Biomedical Informatics, Harvard Medical School, Boston, Massachusetts, United States of America
- Zifan Gu
- Department of Biomedical Informatics, Harvard Medical School, Boston, Massachusetts, United States of America
9.
Guo H, Somayajula SA, Hosseini R, Xie P. Improving image classification of gastrointestinal endoscopy using curriculum self-supervised learning. Sci Rep 2024; 14:6100. [PMID: 38480815] [PMCID: PMC10937990] [DOI: 10.1038/s41598-024-53955-8]
Abstract
Endoscopy, a widely used medical procedure for examining the gastrointestinal (GI) tract to detect potential disorders, poses challenges for manual diagnosis due to non-specific symptoms and difficulties in accessing affected areas. While supervised machine learning models have proven effective in assisting the clinical diagnosis of GI disorders, they are limited by the scarcity of image-label pairs created by medical experts. To address these limitations, we propose a curriculum self-supervised learning framework inspired by human curriculum learning. Our approach leverages the HyperKvasir dataset, which comprises 100k unlabeled GI images for pre-training and 10k labeled GI images for fine-tuning. With our proposed method, we achieved an impressive top-1 accuracy of 88.92% and an F1 score of 73.39%, a 2.1% increase over vanilla SimSiam in top-1 accuracy and a 1.9% increase in F1 score. The combination of self-supervised learning and a curriculum-based approach demonstrates the efficacy of our framework in advancing the diagnosis of GI disorders. Our study highlights the potential of curriculum self-supervised learning in utilizing unlabeled GI tract images to improve the diagnosis of GI disorders, paving the way for more accurate and efficient diagnosis in GI endoscopy.
Affiliation(s)
- Han Guo
- Department of Electrical and Computer Engineering, University of California, San Diego, San Diego, 92093, USA
- Sai Ashish Somayajula
- Department of Electrical and Computer Engineering, University of California, San Diego, San Diego, 92093, USA
- Ramtin Hosseini
- Department of Electrical and Computer Engineering, University of California, San Diego, San Diego, 92093, USA
- Pengtao Xie
- Department of Electrical and Computer Engineering, University of California, San Diego, San Diego, 92093, USA
10.
Yang Y, Zheng J, Guo P, Wu T, Gao Q, Zeng X, Chen Z, Zeng N, Ouyang Z, Guo Y, Chen H. Hemi-diaphragm detection of chest X-ray images based on convolutional neural network and graphics. J Xray Sci Technol 2024; 32:1273-1295. [PMID: 38995761] [DOI: 10.3233/xst-240108]
Abstract
BACKGROUND Chest X-rays (CXRs) are widely used to facilitate the diagnosis and treatment of critically ill and emergency patients in clinical practice. Accurate hemi-diaphragm detection based on postero-anterior (P-A) CXR images is crucial for assessing the diaphragm function of these patients to provide precision healthcare for vulnerable populations. OBJECTIVE An effective and accurate hemi-diaphragm detection method for P-A CXR images therefore urgently needs to be developed. METHODS This paper proposes an effective hemi-diaphragm detection method for P-A CXR images based on a convolutional neural network (CNN) and graphics. First, we develop a robust and standard CNN model of pathological lungs, trained on human P-A CXR images of normal and abnormal cases with multiple lung diseases, to extract lung fields from P-A CXR images. Second, we propose a novel localization method for the cardiophrenic angle, based on the two-dimensional projection morphology of the left and right lungs, for detecting the hemi-diaphragm. RESULTS The mean errors of the four key hemi-diaphragm points in lung field mask images extracted from static P-A CXR images based on five different segmentation models are 9.05, 7.19, 7.92, 7.27, and 6.73 pixels, respectively. For dynamic P-A CXR images, the mean errors based on these segmentation models are 5.50, 7.07, 4.43, 4.74, and 6.24 pixels, respectively. CONCLUSION Our proposed method can effectively perform hemi-diaphragm detection and may become an effective tool for assessing the diaphragm function of these vulnerable populations for precision healthcare.
Affiliation(s)
- Yingjian Yang
- Department of Radiological Research and Development, Shenzhen Lanmage Medical Technology Co., Ltd, Shenzhen, Guangdong, China
- Jie Zheng
- Department of Radiological Research and Development, Shenzhen Lanmage Medical Technology Co., Ltd, Shenzhen, Guangdong, China
- Peng Guo
- Department of Radiological Research and Development, Shenzhen Lanmage Medical Technology Co., Ltd, Shenzhen, Guangdong, China
- Tianqi Wu
- Department of Radiological Research and Development, Shenzhen Lanmage Medical Technology Co., Ltd, Shenzhen, Guangdong, China
- Qi Gao
- Neusoft Medical System Co., Ltd., Shenyang, Liaoning, China
- Xueqiang Zeng
- School of Applied Technology, Shenzhen University, Shenzhen, China
- Ziran Chen
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Nanrong Zeng
- School of Applied Technology, Shenzhen University, Shenzhen, China
- Zhanglei Ouyang
- Department of Radiological Research and Development, Shenzhen Lanmage Medical Technology Co., Ltd, Shenzhen, Guangdong, China
- Yingwei Guo
- School of Electrical and Information Engineering, Northeast Petroleum University, Daqing, China
- Huai Chen
- Department of Radiology, The Second Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
11.
Ghali R, Akhloufi MA. Vision Transformers for Lung Segmentation on CXR Images. SN Computer Science 2023; 4:414. [PMID: 37252339] [PMCID: PMC10206550] [DOI: 10.1007/s42979-023-01848-4]
Abstract
Accurate segmentation of the lungs in CXR images is the basis of an automated CXR image analysis system. It helps radiologists detect lung areas and subtle signs of disease, improving the diagnostic process for patients. However, precise semantic segmentation of the lungs is considered challenging due to the presence of the rib cage edges, wide variation in lung shape, and lungs affected by disease. In this paper, we address the problem of lung segmentation in healthy and unhealthy CXR images. Five models were developed and used to detect and segment lung regions. Two loss functions and three benchmark datasets were employed to evaluate these models. Experimental results showed that the proposed models were able to extract salient global and local features from the input CXR images. The best-performing model achieved an F1 score of 97.47%, outperforming recently published models. The models proved their ability to separate lung regions from the rib cage and clavicle edges and to segment varying lung shapes depending on age and gender, as well as challenging cases of lungs affected by anomalies such as tuberculosis and the presence of nodules.
Affiliation(s)
- Rafik Ghali
- Perception, Robotics, and Intelligent Machines (PRIME), Department of Computer Science, Université de Moncton, Moncton, NB E1A 3E9 Canada
- Moulay A. Akhloufi
- Perception, Robotics, and Intelligent Machines (PRIME), Department of Computer Science, Université de Moncton, Moncton, NB E1A 3E9 Canada
12.
Chou HH, Lin JY, Shen GT, Huang CY. Validation of an Automated Cardiothoracic Ratio Calculation for Hemodialysis Patients. Diagnostics (Basel) 2023; 13:1376. [PMID: 37189477] [DOI: 10.3390/diagnostics13081376]
Abstract
BACKGROUND Cardiomegaly is associated with poor clinical outcomes and is assessed by routine monitoring of the cardiothoracic ratio (CTR) on chest X-rays (CXRs). Judgment of the margins of the heart and lungs is subjective and may vary between operators. METHODS Patients aged > 19 years in our hemodialysis unit from March 2021 to October 2021 were enrolled. The borders of the lungs and heart on CXRs were labeled by two nephrologists as the ground truth (nephrologist-defined mask). We implemented AlbuNet-34, a U-Net variant, to predict the heart and lung margins from CXR images and automatically calculate the CTR. RESULTS The coefficient of determination (R2) obtained with the neural network model was 0.96, compared with an R2 of 0.90 obtained by nurse practitioners. The mean difference between the CTRs calculated by the nurse practitioners and senior nephrologists was 1.52 ± 1.46%, and that between the neural network model and the nephrologists was 0.83 ± 0.87% (p < 0.001). The mean CTR calculation took 85 s with the manual method and less than 2 s with the automated method (p < 0.001). CONCLUSIONS Our study confirmed the validity of automated CTR calculation. By achieving high accuracy and saving time, our model can be implemented in clinical practice.
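The coefficient of determination used here to validate the automated CTRs against the reference readings is standard; a minimal sketch (function name is mine):

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination R^2 between reference values
    (e.g. nephrologist-measured CTRs) and predicted values:
    1 - (residual sum of squares) / (total sum of squares)."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot
```

R^2 = 1 means perfect agreement with the reference; 0 means the predictions explain no more variance than the reference mean alone.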
Affiliation(s)
- Hsin-Hsu Chou
- Department of Pediatrics, Ditmanson Medical Foundation Chia-Yi Christian Hospital, Chiayi 600566, Taiwan
- Department of Bioinformatics and Medical Engineering, Asia University, Taichung 413305, Taiwan
- Jin-Yi Lin
- Innovation and Incubation Center, Ditmanson Medical Foundation Chia-Yi Christian Hospital, Chiayi 600566, Taiwan
- Guan-Ting Shen
- Innovation and Incubation Center, Ditmanson Medical Foundation Chia-Yi Christian Hospital, Chiayi 600566, Taiwan
- Chih-Yuan Huang
- Division of Nephrology, Department of Internal Medicine, Ditmanson Medical Foundation Chia-Yi Christian Hospital, Chiayi 600566, Taiwan
- Department of Sport Management, College of Recreation and Health Management, Chia Nan University of Pharmacy and Science, Tainan 717301, Taiwan
13
Ait Nasser A, Akhloufi MA. A Review of Recent Advances in Deep Learning Models for Chest Disease Detection Using Radiography. Diagnostics (Basel) 2023; 13:159. [PMID: 36611451 PMCID: PMC9818166 DOI: 10.3390/diagnostics13010159] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2022] [Revised: 12/21/2022] [Accepted: 12/26/2022] [Indexed: 01/05/2023] Open
Abstract
Chest X-ray radiography (CXR) is among the most frequently used medical imaging modalities and is of preeminent value in detecting multiple life-threatening diseases. Radiologists can visually inspect CXR images for the presence of disease, but most thoracic diseases have very similar patterns, which makes diagnosis prone to human error and misdiagnosis. Computer-aided detection (CAD) of lung diseases in CXR images is therefore a popular topic in medical imaging research, and machine learning (ML) and deep learning (DL) provide techniques to make this task faster and more efficient. Numerous experiments in the diagnosis of various diseases have demonstrated the potential of these techniques. In comparison to previous reviews, our study describes in detail several publicly available CXR datasets for different diseases. It presents an overview of recent deep learning models using CXR images to detect chest diseases, such as VGG, ResNet, DenseNet, Inception, EfficientNet, RetinaNet, and ensemble learning methods that combine multiple models. It summarizes the techniques used for CXR image preprocessing (enhancement, segmentation, bone suppression, and data augmentation) to improve image quality and address data imbalance, as well as the use of DL models to speed up the diagnosis process. This review also discusses the challenges in the published literature and highlights the importance of interpretability and explainability for better understanding the DL models' detections. In addition, it outlines directions for researchers to help develop more effective models for early and automatic detection of chest diseases.
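Among the preprocessing steps the review covers, contrast enhancement is the simplest to illustrate. The snippet below is a generic sketch of global histogram equalization (one common enhancement technique), not a method taken from any particular paper reviewed: it remaps 8-bit intensities so their cumulative distribution becomes roughly uniform.

```python
import numpy as np

def equalize_histogram(img: np.ndarray) -> np.ndarray:
    """Global histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)   # intensity histogram
    cdf = hist.cumsum()                              # cumulative distribution
    cdf_min = cdf[cdf > 0].min()
    # Build a lookup table mapping each gray level to its equalized value.
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]

# A low-contrast ramp confined to [100, 140) spreads to the full [0, 255] range.
low_contrast = np.tile(np.arange(100, 140, dtype=np.uint8), (8, 1))
out = equalize_histogram(low_contrast)
print(out.min(), out.max())  # 0 255
```

Adaptive variants (e.g. CLAHE) operate the same way on local tiles instead of the whole image.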
Affiliation(s)
- Moulay A. Akhloufi
- Perception, Robotics and Intelligent Machines Research Group (PRIME), Department of Computer Science, Université de Moncton, Moncton, NB E1C 3E9, Canada
14
Prasitpuriprecha C, Jantama SS, Preeprem T, Pitakaso R, Srichok T, Khonjun S, Weerayuth N, Gonwirat S, Enkvetchakul P, Kaewta C, Nanthasamroeng N. Drug-Resistant Tuberculosis Treatment Recommendation, and Multi-Class Tuberculosis Detection and Classification Using Ensemble Deep Learning-Based System. Pharmaceuticals (Basel) 2022; 16:13. [PMID: 36678508 PMCID: PMC9864877 DOI: 10.3390/ph16010013] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/21/2022] [Revised: 12/14/2022] [Accepted: 12/17/2022] [Indexed: 12/25/2022] Open
Abstract
This research develops a TB/non-TB detection and drug-resistance categorization diagnosis decision support system (TB-DRC-DSS). The model detects both TB-negative and TB-positive samples, classifies drug-resistant strains, and provides treatment recommendations. It is built as a heterogeneous deep learning ensemble of CNN architectures, including EfficientNetB7, MobileNetV2, and DenseNet121, combined with effective image segmentation, augmentation, and decision fusion techniques to improve classification efficacy. A web application serves as the platform for determining whether a patient is positive or negative for tuberculosis and for classifying several types of drug resistance. The constructed model was evaluated and compared with current methods described in the literature. It was assessed using two datasets of chest X-ray (CXR) images collected from the references; this collection includes the Portal dataset, the Montgomery County dataset, the Shenzhen dataset, and the Kaggle dataset, comprising 7008 images in total. The data were divided into a training subset (80%) and a test subset (20%). The computational results revealed that classification accuracy for DS-TB against DR-TB improved by an average of 43.3% compared with other methods, and categorization between DS-TB and MDR-TB, DS-TB and XDR-TB, and MDR-TB and XDR-TB was more accurate than other methods by an average of 28.1%, 6.2%, and 9.4%, respectively. The accuracy of the embedded multiclass model in the web application is 92.6% on the test dataset and 92.8% on a random subset selected from the aggregate dataset. In conclusion, 31 medical staff members have evaluated and used the web application, giving a final user preference score of 9.52 out of a possible 10.
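The decision fusion step mentioned above can be sketched as probability averaging across ensemble members. This is a generic illustration of heterogeneous-ensemble fusion under assumed logits, not the paper's exact fusion rule, and the class ordering shown is hypothetical.

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def fuse_predictions(model_logits: list) -> np.ndarray:
    """Average each model's class probabilities, then take the argmax."""
    probs = np.mean([softmax(l) for l in model_logits], axis=0)
    return probs.argmax(axis=-1)

# Three hypothetical backbones scoring one CXR over 4 classes
# (e.g. TB-negative, DS-TB, MDR-TB, XDR-TB -- ordering assumed).
logits_a = np.array([[0.2, 2.1, 0.5, 0.1]])
logits_b = np.array([[0.1, 1.8, 1.9, 0.3]])
logits_c = np.array([[0.0, 2.5, 0.4, 0.2]])
print(fuse_predictions([logits_a, logits_b, logits_c]))  # [1]
```

Averaging probabilities (soft voting) rather than hard labels lets a confident model outvote two weakly uncertain ones, which is one common motivation for this kind of fusion.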
Affiliation(s)
- Chutinun Prasitpuriprecha
- Department of Biopharmacy, Faculty of Pharmaceutical Sciences, Ubon Ratchathani University, Ubon Ratchathani 34190, Thailand
- Sirima Suvarnakuta Jantama
- Department of Biopharmacy, Faculty of Pharmaceutical Sciences, Ubon Ratchathani University, Ubon Ratchathani 34190, Thailand
- Thanawadee Preeprem
- Department of Biopharmacy, Faculty of Pharmaceutical Sciences, Ubon Ratchathani University, Ubon Ratchathani 34190, Thailand
- Rapeepan Pitakaso
- Department of Industrial Engineering, Faculty of Engineering, Ubon Ratchathani University, Ubon Ratchathani 34190, Thailand
- Thanatkij Srichok
- Department of Industrial Engineering, Faculty of Engineering, Ubon Ratchathani University, Ubon Ratchathani 34190, Thailand
- Surajet Khonjun
- Department of Industrial Engineering, Faculty of Engineering, Ubon Ratchathani University, Ubon Ratchathani 34190, Thailand
- Nantawatana Weerayuth
- Department of Mechanical Engineering, Faculty of Engineering, Ubon Ratchathani University, Ubon Ratchathani 34190, Thailand
- Sarayut Gonwirat
- Department of Computer Engineering and Automation, Faculty of Engineering and Industrial Technology, Kalasin University, Kalasin 46000, Thailand
- Prem Enkvetchakul
- Department of Information Technology, Faculty of Science, Buriram University, Buriram 31000, Thailand
- Chutchai Kaewta
- Department of Computer Science, Faculty of Computer Science, Ubon Ratchathani Rajabhat University, Ubon Ratchathani 34000, Thailand
- Natthapong Nanthasamroeng
- Department of Engineering Technology, Faculty of Industrial Technology, Ubon Ratchathani Rajabhat University, Ubon Ratchathani 34000, Thailand
15
Chiu HY, Peng RHT, Lin YC, Wang TW, Yang YX, Chen YY, Wu MH, Shiao TH, Chao HS, Chen YM, Wu YT. Artificial Intelligence for Early Detection of Chest Nodules in X-ray Images. Biomedicines 2022; 10:2839. [PMID: 36359360 PMCID: PMC9687210 DOI: 10.3390/biomedicines10112839] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2022] [Revised: 11/02/2022] [Accepted: 11/04/2022] [Indexed: 09/06/2024] Open
Abstract
Early detection increases overall survival among patients with lung cancer. This study formulated a machine learning method that processes chest X-rays (CXRs) to detect lung cancer early. After preprocessing our dataset with monochrome and brightness correction, we applied different preprocessing methods to enhance image contrast and used U-Net to perform lung segmentation. We used 559 CXRs, each with a single lung nodule labeled by experts, to train a You Only Look Once version 4 (YOLOv4) deep-learning architecture to detect lung nodules. On a testing dataset of 100 CXRs from patients at Taipei Veterans General Hospital and 154 CXRs from the Japanese Society of Radiological Technology dataset, the AI model using a combination of preprocessing methods performed best, with a sensitivity of 79% at 3.04 false positives per image. We then tested the AI on 383 sets of CXRs obtained in the 5 years prior to lung cancer diagnoses. The median time from detection to diagnosis was 46 (range 3-523) days for radiologists assisted by AI, longer than the 8 (range 0-263) days for radiologists alone, indicating that nodules were flagged earlier with AI assistance. The AI model can assist radiologists in the early detection of lung nodules.
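The sensitivity and false-positives-per-image figures quoted above are standard detection metrics. The sketch below shows one common way to compute them from bounding boxes, assuming greedy IoU matching at a 0.5 threshold; the paper's exact matching rule is not stated, so treat this as illustrative.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def detection_metrics(preds_per_image, gts_per_image, thr=0.5):
    """Sensitivity and false positives per image, matching each ground-truth
    nodule to at most one prediction with IoU >= thr."""
    tp = fp = n_gt = 0
    for preds, gts in zip(preds_per_image, gts_per_image):
        n_gt += len(gts)
        unmatched = list(gts)
        for p in preds:
            hit = next((g for g in unmatched if iou(p, g) >= thr), None)
            if hit is not None:
                unmatched.remove(hit)  # ground truth consumed by this match
                tp += 1
            else:
                fp += 1
    return tp / n_gt if n_gt else 0.0, fp / len(preds_per_image)

# Two toy images: one hit plus one spurious box, then one complete miss.
preds = [[(10, 10, 50, 50), (80, 80, 90, 90)], [(0, 0, 20, 20)]]
gts = [[(12, 12, 52, 52)], [(60, 60, 80, 80)]]
sens, fppi = detection_metrics(preds, gts)
print(sens, fppi)  # 0.5 1.0
```

Sweeping the detector's confidence threshold and re-running this computation traces out the FROC curve that nodule-detection studies typically report.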
Affiliation(s)
- Hwa-Yen Chiu
- Department of Chest Medicine, Taipei Veterans General Hospital, Taipei 112, Taiwan
- Institute of Biophotonics, National Yang Ming Chiao Tung University, Taipei 112, Taiwan
- Division of Internal Medicine, Hsinchu Branch, Taipei Veterans General Hospital, Hsinchu 310, Taiwan
- School of Medicine, National Yang Ming Chiao Tung University, Taipei 112, Taiwan
- Rita Huan-Ting Peng
- Institute of Biophotonics, National Yang Ming Chiao Tung University, Taipei 112, Taiwan
- Yi-Chian Lin
- Institute of Biophotonics, National Yang Ming Chiao Tung University, Taipei 112, Taiwan
- Ting-Wei Wang
- Institute of Biophotonics, National Yang Ming Chiao Tung University, Taipei 112, Taiwan
- Ya-Xuan Yang
- Institute of Biophotonics, National Yang Ming Chiao Tung University, Taipei 112, Taiwan
- Ying-Ying Chen
- Department of Chest Medicine, Taipei Veterans General Hospital, Taipei 112, Taiwan
- Department of Critical Care Medicine, Taiwan Adventist Hospital, Taipei 105, Taiwan
- Mei-Han Wu
- School of Medicine, National Yang Ming Chiao Tung University, Taipei 112, Taiwan
- Department of Medical Imaging, Cheng Hsin General Hospital, Taipei 112, Taiwan
- Department of Radiology, Taipei Veterans General Hospital, Taipei 112, Taiwan
- Tsu-Hui Shiao
- Department of Chest Medicine, Taipei Veterans General Hospital, Taipei 112, Taiwan
- School of Medicine, National Yang Ming Chiao Tung University, Taipei 112, Taiwan
- Heng-Sheng Chao
- Department of Chest Medicine, Taipei Veterans General Hospital, Taipei 112, Taiwan
- Institute of Biomedical Informatics, National Yang Ming Chiao Tung University, Taipei 112, Taiwan
- Yuh-Min Chen
- Department of Chest Medicine, Taipei Veterans General Hospital, Taipei 112, Taiwan
- School of Medicine, National Yang Ming Chiao Tung University, Taipei 112, Taiwan
- Yu-Te Wu
- Institute of Biophotonics, National Yang Ming Chiao Tung University, Taipei 112, Taiwan
- Brain Research Center, National Yang Ming Chiao Tung University, Taipei 112, Taiwan
16
Liu W, Yu L, Luo J. A hybrid attention-enhanced DenseNet neural network model based on improved U-Net for rice leaf disease identification. FRONTIERS IN PLANT SCIENCE 2022; 13:922809. [PMID: 36330248 PMCID: PMC9623092 DOI: 10.3389/fpls.2022.922809] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/18/2022] [Accepted: 09/26/2022] [Indexed: 06/16/2023]
Abstract
Rice is a staple for billions of people worldwide, and rice disease control has been a major focus of research in agriculture. In this study, a new attention-enhanced DenseNet neural network model is proposed, which combines a lesion feature extractor based on a region of interest (ROI) extraction algorithm with a DenseNet classification model for accurate recognition of the extracted lesion feature maps. The ROI extraction algorithm highlights the lesion area of rice leaves, making the neural network classification model pay more attention to the lesion area. Compared with a single rice disease classification model, the classification model combined with the ROI extraction algorithm improves recognition accuracy, and the proposed model achieves 96% accuracy for rice leaf disease identification.
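The abstract does not specify how the ROI extraction works, so the snippet below is a purely hypothetical sketch of the general idea: suppress healthy green tissue with a simple color rule so that only candidate lesion pixels (brown/yellow spots, where green no longer dominates) reach the classifier.

```python
import numpy as np

def extract_lesion_roi(leaf_rgb: np.ndarray, green_margin: int = 20) -> np.ndarray:
    """Zero out pixels where the green channel clearly dominates (healthy
    tissue); keep the rest as candidate lesion regions. Hypothetical rule."""
    r = leaf_rgb[..., 0].astype(int)
    g = leaf_rgb[..., 1].astype(int)
    lesion = g < r + green_margin  # green not dominant => possible lesion
    return leaf_rgb * lesion[..., None].astype(leaf_rgb.dtype)

# 1x2 toy image: a healthy green pixel and a brown lesion-like pixel.
img = np.array([[[40, 180, 40], [150, 90, 40]]], dtype=np.uint8)
roi = extract_lesion_roi(img)
print(roi[0, 0].tolist(), roi[0, 1].tolist())  # [0, 0, 0] [150, 90, 40]
```

A real pipeline would follow such a mask with morphological cleanup and cropping before feeding the ROI to the DenseNet classifier.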
Affiliation(s)
- Wufeng Liu
- School of Artificial Intelligence and Big Data, Henan University of Technology, Zhengzhou, China
- College of Electrical Engineering, Henan University of Technology, Zhengzhou, China
- Liang Yu
- College of Electrical Engineering, Henan University of Technology, Zhengzhou, China
- Jiaxin Luo
- College of Electrical Engineering, Henan University of Technology, Zhengzhou, China