1. Ding H, Chen X, Wang H, Zhang L, Wang F, He L. Identifying immunodeficiency status in children with pulmonary tuberculosis: using radiomics approach based on un-enhanced chest computed tomography. Transl Pediatr 2023;12:2191-2202. [PMID: 38197102] [PMCID: PMC10772833] [DOI: 10.21037/tp-23-309] [Received: 05/24/2023] [Accepted: 11/02/2023]
Abstract
Background Children with primary immunodeficiency diseases (PIDs) are particularly vulnerable to infection with Mycobacterium tuberculosis (Mtb). Chest computed tomography (CT) is an important examination for diagnosing pulmonary tuberculosis (PTB), and there are some differences between primary immunocompromised and immunocompetent cases of PTB. This study therefore aimed to use radiomics analysis based on un-enhanced CT to identify immunodeficiency status in children with PTB. Methods This retrospective study enrolled 173 patients with a diagnosis of PTB and known immunodeficiency status. Based on that status, patients were divided into PID (n=72) and non-PID (n=101) groups, and the samples were randomly split into training and testing groups at a ratio of 3:1. Regions of interest were obtained by segmenting lung lesions on un-enhanced CT images, from which radiomics features were extracted. The optimal radiomics features were identified after dimensionality reduction in the training group, and a logistic regression algorithm was used to establish the radiomics model, which was then validated in the training and testing groups. Diagnostic efficiency was evaluated using the area under the receiver operating characteristic curve (AUC), sensitivity, specificity, precision, accuracy, F1 score, calibration curve, and decision curve. Results The radiomics model was constructed using nine optimal features. In the training set, the model achieved an AUC of 0.837, sensitivity of 0.783, specificity of 0.780, and F1 score of 0.749. Cross-validation in the training set showed an AUC of 0.774, sensitivity of 0.834, specificity of 0.720, and F1 score of 0.749. In the test set, the model achieved an AUC of 0.746, sensitivity of 0.722, specificity of 0.692, and F1 score of 0.823. Calibration curves indicated strong predictive performance, and decision curve analysis demonstrated clinical utility.
Conclusions The CT-based radiomics model demonstrates good discriminative efficacy and shows promise for accurately identifying immunodeficiency status in children with PTB.
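The pipeline this abstract describes (feature extraction from segmented lesions, dimensionality reduction, then a logistic-regression classifier scored by AUC) can be sketched in a few lines. This is an illustrative sketch only: the data are synthetic, and correlation ranking stands in for the study's unspecified dimensionality-reduction step.

```python
import numpy as np

def select_top_k(X, y, k):
    """Rank features by absolute correlation with the label and keep the
    k strongest -- a simple stand-in for the dimensionality-reduction step."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    corr = (Xc * yc[:, None]).sum(axis=0) / (
        np.sqrt((Xc ** 2).sum(axis=0) * (yc ** 2).sum()) + 1e-12)
    return np.argsort(-np.abs(corr))[:k]

def fit_logistic(X, y, lr=0.1, steps=2000):
    """Plain gradient-descent logistic regression; returns (weights, bias)."""
    w = np.zeros(X.shape[1]); b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - y
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

def auc_score(y, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores)); ranks[order] = np.arange(1, len(scores) + 1)
    n1 = y.sum(); n0 = len(y) - n1
    return (ranks[y == 1].sum() - n1 * (n1 + 1) / 2) / (n0 * n1)

rng = np.random.default_rng(0)
n, d = 173, 50                      # 173 patients, 50 synthetic candidate features
X = rng.normal(size=(n, d))
y = (X[:, :3].sum(axis=1) + rng.normal(scale=0.5, size=n) > 0).astype(float)

idx = select_top_k(X, y, k=9)       # the study kept nine optimal features
Xs = (X[:, idx] - X[:, idx].mean(0)) / X[:, idx].std(0)
w, b = fit_logistic(Xs, y)
print(round(auc_score(y, Xs @ w + b), 3))
```

In practice the radiomics features would come from a dedicated extractor applied to the segmented lesion masks, and the model would be evaluated on a held-out split as the study does.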
Affiliation(s)
- Hao Ding
- Department of Radiology, Children’s Hospital of Chongqing Medical University, Chongqing, China
- National Clinical Research Center for Child Health and Disorders, Ministry of Education Key Laboratory of Child Development and Disorders, Chongqing Key Laboratory of Pediatrics, Chongqing, China
- Xin Chen
- Department of Radiology, Children’s Hospital of Chongqing Medical University, Chongqing, China
- National Clinical Research Center for Child Health and Disorders, Ministry of Education Key Laboratory of Child Development and Disorders, Chongqing Key Laboratory of Pediatrics, Chongqing, China
- Haoru Wang
- Department of Radiology, Children’s Hospital of Chongqing Medical University, Chongqing, China
- National Clinical Research Center for Child Health and Disorders, Ministry of Education Key Laboratory of Child Development and Disorders, Chongqing Key Laboratory of Pediatrics, Chongqing, China
- Li Zhang
- Department of Radiology, Children’s Hospital of Chongqing Medical University, Chongqing, China
- National Clinical Research Center for Child Health and Disorders, Ministry of Education Key Laboratory of Child Development and Disorders, Chongqing Key Laboratory of Pediatrics, Chongqing, China
- Fang Wang
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Ling He
- Department of Radiology, Children’s Hospital of Chongqing Medical University, Chongqing, China
- National Clinical Research Center for Child Health and Disorders, Ministry of Education Key Laboratory of Child Development and Disorders, Chongqing Key Laboratory of Pediatrics, Chongqing, China
2. Wollek A, Hyska S, Sabel B, Ingrisch M, Lasser T. WindowNet: Learnable Windows for Chest X-ray Classification. J Imaging 2023;9:270. [PMID: 38132688] [PMCID: PMC10743662] [DOI: 10.3390/jimaging9120270] [Received: 09/05/2023] [Revised: 11/20/2023] [Accepted: 12/04/2023]
Abstract
Public chest X-ray (CXR) data sets are commonly compressed to a lower bit depth to reduce their size, potentially hiding subtle diagnostic features. In contrast, radiologists apply a windowing operation to the uncompressed image to enhance such subtle features. While windowing has been shown to improve classification performance on computed tomography (CT) images, its impact on CXR classification remains unclear. In this study, we show that windowing strongly improves the CXR classification performance of machine learning models and propose WindowNet, a model that learns multiple optimal window settings. On the MIMIC data set, our model achieved an average AUC of 0.812, compared with 0.759 for a commonly used architecture without windowing capabilities.
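The windowing operation the abstract refers to is a clamp-and-rescale of intensities around a center/width pair; WindowNet makes such center/width settings learnable parameters. A minimal sketch of the fixed-window version (the synthetic HU values below are illustrative, not from the paper):

```python
import numpy as np

def apply_window(hu, center, width):
    """Clamp intensities to [center - width/2, center + width/2] and
    rescale to [0, 1] -- the windowing operation radiologists apply."""
    lo, hi = center - width / 2.0, center + width / 2.0
    return (np.clip(hu, lo, hi) - lo) / (hi - lo)

# Example: a soft-tissue-like window applied to synthetic HU values.
img = np.array([-1000.0, -200.0, 40.0, 300.0, 1500.0])
out = apply_window(img, center=50, width=400)
print(out)   # values outside the window saturate at 0 or 1
```

Learning the window amounts to treating `center` and `width` as trainable parameters of the network's first layer, with several windows applied in parallel.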
Affiliation(s)
- Alessandro Wollek
- Munich Institute of Biomedical Engineering, TUM School of Computation, Information, and Technology, Technical University of Munich, 80333 Munich, Germany
- Sardi Hyska
- Department of Radiology, University Hospital Ludwig-Maximilians-University, 81377 Munich, Germany
- Bastian Sabel
- Department of Radiology, University Hospital Ludwig-Maximilians-University, 81377 Munich, Germany
- Michael Ingrisch
- Department of Radiology, University Hospital Ludwig-Maximilians-University, 81377 Munich, Germany
- Tobias Lasser
- Munich Institute of Biomedical Engineering, TUM School of Computation, Information, and Technology, Technical University of Munich, 80333 Munich, Germany
3. Duh MM, Torra-Ferrer N, Riera-Marín M, Cumelles D, Rodríguez-Comas J, García López J, Fernández Planas MT. Deep Learning to Detect Pancreatic Cystic Lesions on Abdominal Computed Tomography Scans: Development and Validation Study. JMIR AI 2023;2:e40702. [PMID: 38875547] [PMCID: PMC11041052] [DOI: 10.2196/40702] [Received: 07/01/2022] [Revised: 09/02/2022] [Accepted: 11/11/2022]
Abstract
BACKGROUND Pancreatic cystic lesions (PCLs) are frequent and underreported incidental findings on computed tomography (CT) scans and can evolve into pancreatic cancer, the most lethal cancer, with a life expectancy of less than 5 months. OBJECTIVE The aim of this study was to develop and validate an artificial deep neural network (attention gate U-Net, also named "AGNet") for automated detection of PCLs. This kind of technology can help radiologists cope with the increasing demand for cross-sectional imaging tests and increase the number of incidentally detected PCLs, thus improving the early detection of pancreatic cancer. METHODS We adapted and evaluated an algorithm based on an attention gate U-Net architecture for automated detection of PCLs on CT scans. A total of 335 abdominal CTs with PCLs and control cases were manually segmented in 3D by 2 radiologists with over 10 years of experience, in consensus with a board-certified radiologist specialized in abdominal radiology. This information was used to train a segmentation network followed by a postprocessing pipeline that filtered the network's output and applied physical constraints, such as the expected position of the pancreas, to minimize the number of false positives. RESULTS Of the 335 studies included, 297 had a PCL, including serous cystadenoma, intraductal papillary mucinous neoplasia, mucinous cystic neoplasm, and pseudocysts. The Shannon Index of the chosen data set was 0.991, with an evenness of 0.902. The mean sensitivity obtained in the detection of these lesions was 93.1% (SD 0.1%), and the specificity was 81.8% (SD 0.1%). CONCLUSIONS This study shows good performance of an automated deep neural network in the detection of PCLs on both noncontrast- and contrast-enhanced abdominal CT scans.
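The Shannon Index and evenness quoted above measure how balanced the lesion-type mix of the data set is. A minimal sketch of both quantities (the class counts below are made up for illustration; the paper does not report raw counts):

```python
import math

def shannon_index(counts):
    """Shannon diversity H = -sum(p_i * ln p_i) over class proportions."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c)

def evenness(counts):
    """Pielou's evenness J = H / ln(S), where S is the number of classes
    present; J = 1 for a perfectly balanced data set."""
    s = sum(1 for c in counts if c)
    return shannon_index(counts) / math.log(s)

# Perfectly balanced four-class example: H = ln(4), J = 1.
print(shannon_index([10, 10, 10, 10]), evenness([10, 10, 10, 10]))
```

Reporting these alongside sensitivity/specificity helps show that performance is not an artifact of one dominant lesion type.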
Affiliation(s)
- Maria Montserrat Duh
- Department of Radiology, Consorci Sanitari del Maresme (Hospital de Mataró), Mataró, Spain
- Neus Torra-Ferrer
- Department of Radiology, Consorci Sanitari del Maresme (Hospital de Mataró), Mataró, Spain
- Dídac Cumelles
- Scientific and Technical Department, Sycai Technologies SL, Barcelona, Spain
- Javier García López
- Scientific and Technical Department, Sycai Technologies SL, Barcelona, Spain
4. Automated placental abruption identification using semantic segmentation, quantitative features, SVM, ensemble and multi-path CNN. Heliyon 2023;9:e13577. [PMID: 36852023] [PMCID: PMC9957707] [DOI: 10.1016/j.heliyon.2023.e13577] [Received: 05/16/2022] [Revised: 01/31/2023] [Accepted: 02/02/2023]
Abstract
The placenta is a fundamental organ throughout pregnancy, and fetal health is closely related to its proper function. Because of the importance of the placenta, any suspicious placental condition requires ultrasound image investigation. In this paper, we propose an automated method for processing fetal ultrasonography images to identify placental abruption using machine learning. Placental imaging characteristics serve as semantic identifiers of the placental region, as distinguished from the amniotic fluid and hard organs. Quantitative feature extraction is applied to the automatically identified placental regions to assign a vector of optical features to each ultrasonographic image. In the first classification step, two methods, a kernel-based Support Vector Machine (SVM) and a decision-tree ensemble classifier, are developed and compared for identification of abruption cases and controls. Recursive Feature Elimination (RFE) is applied to optimize the feature vector for the best performance of each classifier. In the second step, the deep learning classifiers multi-path ResNet-50 and Inception-V3 are used in combination with RFE. The resulting performances are compared to determine the best classification method for identifying abruption status. The best results were achieved by the optimized ResNet-50, with an accuracy of 82.88% ± SD 1.42% in identifying placental abruption on the testing dataset. These results show that it is possible to construct an automated analysis method with acceptable performance for the detection of placental abruption based on ultrasound images.
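Recursive Feature Elimination, used in both classification steps above, repeatedly fits an estimator and discards the weakest feature. A minimal sketch with an ordinary least-squares fit standing in for the paper's SVM/ensemble estimators, on synthetic data:

```python
import numpy as np

def rfe(X, y, n_keep):
    """Recursive Feature Elimination: repeatedly fit a linear model and
    drop the feature whose coefficient has the smallest magnitude.
    A least-squares fit stands in for the SVM/ensemble estimators."""
    active = list(range(X.shape[1]))
    while len(active) > n_keep:
        w, *_ = np.linalg.lstsq(X[:, active], y, rcond=None)
        active.pop(int(np.argmin(np.abs(w))))
    return active

# Synthetic demo: only features 0 and 3 actually drive the target.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))
y = 2.0 * X[:, 0] + 1.5 * X[:, 3] + rng.normal(scale=0.1, size=200)
print(sorted(rfe(X, y, n_keep=2)))   # → [0, 3]
```

Note that comparing coefficient magnitudes assumes comparably scaled features, which is why RFE is normally run on standardized inputs.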
5. Quantitative Radiomic Features From Computed Tomography Can Predict Pancreatic Cancer up to 36 Months Before Diagnosis. Clin Transl Gastroenterol 2022;14:e00548. [PMID: 36434803] [PMCID: PMC9875961] [DOI: 10.14309/ctg.0000000000000548] [Received: 03/18/2022] [Accepted: 10/18/2022]
Abstract
INTRODUCTION Pancreatic cancer is the third leading cause of cancer deaths among men and women in the United States. We aimed to detect early changes on computed tomography (CT) images associated with pancreatic ductal adenocarcinoma (PDAC) based on quantitative imaging features (QIFs) for patients with and without chronic pancreatitis (CP). METHODS Adults 18 years and older diagnosed with PDAC in 2008-2018 were identified. Their CT scans 3 months-3 years before the diagnosis date were matched to up to 2 scans of controls. The pancreas was automatically segmented using a previously developed algorithm. One hundred eleven QIFs were extracted. The data set was randomly split for training/validation. Neighborhood and principal component analyses were applied to select the most important features. A conditional support vector machine was used to develop prediction algorithms separately for patients with and without CP. The computer labels were compared with manually reviewed CT images 2-3 years before the index date in 19 cases and 19 controls. RESULTS Two hundred twenty-seven of 554 scans of non-CP cancer cases/controls and 70 of 140 scans of CP cancer cases/controls were included (average age 71 and 68 years, 51% and 44% females for non-CP patients and patients with CP, respectively). The QIF-based algorithms varied based on CP status. For non-CP patients, accuracy measures were 94%-95% and area under the curve (AUC) measures were 0.98-0.99. Sensitivity, specificity, positive predictive value, and negative predictive value were in the ranges of 88%-91%, 96%-98%, 91%-95%, and 94%-96%, respectively. QIFs on CT examinations within 2-3 years before the index date also had very high predictive accuracy (accuracy 95%-98%; AUC 0.99-1.00). The QIF-based algorithm outperformed manual rereview of images for determination of PDAC risk. For patients with CP, the algorithms predicted PDAC perfectly (accuracy 100% and AUC 1.00). 
DISCUSSION QIFs can accurately predict PDAC for both non-CP patients and patients with CP on CT imaging and represent promising biomarkers for early detection of pancreatic cancer.
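The four operating-point measures reported above (sensitivity, specificity, PPV, NPV) all derive from the same confusion counts. A small sketch with illustrative counts (the paper reports ranges, not raw counts):

```python
def confusion_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV, and NPV from the four confusion
    counts of a binary classifier."""
    return {
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Illustrative counts only, not from the study.
m = confusion_metrics(tp=90, fp=4, tn=96, fn=10)
print(m["sensitivity"], m["specificity"])   # 0.9 0.96
```

PPV and NPV, unlike sensitivity and specificity, depend on the case/control mix, which is why matched case-control designs like this one report all four.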
6. Kim KD, Cho K, Kim M, Lee KH, Lee S, Lee SM, Lee KH, Kim N. Enhancing deep learning based classifiers with inpainting anatomical side markers (L/R markers) for multi-center trials. Comput Methods Programs Biomed 2022;220:106705. [PMID: 35462346] [DOI: 10.1016/j.cmpb.2022.106705] [Received: 10/13/2021] [Revised: 02/14/2022] [Accepted: 02/20/2022]
Abstract
BACKGROUND AND OBJECTIVE The protocol for placing anatomical side markers (L/R markers) in chest radiographs varies from one hospital or department to another. However, the markers carry strong signals that a deep learning-based classifier can exploit to predict diseases. We aimed to enhance the performance of deep learning-based classifiers on multi-center datasets by inpainting the L/R markers. METHODS The L/R marker was detected using the EfficientDet detection network, and only the detected regions were inpainted using a generative adversarial network (GAN). To analyze the effect of inpainting in detail, deep learning-based classifiers were trained on original images, marker-inpainted images, and original images clipped using the min-max value of the marker-inpainted images. Binary classification, multi-class classification, and multi-task learning with segmentation and classification were developed and evaluated. The performances of the networks on internal and external validation datasets were compared using DeLong's test for two correlated receiver operating characteristic (ROC) curves in binary classification and the Stuart-Maxwell test for marginal homogeneity in multi-class classification and multi-task learning. In addition, qualitative activation maps were evaluated using gradient-weighted class activation mapping (Grad-CAM). RESULTS Marker-inpainting preprocessing improved classification performance. In binary classification on internal validation, the areas under the curve (AUCs) and accuracies were 0.950 and 0.900 for the model trained on the min-max clipped images and 0.911 and 0.850 for the model trained on the original images, respectively (P-value=0.006). On external validation, the AUCs and accuracies were 0.858 and 0.677 for the model trained on the inpainted images and 0.723 and 0.568 for the model trained on the original images, respectively (P-value<0.001).
In addition, the models trained on marker-inpainted images showed the best performance in multi-class classification and multi-task learning, and the Grad-CAM activation maps improved with the proposed method. The 5-fold validation results also showed an improving trend across the preprocessing strategies. CONCLUSIONS Inpainting the L/R marker significantly enhanced classifier performance and robustness on both internal and external validation, which could be useful for developing more robust and accurate deep learning-based classifiers for multi-center trials. The detection code is available at https://github.com/mi2rl/MI2RLNet, and the inpainting code at https://github.com/mi2rl/L-R-marker-inpainting.
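Of the three preprocessing variants above, the min-max clipping step is simple to illustrate: the original radiograph is clipped to the intensity range of its marker-inpainted counterpart so the bright marker no longer dominates normalization. A toy sketch (the detection and GAN-inpainting stages are out of scope here; the pixel values are made up):

```python
import numpy as np

def clip_to_inpainted_range(original, inpainted):
    """Clip the original radiograph to the intensity range of its
    marker-inpainted counterpart, so the bright L/R marker no longer
    dominates the dynamic range during normalization."""
    return np.clip(original, inpainted.min(), inpainted.max())

# Toy example: a bright marker pixel (4000) far above the anatomy range.
orig = np.array([100.0, 900.0, 4000.0, 500.0])
inpainted = np.array([100.0, 900.0, 850.0, 500.0])
res = clip_to_inpainted_range(orig, inpainted)
print(res)   # the marker pixel is clipped to 900; anatomy is untouched
```

This keeps the anatomy pixels bit-identical while removing the marker's influence on any subsequent min-max normalization.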
Affiliation(s)
- Ki Duk Kim
- Department of Convergence Medicine, University of Ulsan College of Medicine, Asan Medical Center, Seoul 05505, Republic of Korea
- Kyungjin Cho
- Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea
- Mingyu Kim
- Department of Convergence Medicine, University of Ulsan College of Medicine, Asan Medical Center, Seoul 05505, Republic of Korea
- Kyung Hwa Lee
- Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea
- Seungjun Lee
- Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea
- Sang Min Lee
- Department of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul 05505, Republic of Korea
- Kyung Hee Lee
- Department of Radiology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, Seongnam 13620, Republic of Korea
- Namkug Kim
- Department of Convergence Medicine, University of Ulsan College of Medicine, Asan Medical Center, Seoul 05505, Republic of Korea; Department of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul 05505, Republic of Korea
7. Tang Y, Gao R, Lee HH, Chen Y, Gao D, Bermudez C, Bao S, Huo Y, Savoie BV, Landman BA. Phase identification for dynamic CT enhancements with generative adversarial network. Med Phys 2021;48:1276-1285. [PMID: 33410167] [DOI: 10.1002/mp.14706] [Received: 05/11/2020] [Revised: 12/02/2020] [Accepted: 12/18/2020]
Abstract
PURPOSE Dynamic contrast-enhanced computed tomography (CT) is widely used to provide dynamic tissue contrast for diagnostic investigation and vascular identification. However, the phase of contrast injection is typically recorded manually by technicians, which introduces missing or mislabeled phase information. Imaging-based contrast phase identification is therefore appealing but challenging, owing to large variations among contrast protocols, vascular dynamics, and metabolism, especially for clinically acquired CT scans. The purpose of this study is to perform imaging-based phase identification for dynamic abdominal CT across five representative contrast phases using a proposed adversarial learning framework. METHODS A generative adversarial network (GAN) is proposed as a disentangled representation learning model. To explicitly model different contrast phases, a low-dimensional common representation and a class-specific code are fused in the hidden layer, and the low-dimensional features are reconstructed through a discriminator and classifier. A total of 36,350 CT slices from 400 subjects are used to evaluate the proposed method with fivefold cross-validation split by subject. Then, 2,216 slice images from 20 independent subjects are employed as independent testing data, evaluated using a multiclass normalized confusion matrix. RESULTS The proposed network significantly improved correspondence (0.93) over VGG, ResNet50, StarGAN, and 3DSE, which achieved accuracy scores of 0.59, 0.62, 0.72, and 0.90, respectively (P < 0.001, Stuart-Maxwell test on the normalized multiclass confusion matrix). CONCLUSION We show that adversarial learning for the discriminator can benefit the capture of contrast information across phases. The proposed discriminator from the disentangled network achieves promising results.
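The evaluation above uses a multiclass normalized confusion matrix. A minimal sketch of the row-normalized form (the labels below are toy values; the paper's exact summary statistic over the matrix is not specified here):

```python
import numpy as np

def normalized_confusion(y_true, y_pred, n_classes):
    """Row-normalized multiclass confusion matrix: entry (i, j) is the
    fraction of true-class-i samples predicted as class j, so each row
    sums to 1 (assuming every class appears in y_true)."""
    m = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        m[t, p] += 1
    return m / m.sum(axis=1, keepdims=True)

# Toy run over 5 contrast phases.
y_true = [0, 0, 1, 1, 2, 3, 4]
y_pred = [0, 0, 1, 0, 2, 3, 4]
cm = normalized_confusion(y_true, y_pred, n_classes=5)
print(cm.diagonal())   # per-phase agreement; its mean is one summary measure
```

Row normalization makes the per-phase recall directly readable on the diagonal, regardless of class imbalance in the test set.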
Affiliation(s)
- Yucheng Tang
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, 37235, USA
- Riqiang Gao
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, 37235, USA
- Ho Hin Lee
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, 37235, USA
- Dashan Gao
- 12 Sigma Technologies, San Diego, CA, 92130, USA
- Camilo Bermudez
- Department of Biomedical Engineering, Vanderbilt University, Nashville, TN, 37235, USA
- Shunxing Bao
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, 37235, USA
- Yuankai Huo
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, 37235, USA
- Brent V Savoie
- Vanderbilt University Medical Center, Nashville, TN, 37235, USA
- Bennett A Landman
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, 37235, USA; Department of Biomedical Engineering, Vanderbilt University, Nashville, TN, 37235, USA; Vanderbilt University Medical Center, Nashville, TN, 37235, USA
8. Tang Y, Gao R, Lee HH, Wells QS, Spann A, Terry JG, Carr JJ, Huo Y, Bao S, Landman BA. Prediction of Type II Diabetes Onset with Computed Tomography and Electronic Medical Records. Multimodal Learning for Clinical Decision Support and Clinical Image-Based Procedures: 10th International Workshop, ML-CDS 2020, and 9th International Workshop, CLIP 2020, Held in Conjunction with MICCAI 2020, Lima, Peru, October 4-8, ... 2020;12445:13-23. [PMID: 34113927] [PMCID: PMC8188902] [DOI: 10.1007/978-3-030-60946-7_2]
Abstract
Type II diabetes mellitus (T2DM) is a significant public health concern with multiple known risk factors (e.g., body mass index (BMI), body fat distribution, glucose levels). Improved prediction or prognosis would enable earlier intervention before possibly irreversible damage has occurred. Meanwhile, abdominal computed tomography (CT) is a relatively common imaging technique. Herein, we explore secondary use of CT imaging data to refine the risk profile for a future diagnosis of T2DM. In this work, we use quantitative information and imaging slices from patient history to predict T2DM onset, identified from ICD-9 codes, at least one year in the future. We also investigate the contribution of five types of electronic medical record (EMR) data: 1) demographics; 2) pancreas volume; 3) visceral/subcutaneous fat volumes in the L2 region of interest; 4) abdominal body fat distribution; and 5) glucose lab tests. Next, we build a deep neural network to predict T2DM onset from pancreas imaging slices. Finally, motivated by multi-modal machine learning, we construct a merged framework that combines CT imaging slices with EMR information to refine the prediction. We empirically demonstrate that the proposed joint analysis of images and EMR yields AUC increases of 4.25% and 6.93% in predicting T2DM compared with using images or EMR alone. This study used a case-control dataset of 997 subjects with CT scans and contextual EMR scores. To the best of our knowledge, this is the first work to show the ability to prognosticate T2DM using patients' contextual and imaging history. We believe this study has promising potential for heterogeneous data analysis and multi-modal medical applications.
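The merged framework combines image-derived features with EMR variables before the final prediction. A minimal sketch of the simplest such merge, feature-level concatenation (the feature dimensions are assumptions for illustration; the paper's actual fusion architecture may differ):

```python
import numpy as np

def late_fusion(image_feats, emr_feats):
    """Concatenate per-subject image-derived and EMR feature vectors so a
    single downstream classifier sees both modalities -- the simplest
    form of the multi-modal merge described."""
    return np.concatenate([image_feats, emr_feats], axis=1)

rng = np.random.default_rng(0)
img = rng.normal(size=(997, 64))   # e.g. pooled CNN features per subject (assumed dim)
emr = rng.normal(size=(997, 5))    # demographics, volumes, fat, glucose scores
fused = late_fusion(img, emr)
print(fused.shape)                  # (997, 69)
```

A classifier trained on `fused` can then weigh both modalities jointly, which is the mechanism behind the reported AUC gains over either modality alone.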
Affiliation(s)
- Ashley Spann
- Vanderbilt University Medical Center, Nashville, USA
- James G Terry
- Vanderbilt University Medical Center, Nashville, USA
- John J Carr
- Vanderbilt University Medical Center, Nashville, USA
- Bennett A Landman
- Vanderbilt University, Nashville, USA
- Vanderbilt University Medical Center, Nashville, USA