1. Kalra N, Verma P, Verma S. Advancements in AI based healthcare techniques with focus on diagnostic techniques. Comput Biol Med 2024; 179:108917. [PMID: 39059212] [DOI: 10.1016/j.compbiomed.2024.108917]
Abstract
Interest in more precise and efficient healthcare techniques, with special emphasis on diagnostics, has grown over the past decade, and artificial intelligence (AI) has proved instrumental in developing many of them. Various branches of AI, such as machine learning (ML), natural language processing (NLP), and robotic process automation (RPA), are being used to streamline and organise Electronic Health Records (EHRs) and to support healthcare providers in decision making and in sample and data analysis. This article covers three major categories of diagnostic techniques (imaging-based, pathology-based, and preventive) and the changes and modifications AI has brought to each. Reflecting this demand, investment in AI-based healthcare has increased substantially, with a predicted market size of almost 188 billion USD by 2030; in India alone, AI in healthcare is expected to add 25 billion USD to GDP by 2028. Several challenges remain, however, including the unavailability of quality data and the black-box problem. Chief among them are the ethical considerations surrounding the use of medical records, which are highly sensitive documents; these concerns have fostered distrust of AI adoption in many organizations. The article discusses these challenges, the need for further development of AI-based diagnostic techniques, and the considerable scope for devices that are easy to use and simple to incorporate into daily workflows. The growing role of clinical decision support systems, telemedicine, and related tools makes AI a promising field in the healthcare and diagnostics arena.
In conclusion, despite the challenges to implementation and usage, the future prospects for AI in healthcare are immense. Work is needed to ensure the availability of resources so that high accuracy can be achieved and better health outcomes delivered to patients, and ethical concerns must be addressed to enable smooth implementation and reduce the burden on developers, as discussed in this narrative review.
Affiliation(s)
- Nishita Kalra
- Department of Pharmaceutical Chemistry/Analysis, Delhi Pharmaceutical Sciences & Research University, Pushp Vihar, Sector 3, New Delhi, 110017, India
- Prachi Verma
- Department of Pharmaceutical Chemistry/Analysis, Delhi Pharmaceutical Sciences & Research University, Pushp Vihar, Sector 3, New Delhi, 110017, India
- Surajpal Verma
- Department of Pharmaceutical Chemistry/Analysis, Delhi Pharmaceutical Sciences & Research University, Pushp Vihar, Sector 3, New Delhi, 110017, India
2. Paverd H, Zormpas-Petridis K, Clayton H, Burge S, Crispin-Ortuzar M. Radiology and multi-scale data integration for precision oncology. NPJ Precis Oncol 2024; 8:158. [PMID: 39060351] [PMCID: PMC11282284] [DOI: 10.1038/s41698-024-00656-0]
Abstract
In this Perspective paper, we explore the potential of integrating radiological imaging with other data types, a critical yet underdeveloped area in comparison to the fusion of other multi-omic data. Radiological images provide a comprehensive, three-dimensional view of cancer, capturing features that would be missed by biopsies or other data modalities. We examine the complexities and challenges of incorporating medical imaging into data integration models in the context of precision oncology, present the different categories of imaging-omics integration, and discuss recent progress, highlighting the opportunities that arise from bringing together spatial data on different scales.
Affiliation(s)
- Hania Paverd
- Cambridge University Hospitals NHS Foundation Trust, Cambridge, UK
- Department of Oncology, University of Cambridge, Cambridge, UK
- Cancer Research UK Cambridge Centre, University of Cambridge, Cambridge, UK
- Hannah Clayton
- Department of Oncology, University of Cambridge, Cambridge, UK
- Cancer Research UK Cambridge Centre, University of Cambridge, Cambridge, UK
- Sarah Burge
- Cancer Research UK Cambridge Centre, University of Cambridge, Cambridge, UK
- Mireia Crispin-Ortuzar
- Department of Oncology, University of Cambridge, Cambridge, UK
- Cancer Research UK Cambridge Centre, University of Cambridge, Cambridge, UK
3. Deng C, Hu J, Tang P, Xu T, He L, Zeng Z, Sheng J. Application of CT and MRI images based on artificial intelligence to predict lymph node metastases in patients with oral squamous cell carcinoma: a subgroup meta-analysis. Front Oncol 2024; 14:1395159. [PMID: 38957322] [PMCID: PMC11217320] [DOI: 10.3389/fonc.2024.1395159]
Abstract
Background The performance of artificial intelligence (AI) in the prediction of lymph node (LN) metastasis in patients with oral squamous cell carcinoma (OSCC) has not been quantitatively evaluated. The purpose of this study was to conduct a systematic review and meta-analysis of published data on the diagnostic performance of CT and MRI based on AI algorithms for predicting LN metastases in patients with OSCC. Methods We searched the Embase, PubMed (Medline), Web of Science, and Cochrane databases for studies on the use of AI in predicting LN metastasis in OSCC. Binary diagnostic accuracy data were extracted to obtain the outcomes of interest, namely the area under the curve (AUC), sensitivity, and specificity, and the diagnostic performance of AI was compared with that of radiologists. Subgroup analyses were performed with regard to different types of AI algorithms and imaging modalities. Results Fourteen eligible studies were included in the meta-analysis. The AUC, sensitivity, and specificity of the AI models for the diagnosis of LN metastases were 0.92 (95% CI 0.89-0.94), 0.79 (95% CI 0.72-0.85), and 0.90 (95% CI 0.86-0.93), respectively. Promising diagnostic performance was observed in the subgroup analyses based on algorithm type [machine learning (ML) vs. deep learning (DL)] and imaging modality (CT vs. MRI). The pooled diagnostic performance of AI was significantly better than that of experienced radiologists. Discussion In conclusion, AI based on CT and MRI has good diagnostic accuracy in predicting LN metastasis in patients with OSCC and thus has potential for clinical application. Systematic Review Registration PROSPERO (No. CRD42024506159), https://www.crd.york.ac.uk/PROSPERO/#recordDetails.
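The pooled sensitivity and specificity quoted in this abstract come from combining per-study 2x2 counts. The abstract does not state the exact pooling model, so the following is only a minimal fixed-effect, inverse-variance sketch of logit-scale sensitivity pooling, with made-up study counts standing in for the fourteen included studies:

```python
import numpy as np

# Hypothetical per-study counts (true positives, false negatives) for
# sensitivity pooling. These numbers are illustrative, not from the paper.
tp = np.array([45, 30, 60, 25])
fn = np.array([10, 12, 8, 9])

sens = tp / (tp + fn)
# Logit transform stabilises proportions near 0 or 1.
logit = np.log(sens / (1 - sens))
# Approximate variance of a logit-transformed proportion.
var = 1 / tp + 1 / fn
w = 1 / var  # inverse-variance weights (fixed-effect model)

pooled_logit = np.sum(w * logit) / np.sum(w)
pooled_sens = 1 / (1 + np.exp(-pooled_logit))
```

Published diagnostic meta-analyses typically use richer bivariate or hierarchical models that pool sensitivity and specificity jointly; this sketch shows only the core weighting idea.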
Affiliation(s)
- Jianfeng Sheng
- Department of Thyroid, Head, Neck and Maxillofacial Surgery, the Third Hospital of Mianyang & Sichuan Mental Health Center, Mianyang, Sichuan, China
4. Chaudhry H, Lee H, Jagasia S, Tasci E, Camphausen K, Krauze AV. Advances in the field of developing biomarkers for re-irradiation: a how-to guide to small, powerful data sets and artificial intelligence. Expert Review of Precision Medicine and Drug Development 2024; 9:3-16. [PMID: 38550554] [PMCID: PMC10972602] [DOI: 10.1080/23808993.2024.2325936]
Abstract
Introduction Patient selection remains challenging as the clinical use of re-irradiation (re-RT) increases. Re-RT data are limited to retrospective studies and small prospective single-institution reports, resulting in small, heterogeneous data sets, whereas validated prognostic and predictive biomarkers are derived from large-volume studies with long-term follow-up. This review examines existing re-RT publications and available data sets and discusses strategies for approaching small data sets with artificial intelligence (AI) to optimize the use of re-RT data. Methods Re-RT publications with associated public data were identified, and the existing literature on identifying biomarkers from small data sets was also explored. Results Among publications with associated public data, glioma and nasopharyngeal cancers emerged as the most common tumor sites where re-RT was the primary management approach. Existing and emerging AI strategies for small data sets include data generation, augmentation, discovery, and transfer learning. Conclusions Further data are needed to generate adaptive frameworks, improve the collection of specimens for molecular analysis, and improve the interpretability of results in re-RT data.
Affiliation(s)
- Huma Chaudhry
- Radiation Oncology Branch, Center for Cancer Research, National Cancer Institute, National Institutes of Health, Building 10, Bethesda, MD, 20892, United States
- Hawon Lee
- Radiation Oncology Branch, Center for Cancer Research, National Cancer Institute, National Institutes of Health, Building 10, Bethesda, MD, 20892, United States
- Sarisha Jagasia
- Radiation Oncology Branch, Center for Cancer Research, National Cancer Institute, National Institutes of Health, Building 10, Bethesda, MD, 20892, United States
- Erdal Tasci
- Radiation Oncology Branch, Center for Cancer Research, National Cancer Institute, National Institutes of Health, Building 10, Bethesda, MD, 20892, United States
- Kevin Camphausen
- Radiation Oncology Branch, Center for Cancer Research, National Cancer Institute, National Institutes of Health, Building 10, Bethesda, MD, 20892, United States
- Andra Valentina Krauze
- Radiation Oncology Branch, Center for Cancer Research, National Cancer Institute, National Institutes of Health, Building 10, Bethesda, MD, 20892, United States
5. Jeong S, Yu H, Park SH, Woo D, Lee SJ, Chong GO, Han HS, Kim JC. Comparing deep learning and handcrafted radiomics to predict chemoradiotherapy response for locally advanced cervical cancer using pretreatment MRI. Sci Rep 2024; 14:1180. [PMID: 38216687] [PMCID: PMC10786874] [DOI: 10.1038/s41598-024-51742-z]
Abstract
Concurrent chemoradiotherapy (CRT) is the standard treatment for locally advanced cervical cancer (LACC), but its responsiveness varies among patients, so a reliable tool for predicting CRT response is necessary for personalized cancer treatment. In this study, we constructed prediction models using handcrafted radiomics (HCR) and deep learning radiomics (DLR) based on pretreatment MRI data to predict CRT response in LACC, and we investigated the potential improvement in prediction performance from incorporating clinical factors. A total of 252 LACC patients undergoing curative chemoradiotherapy were included and randomly divided into two independent groups for the training (167 patients) and test (85 patients) datasets. Contrast-enhanced T1- and T2-weighted MR scans were obtained. For the HCR analysis, 1890 imaging features were extracted, and a support vector machine classifier with five-fold cross-validation was trained on the training dataset to predict CRT response and subsequently validated on the test dataset. For the DLR analysis, a 3-dimensional convolutional neural network was trained on the training dataset and validated on the test dataset. In conclusion, both HCR and DLR models could predict CRT responses in patients with LACC, and the integration of clinical factors into the radiomics prediction models tended to improve performance in the HCR analysis. Our findings may contribute to the development of personalized treatment strategies for LACC patients.
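The HCR arm described above (a support vector machine with five-fold cross-validation on 1890 features, with a 167/85 train/test split) can be sketched with scikit-learn. The feature matrix below is random stand-in data, not actual MRI radiomics, and the kernel choice is an assumption:

```python
import numpy as np
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Stand-in for the handcrafted radiomics matrix: 252 patients x 1890 features
# (random values, not real imaging features); y = CRT response label.
X = rng.normal(size=(252, 1890))
y = rng.integers(0, 2, size=252)

# Split mirrors the study: 167 training patients, 85 test patients.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=85, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
# Five-fold cross-validation on the training split, as in the HCR analysis,
# then a final fit and evaluation on the held-out test set.
cv_acc = cross_val_score(clf, X_tr, y_tr, cv=5).mean()
test_acc = clf.fit(X_tr, y_tr).score(X_te, y_te)
```

With random labels the accuracies hover near chance; the point is the pipeline shape (scaling inside the cross-validation loop), not the numbers.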
Affiliation(s)
- Sungmoon Jeong
- Department of Medical Informatics, School of Medicine, Kyungpook National University, Daegu, Republic of Korea
- Research Center for Artificial Intelligence in Medicine, Kyungpook National University Hospital, Daegu, Republic of Korea
- Hosang Yu
- Research Center for Artificial Intelligence in Medicine, Kyungpook National University Hospital, Daegu, Republic of Korea
- Shin-Hyung Park
- Department of Radiation Oncology, School of Medicine, Kyungpook National University, Daegu, Republic of Korea
- Department of Radiation Oncology, Kyungpook National University Hospital, Daegu, Republic of Korea
- Cardiovascular Research Institute, School of Medicine, Kyungpook National University, Daegu, Republic of Korea
- Dongwon Woo
- Research Center for Artificial Intelligence in Medicine, Kyungpook National University Hospital, Daegu, Republic of Korea
- Seoung-Jun Lee
- Department of Radiation Oncology, Kyungpook National University Hospital, Daegu, Republic of Korea
- Gun Oh Chong
- Department of Gynecology, School of Medicine, Kyungpook National University, Daegu, Republic of Korea
- Clinical Omics Research Center, School of Medicine, Kyungpook National University, Daegu, Republic of Korea
- Hyung Soo Han
- Clinical Omics Research Center, School of Medicine, Kyungpook National University, Daegu, Republic of Korea
- Department of Physiology, School of Medicine, Kyungpook National University, Daegu, Republic of Korea
- Jae-Chul Kim
- Department of Radiation Oncology, School of Medicine, Kyungpook National University, Daegu, Republic of Korea
- Department of Radiation Oncology, Kyungpook National University Hospital, Daegu, Republic of Korea
6. Choi JH, Lee J, Lee SH, Lee S, Moon AS, Cho SH, Kim JS, Cho IR, Paik WH, Ryu JK, Kim YT. Analysis of ultrasonographic images using a deep learning-based model as ancillary diagnostic tool for diagnosing gallbladder polyps. Dig Liver Dis 2023; 55:1705-1711. [PMID: 37407319] [DOI: 10.1016/j.dld.2023.06.023]
Abstract
BACKGROUND Accurately diagnosing gallbladder polyps (GBPs) is important to avoid misdiagnosis and overtreatment. AIMS To evaluate the efficacy of a deep learning model and the accuracy of computer-aided diagnosis by physicians for diagnosing GBPs. METHODS This retrospective cohort study was conducted from January 2006 to September 2021, and 3,754 images from 263 patients were analyzed. The outcomes were the efficacy of the developed deep learning model in discriminating neoplastic GBPs (NGBPs) from non-NGBPs and the accuracy of a computer-aided diagnosis compared with that made by physicians. RESULTS The efficacy of discriminating NGBPs from non-NGBPs using deep learning was 0.944 (accuracy, 0.858; sensitivity, 0.856; specificity, 0.861). The accuracy of an unassisted diagnosis of GBP was 0.634, and that of a computer-aided diagnosis was 0.785 (p < 0.001). There were no significant differences in the accuracy of a computer-aided diagnosis between experienced (0.835) and inexperienced (0.772) physicians (p = 0.251). A computer-aided diagnosis significantly assisted inexperienced physicians (0.772 vs. 0.614; p < 0.001) but not experienced physicians. CONCLUSIONS Deep learning-based models discriminate NGBPs from non-NGBPs with excellent accuracy. As ancillary diagnostic tools, they may assist inexperienced physicians in improving their diagnostic accuracy.
Affiliation(s)
- Jin Ho Choi
- Division of Gastroenterology, Department of Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea
- Jaesung Lee
- Department of Artificial Intelligence, Chung-Ang University, 221, Heukseok-Dong, Dongjak-Gu, Seoul, Korea
- Sang Hyub Lee
- Department of Internal Medicine and Liver Research Institute, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Korea
- Sanghyuk Lee
- Department of Artificial Intelligence, Chung-Ang University, 221, Heukseok-Dong, Dongjak-Gu, Seoul, Korea
- A-Seong Moon
- Department of Artificial Intelligence, Chung-Ang University, 221, Heukseok-Dong, Dongjak-Gu, Seoul, Korea
- Sung-Hyun Cho
- Department of Artificial Intelligence, Chung-Ang University, 221, Heukseok-Dong, Dongjak-Gu, Seoul, Korea
- Joo Seong Kim
- Department of Internal Medicine, Dongguk University College of Medicine, Dongguk University Ilsan Hospital, Goyang, Korea
- In Rae Cho
- Department of Internal Medicine and Liver Research Institute, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Korea
- Woo Hyun Paik
- Department of Internal Medicine and Liver Research Institute, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Korea
- Ji Kon Ryu
- Department of Internal Medicine and Liver Research Institute, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Korea
- Yong-Tae Kim
- Department of Internal Medicine and Liver Research Institute, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Korea
7. Theis M, Block W, Luetkens JA, Attenberger UI, Nowak S, Sprinkart AM. Direct deep learning-based survival prediction from pre-interventional CT prior to transcatheter aortic valve replacement. Eur J Radiol 2023; 168:111150. [PMID: 37844428] [DOI: 10.1016/j.ejrad.2023.111150]
Abstract
PURPOSE To investigate survival prediction in patients undergoing transcatheter aortic valve replacement (TAVR) using deep learning (DL) methods applied directly to pre-interventional CT images, and to compare performance with survival models based on scalar markers of body composition. METHOD This retrospective single-center study included 760 patients undergoing TAVR (mean age 81 ± 6 years; 389 female). As a baseline, a Cox proportional hazards model (CPHM) was trained to predict survival from sex, age, and the CT body composition markers fatty muscle fraction (FMF), skeletal muscle radiodensity (SMRD), and skeletal muscle area (SMA), derived from paraspinal muscle segmentation of a single slice at the L3/L4 level. The convolutional neural network (CNN) encoder of the DL model for survival prediction was pre-trained in an autoencoder setting with and without a focus on paraspinal muscles. Finally, a combination of DL and CPHM was evaluated. Performance was assessed by the C-index and the area under the receiver operating characteristic curve (AUC) for 1-year and 2-year survival. All methods were trained with five-fold cross-validation and evaluated on 152 hold-out test cases. RESULTS The CNN for direct image-based survival prediction, pre-trained in a focused autoencoder scenario, outperformed the baseline CPHM (CPHM: C-index = 0.608, 1Y-AUC = 0.606, 2Y-AUC = 0.594 vs. DL: C-index = 0.645, 1Y-AUC = 0.687, 2Y-AUC = 0.692). Combining DL and CPHM led to further improvement (C-index = 0.668, 1Y-AUC = 0.713, 2Y-AUC = 0.696). CONCLUSIONS Direct DL-based survival prediction shows potential to improve image feature extraction compared to segmentation-based scalar markers of body composition for risk assessment in TAVR patients.
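The C-index used to compare models above is Harrell's concordance index: the fraction of comparable patient pairs in which the higher predicted risk goes with the earlier event. A minimal NumPy sketch follows; the toy survival times and risk scores are invented for illustration, and this O(n²) loop is only suitable for small cohorts:

```python
import numpy as np

def c_index(time, event, risk):
    """Harrell's concordance index for right-censored survival data.
    A pair (i, j) is comparable if the subject with the shorter follow-up
    time had the event; it is concordant if that subject also has the
    higher predicted risk. Ties in risk count as 0.5."""
    n = len(time)
    concordant, comparable = 0.0, 0
    for i in range(n):
        for j in range(n):
            if time[i] < time[j] and event[i] == 1:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable

# Toy example: higher risk scores align perfectly with earlier events.
time = np.array([2.0, 4.0, 6.0, 8.0])
event = np.array([1, 1, 0, 1])   # 0 = censored
risk = np.array([0.9, 0.7, 0.4, 0.1])
print(c_index(time, event, risk))  # 1.0: perfectly concordant
```

A C-index of 0.5 corresponds to random ranking, which puts the reported 0.608 to 0.668 range in context.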
Affiliation(s)
- Maike Theis
- Department of Diagnostic and Interventional Radiology, Quantitative Imaging Lab Bonn (QILaB), University Hospital Bonn, Venusberg-Campus 1, 53127 Bonn, Germany
- Wolfgang Block
- Department of Diagnostic and Interventional Radiology, Quantitative Imaging Lab Bonn (QILaB), University Hospital Bonn, Venusberg-Campus 1, 53127 Bonn, Germany; Department of Radiotherapy and Radiation Oncology, University Hospital Bonn, Venusberg-Campus 1, 53127 Bonn, Germany; Department of Neuroradiology, University Hospital Bonn, Venusberg-Campus 1, 53127 Bonn, Germany
- Julian A Luetkens
- Department of Diagnostic and Interventional Radiology, Quantitative Imaging Lab Bonn (QILaB), University Hospital Bonn, Venusberg-Campus 1, 53127 Bonn, Germany
- Ulrike I Attenberger
- Department of Diagnostic and Interventional Radiology, Quantitative Imaging Lab Bonn (QILaB), University Hospital Bonn, Venusberg-Campus 1, 53127 Bonn, Germany
- Sebastian Nowak
- Department of Diagnostic and Interventional Radiology, Quantitative Imaging Lab Bonn (QILaB), University Hospital Bonn, Venusberg-Campus 1, 53127 Bonn, Germany
- Alois M Sprinkart
- Department of Diagnostic and Interventional Radiology, Quantitative Imaging Lab Bonn (QILaB), University Hospital Bonn, Venusberg-Campus 1, 53127 Bonn, Germany
8. Le VH, Minh TNT, Kha QH, Le NQK. A transfer learning approach on MRI-based radiomics signature for overall survival prediction of low-grade and high-grade gliomas. Med Biol Eng Comput 2023; 61:2699-2712. [PMID: 37432527] [DOI: 10.1007/s11517-023-02875-2]
Abstract
Lower-grade gliomas (LGG) can eventually progress to glioblastoma (GBM) and death. Using a transfer learning approach, we aimed to train and test an MRI-based radiomics model for predicting survival in GBM patients and to validate it in LGG patients. From the 704 MRI-based radiomics features extracted for each patient, we selected seventeen optimal radiomics signatures in the GBM training set (n = 71) and used these features in both the GBM testing set (n = 31) and the LGG validation set (n = 107) for further analysis. Each patient's risk score, calculated from those optimal radiomics signatures, was chosen to represent the radiomics model. We compared the radiomics model with clinical and gene-status models, and with a combined model integrating radiomics, clinical, and gene-status data, in predicting survival. The average iAUCs of the combined model in the training, testing, and validation sets were 0.804, 0.878, and 0.802, respectively, and those of the radiomics model were 0.798, 0.867, and 0.717. The average iAUCs of the gene-status and clinical models ranged from 0.522 to 0.735 across all three sets. The radiomics model trained in GBM patients can effectively predict the overall survival of GBM and LGG patients, and the combined model improved this ability.
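The per-patient risk score built from the seventeen selected signatures is, in the usual radiomics formulation, a weighted sum of feature values (a linear predictor), often dichotomized at the training-set median. A hedged sketch with simulated features and coefficients follows; the paper's actual weights and cutoff are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in data: 71 training patients x 17 selected radiomics signatures
# (random values; the paper's real features and coefficients differ).
X_train = rng.normal(size=(71, 17))
coefs = rng.normal(size=17)  # e.g. per-signature Cox model coefficients

# Risk score = linear predictor over the selected signatures.
risk = X_train @ coefs

# A median split on training risk scores defines high-/low-risk groups;
# the same cutoff would then be applied to the test and validation sets.
cutoff = np.median(risk)
high_risk = risk > cutoff
```

Transfer in this setting means the coefficients and cutoff are frozen after training on GBM and applied unchanged to LGG patients.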
Affiliation(s)
- Viet Huan Le
- International Ph.D. Program in Medicine, College of Medicine, Taipei Medical University, Taipei, 110, Taiwan
- Department of Thoracic Surgery, Khanh Hoa General Hospital, Nha Trang City, 65000, Vietnam
- Tran Nguyen Tuan Minh
- International Ph.D. Program in Medicine, College of Medicine, Taipei Medical University, Taipei, 110, Taiwan
- Quang Hien Kha
- International Ph.D. Program in Medicine, College of Medicine, Taipei Medical University, Taipei, 110, Taiwan
- Nguyen Quoc Khanh Le
- Professional Master Program in Artificial Intelligence in Medicine, College of Medicine, Taipei Medical University, Taipei, 110, Taiwan
- Research Center for Artificial Intelligence in Medicine, Taipei Medical University, Taipei, 110, Taiwan
- AIBioMed Research Group, Taipei Medical University, Taipei, 110, Taiwan
- Translational Imaging Research Center, Taipei Medical University Hospital, Taipei, 110, Taiwan
9. Nitha VR, Vinod Chandra SS. ExtRanFS: An Automated Lung Cancer Malignancy Detection System Using Extremely Randomized Feature Selector. Diagnostics (Basel) 2023; 13:2206. [PMID: 37443600] [DOI: 10.3390/diagnostics13132206]
Abstract
Lung cancer is an abnormality where the body's cells multiply uncontrollably, and the disease can be deadly if not detected at an early stage. To address this issue, an automated lung cancer malignancy detection framework (ExtRanFS) is developed using transfer learning. We used the IQ-OTH/NCCD dataset, gathered in Iraq in 2019, encompassing CT scans of patients suffering from various lung cancers as well as healthy subjects. The annotated dataset consists of CT slices from 110 patients, of whom 40 were diagnosed with malignant tumors, 15 with benign tumors, and 55 were determined to be in good health. All CT images are in DICOM format with a 1 mm slice thickness, consisting of 80 to 200 slices at various sides and angles. The proposed system utilized a convolution-based pre-trained VGG16 model as the feature extractor and an Extremely Randomized Tree classifier as the feature selector. The selected features are fed to a Multi-Layer Perceptron (MLP) classifier to determine whether the scan indicates benign, malignant, or normal tissue. The accuracy, sensitivity, and F1-score of the proposed framework are 99.09%, 98.33%, and 98.33%, respectively. To evaluate the proposed model, a comparison is performed with other pre-trained models as feature extractors and with existing state-of-the-art methodologies as classifiers. From the experimental results, it is evident that the proposed framework outperformed the existing methodologies. This work would benefit both practitioners and patients in identifying whether a tumor is benign, malignant, or normal.
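The ExtRanFS pipeline (pre-trained feature extractor, Extremely Randomized Trees as feature selector, MLP classifier) can be sketched with scikit-learn. Here random vectors stand in for the VGG16 activations, so only the selector-plus-classifier stages are faithful to the description; the dimensionalities and hyperparameters are assumptions:

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Stand-in for VGG16 feature vectors: the real pipeline would pass CT slices
# through a pre-trained VGG16 and use its activations. Here: random 512-dim
# vectors with three classes (benign / malignant / normal).
X = rng.normal(size=(300, 512))
y = rng.integers(0, 3, size=300)

# Extremely Randomized Trees rank the features; SelectFromModel keeps those
# above mean importance, and the reduced set is fed to an MLP classifier.
pipe = make_pipeline(
    SelectFromModel(ExtraTreesClassifier(n_estimators=100, random_state=0)),
    MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0),
)
pipe.fit(X, y)
acc = pipe.score(X, y)
```

The tree-based selector is what gives the framework its name: feature importance from extremely randomized splits decides which extractor outputs reach the classifier.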
Affiliation(s)
- Nitha V R
- Department of Computer Science, University of Kerala, Thiruvananthapuram 695581, India
- Vinod Chandra S S
- Department of Computer Science, University of Kerala, Thiruvananthapuram 695581, India
10. Fogarty R, Goldgof D, Hall L, Lopez A, Johnson J, Gadara M, Stoyanova R, Punnen S, Pollack A, Pow-Sang J, Balagurunathan Y. Classifying Malignancy in Prostate Glandular Structures from Biopsy Scans with Deep Learning. Cancers (Basel) 2023; 15:2335. [PMID: 37190264] [DOI: 10.3390/cancers15082335]
Abstract
Histopathological classification in prostate cancer remains a challenge, with high dependence on the expert practitioner. We develop a deep learning (DL) model to identify the most prominent Gleason pattern in a highly curated data cohort and validate it on an independent dataset. The histology images are partitioned into 14,509 tiles, which are curated by an expert to identify individual glandular structures with assigned primary Gleason pattern grades. We use transfer learning and fine-tuning approaches to compare several deep neural network architectures that are trained on a corpus of camera images (ImageNet) and tuned with histology examples to be context-appropriate for histopathological discrimination with small samples. In our study, the best DL network is able to discriminate cancer grade (GS3/4) from benign with an accuracy of 91%, an F1-score of 0.91, and an AUC of 0.96 in a baseline test (52 patients), while discrimination of GS3 from GS4 had an accuracy of 68% and an AUC of 0.71 (40 patients).
Affiliation(s)
- Ryan Fogarty
- Department of Machine Learning, H. Lee Moffitt Cancer Center, Tampa, FL 33612, USA
- Department of Computer Science and Engineering, University of South Florida, Tampa, FL 33620, USA
- Dmitry Goldgof
- Department of Computer Science and Engineering, University of South Florida, Tampa, FL 33620, USA
- Lawrence Hall
- Department of Computer Science and Engineering, University of South Florida, Tampa, FL 33620, USA
- Alex Lopez
- Tissue Core Facility, H. Lee Moffitt Cancer Center, Tampa, FL 33612, USA
- Joseph Johnson
- Analytic Microscopy Core Facility, H. Lee Moffitt Cancer Center, Tampa, FL 33612, USA
- Manoj Gadara
- Anatomic Pathology Division, H. Lee Moffitt Cancer Center, Tampa, FL 33612, USA
- Quest Diagnostics, Tampa, FL 33612, USA
- Radka Stoyanova
- Department of Radiation Oncology, University of Miami Miller School of Medicine, Miami, FL 33136, USA
- Sanoj Punnen
- Desai Sethi Urology Institute, University of Miami Miller School of Medicine, Miami, FL 33136, USA
- Alan Pollack
- Department of Radiation Oncology, University of Miami Miller School of Medicine, Miami, FL 33136, USA
- Julio Pow-Sang
- Genitourinary Cancers, H. Lee Moffitt Cancer Center, Tampa, FL 33612, USA
11. Mitrea DA, Brehar R, Nedevschi S, Lupsor-Platon M, Socaciu M, Badea R. Hepatocellular Carcinoma Recognition from Ultrasound Images Using Combinations of Conventional and Deep Learning Techniques. Sensors (Basel) 2023; 23:2520. [PMID: 36904722] [PMCID: PMC10006909] [DOI: 10.3390/s23052520]
Abstract
Hepatocellular carcinoma (HCC) is the most frequent malignant liver tumor and the third leading cause of cancer-related deaths worldwide. For many years, the gold standard for HCC diagnosis has been the needle biopsy, which is invasive and carries risks. Computerized methods aim to achieve a noninvasive, accurate HCC detection process based on medical images. We developed image analysis and recognition methods to perform automatic and computer-aided diagnosis of HCC. Our research has involved both conventional approaches, combining advanced texture analysis, mainly based on Generalized Co-occurrence Matrices (GCM), with traditional classifiers, and deep learning approaches based on Convolutional Neural Networks (CNN) and Stacked Denoising Autoencoders (SAE). The best accuracy our group had previously achieved for B-mode ultrasound images was 91%, using a CNN. In this work, we combined the classical approaches with CNN techniques on B-mode ultrasound images, performing the combination at the classifier level: the CNN features obtained at the outputs of various convolution layers were combined with powerful textural features, and supervised classifiers were then employed. The experiments were conducted on two datasets acquired with different ultrasound machines. The best performance, above 98%, surpassed both our previous results and representative state-of-the-art results.
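The conventional arm of this pipeline rests on co-occurrence texture statistics. Below is a minimal single-offset gray-level co-occurrence matrix and one Haralick-style feature, computed on a toy 2x2 image; the paper's generalized co-occurrence matrices are considerably richer than this sketch:

```python
import numpy as np

def glcm(img, levels, offset=(0, 1)):
    """Gray-level co-occurrence matrix for one pixel offset (default: right
    neighbour). Entry (a, b) counts pixel pairs where the first pixel has
    gray level a and its offset neighbour has gray level b."""
    m = np.zeros((levels, levels), dtype=int)
    dr, dc = offset
    rows, cols = img.shape
    for r in range(rows - dr):
        for c in range(cols - dc):
            m[img[r, c], img[r + dr, c + dc]] += 1
    return m

img = np.array([[0, 1],
                [1, 1]])
m = glcm(img, levels=2)
# Horizontal pairs (left, right): (0,1) and (1,1) -> one count each.

# A Haralick-style contrast feature from the normalised matrix:
p = m / m.sum()
i, j = np.indices(p.shape)
contrast = np.sum(p * (i - j) ** 2)
```

In the paper, textural features of this kind are concatenated with CNN-layer activations to form a single feature vector before the supervised classifier is applied.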
Affiliation(s)
- Delia-Alexandrina Mitrea: Department of Computer Science, Faculty of Automation and Computer Science, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
- Raluca Brehar: Department of Computer Science, Faculty of Automation and Computer Science, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
- Sergiu Nedevschi: Department of Computer Science, Faculty of Automation and Computer Science, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
- Monica Lupsor-Platon: Department of Medical Imaging, “Iuliu Hatieganu” University of Medicine and Pharmacy, 400347 Cluj-Napoca, Romania; “Prof. Dr. O. Fodor” Regional Institute of Gastroenterology and Hepatology, 400162 Cluj-Napoca, Romania
- Mihai Socaciu: Department of Medical Imaging, “Iuliu Hatieganu” University of Medicine and Pharmacy, 400347 Cluj-Napoca, Romania; “Prof. Dr. O. Fodor” Regional Institute of Gastroenterology and Hepatology, 400162 Cluj-Napoca, Romania
- Radu Badea: Department of Medical Imaging, “Iuliu Hatieganu” University of Medicine and Pharmacy, 400347 Cluj-Napoca, Romania; “Prof. Dr. O. Fodor” Regional Institute of Gastroenterology and Hepatology, 400162 Cluj-Napoca, Romania
12.
Cheng M, Lin R, Bai N, Zhang Y, Wang H, Guo M, Duan X, Zheng J, Qiu Z, Zhao Y. Deep learning for predicting the risk of immune checkpoint inhibitor-related pneumonitis in lung cancer. Clin Radiol 2023; 78:e377-e385. [PMID: 36914457] [DOI: 10.1016/j.crad.2022.12.013]
Abstract
AIM To develop and validate a nomogram model combining computed tomography (CT)-based radiological factors extracted by deep learning with clinical factors for early prediction of immune checkpoint inhibitor-related pneumonitis (ICI-P). MATERIALS AND METHODS Forty patients with ICI-P and 101 patients without ICI-P were divided randomly into training (n=113) and test (n=28) sets. A convolutional neural network (CNN) was used to extract CT-based radiological features predictive of ICI-P and to calculate a CT score for each patient. A nomogram model to predict the risk of ICI-P was developed by logistic regression. RESULTS The CT score was calculated from five radiological features extracted by a residual neural network (ResNet-50-V2) with feature pyramid networks. The four predictors of ICI-P in the nomogram model were a clinical feature (pre-existing lung disease), two serum markers (absolute lymphocyte count and lactate dehydrogenase), and the CT score. The area under the curve of the nomogram model in the training (0.910 versus 0.871 versus 0.778) and test (0.900 versus 0.856 versus 0.869) sets was better than that of the radiological and clinical models. The nomogram model showed good consistency and better clinical practicability. CONCLUSION The nomogram model combining CT-based radiological and clinical factors can be used as a new non-invasive tool for early prediction of ICI-P in lung cancer patients after immunotherapy, with low cost and low manual input.
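A nomogram of this kind is, functionally, a fitted logistic-regression model over the four predictors. The sketch below illustrates the functional form only; the coefficient values and the `icip_risk` helper are hypothetical stand-ins, not the paper's fitted model:

```python
import math

# Hypothetical coefficients for illustration only; the predictors follow the
# abstract: pre-existing lung disease (0/1), absolute lymphocyte count (ALC),
# lactate dehydrogenase (LDH), and the CNN-derived CT score.
COEF = {"intercept": -4.0, "lung_disease": 1.2, "alc": -0.8, "ldh": 0.01, "ct_score": 0.05}

def icip_risk(lung_disease, alc, ldh, ct_score):
    """Logistic-regression style predicted probability of ICI-P in (0, 1)."""
    z = (COEF["intercept"]
         + COEF["lung_disease"] * lung_disease
         + COEF["alc"] * alc
         + COEF["ldh"] * ldh
         + COEF["ct_score"] * ct_score)
    return 1.0 / (1.0 + math.exp(-z))

low = icip_risk(0, 2.0, 180, 10)    # no lung disease, normal labs, low CT score
high = icip_risk(1, 0.5, 400, 60)   # lung disease, lymphopenia, high LDH and CT score
print(f"low-risk example: {low:.3f}, high-risk example: {high:.3f}")
```

A paper nomogram simply renders each coefficient-times-value term as a points axis so the sum can be read off graphically.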
Affiliation(s)
- M Cheng: Department of Internal Medical Oncology, Harbin Medical University Cancer Hospital, Harbin Medical University, Harbin, Heilongjiang Province, China
- R Lin: College of Information and Computer Engineering, Northeast Forestry University, Harbin, Heilongjiang Province, China
- N Bai: College of Information and Computer Engineering, Northeast Forestry University, Harbin, Heilongjiang Province, China
- Y Zhang: Department of Internal Medical Oncology, Harbin Medical University Cancer Hospital, Harbin Medical University, Harbin, Heilongjiang Province, China
- H Wang: Department of Internal Medical Oncology, Harbin Medical University Cancer Hospital, Harbin Medical University, Harbin, Heilongjiang Province, China
- M Guo: Department of Internal Medical Oncology, Harbin Medical University Cancer Hospital, Harbin Medical University, Harbin, Heilongjiang Province, China
- X Duan: Department of Internal Medical Oncology, Harbin Medical University Cancer Hospital, Harbin Medical University, Harbin, Heilongjiang Province, China
- J Zheng: Department of Radiology, Harbin Medical University Cancer Hospital, Harbin Medical University, Harbin, Heilongjiang Province, China
- Z Qiu: College of Information and Computer Engineering, Northeast Forestry University, Harbin, Heilongjiang Province, China
- Y Zhao: Department of Internal Medical Oncology, Harbin Medical University Cancer Hospital, Harbin Medical University, Harbin, Heilongjiang Province, China
13.
Segmentation-Assisted Fully Convolutional Neural Network Enhances Deep Learning Performance to Identify Proliferative Diabetic Retinopathy. J Clin Med 2023; 12:jcm12010385. [PMID: 36615186] [PMCID: PMC9821182] [DOI: 10.3390/jcm12010385]
Abstract
With the progression of diabetic retinopathy (DR) from the non-proliferative (NPDR) to the proliferative (PDR) stage, the likelihood of vision impairment increases significantly, so detecting progression to the PDR stage is clinically important for proper intervention. We propose a segmentation-assisted DR classification methodology that builds on and improves current methods by using a fully convolutional network (FCN) to segment retinal neovascularizations (NV) in retinal images prior to image classification. This study uses the Kaggle EyePACS dataset, which contains retinal photographs from patients with varying degrees of DR (mild, moderate, or severe NPDR, and PDR). Two graders (a board-certified ophthalmologist and a trained medical student) annotated the NV. Segmentation was performed by training an FCN to locate neovascularization on 669 retinal fundus photographs labeled with PDR status according to NV presence. The trained segmentation model was then used to locate probable NV in images from the classification dataset. Finally, a CNN was trained to classify the combined images and probability maps into PDR categories. The mean accuracy of segmentation-assisted classification was 87.71% on the test set (SD = 7.71%), 7.74% better than classification alone. Our study shows that segmentation assistance improves identification of the most severe stage of diabetic retinopathy and has the potential to improve deep learning performance in other imaging problems with limited data availability.
14.
Feng D, Chen X, Wang X, Mou X, Bai L, Zhang S, Zhou Z. Predicting effectiveness of anti-VEGF injection through self-supervised learning in OCT images. Math Biosci Eng 2023; 20:2439-2458. [PMID: 36899541] [DOI: 10.3934/mbe.2023114]
Abstract
Anti-vascular endothelial growth factor (anti-VEGF) therapy has become a standard treatment for choroidal neovascularization (CNV) and cystoid macular edema (CME). However, anti-VEGF injection is a long-term, expensive therapy and may not be effective for some patients, so predicting its effectiveness before therapy is necessary. In this study, a new optical coherence tomography (OCT) image-based self-supervised learning (OCT-SSL) model for predicting the effectiveness of anti-VEGF injection is developed. In OCT-SSL, we pre-train a deep encoder-decoder network through self-supervised learning on a public OCT image dataset to learn general features. Model fine-tuning is then performed on our own OCT dataset to learn discriminative features for predicting the effectiveness of anti-VEGF. Finally, a classifier trained on features from the fine-tuned encoder, used as a feature extractor, predicts the response. Experimental results on our private OCT dataset demonstrate that the proposed OCT-SSL achieves an average accuracy, area under the curve (AUC), sensitivity and specificity of 0.93, 0.98, 0.94 and 0.91, respectively. We also found that not only the lesion region but also the normal region of the OCT image is related to the effectiveness of anti-VEGF.
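The two-stage recipe above (generic pretraining on unlabeled images, then supervised fine-tuning on a small labeled set) can be caricatured with a linear stand-in: here a PCA projection plays the role of the pretrained encoder, and a nearest-centroid rule plays the fine-tuned classifier. This is a loose analogy on synthetic data, not the paper's OCT-SSL model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stage 1 (pretraining analogue): learn a generic encoder from unlabeled data.
# A PCA projection stands in for the self-supervised encoder-decoder that the
# paper trains on a public OCT dataset.
unlabeled = rng.normal(size=(200, 20))
unlabeled -= unlabeled.mean(axis=0)
_, _, vt = np.linalg.svd(unlabeled, full_matrices=False)
encoder = vt[:5].T  # project 20-D inputs to a 5-D code

# Stage 2 (fine-tuning analogue): fit a simple classifier on encoded, labeled
# data to separate responders from non-responders (both synthetic here).
n = 30
responders = rng.normal(1.0, 1.0, (n, 20))
nonresponders = rng.normal(-1.0, 1.0, (n, 20))
X = np.vstack([responders, nonresponders]) @ encoder
y = np.array([1] * n + [0] * n)

centroids = {c: X[y == c].mean(axis=0) for c in (0, 1)}

def predict(code):
    return min(centroids, key=lambda c: np.linalg.norm(centroids[c] - code))

acc = np.mean([predict(x) == t for x, t in zip(X, y)])
```

The point of the pattern is that the encoder is learned without response labels and only the small final stage needs them.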
Affiliation(s)
- Dehua Feng: School of Information and Communications Engineering, Xi'an Jiaotong University, Shaanxi 710049, China
- Xi Chen: School of Information and Communications Engineering, Xi'an Jiaotong University, Shaanxi 710049, China
- Xiaoyu Wang: School of Information and Communications Engineering, Xi'an Jiaotong University, Shaanxi 710049, China
- Xuanqin Mou: School of Information and Communications Engineering, Xi'an Jiaotong University, Shaanxi 710049, China
- Ling Bai: Department of Ophthalmology, the Second Affiliated Hospital of Xi'an Jiaotong University, Shaanxi 710004, China
- Shu Zhang: Department of Geriatric Surgery, the Second Affiliated Hospital of Xi'an Jiaotong University, Shaanxi 710004, China
- Zhiguo Zhou: Department of Biostatistics and Data Science, University of Kansas Medical Center, KS 66160, USA
15.
Mridha MF, Prodeep AR, Hoque ASMM, Islam MR, Lima AA, Kabir MM, Hamid MA, Watanobe Y. A Comprehensive Survey on the Progress, Process, and Challenges of Lung Cancer Detection and Classification. J Healthc Eng 2022; 2022:5905230. [PMID: 36569180] [PMCID: PMC9788902] [DOI: 10.1155/2022/5905230]
Abstract
Lung cancer is the leading cause of cancer deaths worldwide, and its death rate continues to rise. Early detection improves the chances of recovery. However, because radiologists are few in number and often overworked, the growth in image data makes accurate evaluation difficult. As a result, many researchers have developed automated methods that use medical imaging to predict the growth of cancer cells quickly and accurately. Much prior work addressed computer-aided detection (CADe) and computer-aided diagnosis (CADx) in computed tomography (CT), magnetic resonance imaging (MRI), and X-ray, with the goals of effective detection and segmentation of pulmonary nodules and classification of nodules as malignant or benign. Still, no comprehensive review covering all aspects of lung cancer detection has been published. In this paper, every aspect of lung cancer detection is discussed in detail, including datasets, image preprocessing, segmentation methods, optimal feature extraction and selection methods, evaluation metrics, and classifiers. Finally, the study examines several lung cancer-related issues and possible solutions.
Affiliation(s)
- M. F. Mridha: Department of Computer Science and Engineering, American International University Bangladesh, Dhaka 1229, Bangladesh
- Akibur Rahman Prodeep: Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- A. S. M. Morshedul Hoque: Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- Md. Rashedul Islam: Department of Computer Science and Engineering, University of Asia Pacific, Dhaka 1216, Bangladesh
- Aklima Akter Lima: Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- Muhammad Mohsin Kabir: Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- Md. Abdul Hamid: Department of Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Yutaka Watanobe: Department of Computer Science and Engineering, University of Aizu, Aizuwakamatsu 965-8580, Japan
16.
Pham TD, Ravi V, Fan C, Luo B, Sun XF. Classification of IHC Images of NATs With ResNet-FRP-LSTM for Predicting Survival Rates of Rectal Cancer Patients. IEEE J Transl Eng Health Med 2022; 11:87-95. [PMID: 36704244] [PMCID: PMC9870269] [DOI: 10.1109/jtehm.2022.3229561]
Abstract
BACKGROUND For over a decade, tissues dissected adjacent to primary tumors have been considered "normal" or healthy samples (NATs). However, NATs have recently been discovered to be distinct from both tumorous and normal tissues. The ability to predict the survival rate of cancer patients using NATs can open a new door to selecting optimal cancer treatments and discovering biomarkers. METHODS This paper introduces an artificial intelligence (AI) approach that uses NATs to predict the 5-year survival of pre-operative radiotherapy patients with rectal cancer. The new approach combines pre-trained deep learning, nonlinear dynamics, and long short-term memory to classify immunohistochemical images of RhoB protein expression on NATs. RESULTS Ten-fold cross-validation shows 88% prediction accuracy for the new approach, higher than that of baseline methods. CONCLUSION These preliminary results not only add objective evidence to recent findings on NATs' molecular characteristics using state-of-the-art AI methods, but also contribute to the discovery of RhoB expression on NATs in rectal-cancer patients. CLINICAL IMPACT The ability to predict the survival rate of cancer patients is extremely important for clinical decision-making. The proposed AI tool is promising for assisting oncologists in treating rectal cancer patients.
Affiliation(s)
- Tuan D Pham: Center for Artificial Intelligence, Prince Mohammad Bin Fahd University, Khobar 31952, Saudi Arabia
- Vinayakumar Ravi: Center for Artificial Intelligence, Prince Mohammad Bin Fahd University, Khobar 31952, Saudi Arabia
- Chuanwen Fan: Department of Oncology, Linkoping University, 58185 Linkoping, Sweden; Department of Biomedical and Clinical Sciences, Linkoping University, 58185 Linkoping, Sweden
- Bin Luo: Department of Oncology, Linkoping University, 58185 Linkoping, Sweden; Department of Biomedical and Clinical Sciences, Linkoping University, 58185 Linkoping, Sweden; Department of Gastrointestinal Surgery, Sichuan Provincial People's Hospital, Chengdu 610032, China
- Xiao-Feng Sun: Department of Oncology, Linkoping University, 58185 Linkoping, Sweden; Department of Biomedical and Clinical Sciences, Linkoping University, 58185 Linkoping, Sweden
17.
Artificial Intelligence assisted discrimination between pulmonary tuberculous nodules and solid lung cancer nodules. Clin eHealth 2022. [DOI: 10.1016/j.ceh.2022.12.001]
18.
Christie JR, Daher O, Abdelrazek M, Romine PE, Malthaner RA, Qiabi M, Nayak R, Napel S, Nair VS, Mattonen SA. Predicting recurrence risks in lung cancer patients using multimodal radiomics and random survival forests. J Med Imaging (Bellingham) 2022; 9:066001. [PMID: 36388142] [PMCID: PMC9641263] [DOI: 10.1117/1.jmi.9.6.066001]
Abstract
Purpose We developed a model integrating multimodal quantitative imaging features from tumor and nontumor regions, qualitative features, and clinical data to improve the risk stratification of patients with resectable non-small cell lung cancer (NSCLC). Approach We retrospectively analyzed 135 patients [mean age, 69 years (range, 43 to 87); 100 male and 35 female] with NSCLC who underwent upfront surgical resection between 2008 and 2012. The tumor and peritumoral regions on both preoperative CT and FDG PET-CT, and the vertebral bodies L3 to L5 on FDG PET, were segmented to assess tumor and bone marrow uptake, respectively. Radiomic features were extracted and combined with clinical and CT qualitative features. A random survival forest model was developed using the top-performing features to predict the time to recurrence/progression in the training cohort (n = 101), validated in the testing cohort (n = 34) using the concordance, and compared with a stage-only model. Patients were stratified into high- and low-risk groups for recurrence/progression using Kaplan-Meier analysis. Results The model, consisting of stage, three wavelet texture features, and three wavelet first-order features, achieved a concordance of 0.78 and 0.76 in the training and testing cohorts, respectively, significantly outperforming the baseline stage-only model results of 0.67 (p < 0.005) and 0.60 (p = 0.008), respectively. Patients at high and low risk of recurrence/progression were significantly stratified in both the training (p < 0.005) and testing (p = 0.03) cohorts. Conclusions Our radiomic model, consisting of stage plus tumor, peritumoral, and bone marrow features from CT and FDG PET-CT, significantly stratified patients into low- and high-risk groups for recurrence/progression.
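The concordance reported above (Harrell's C-index) measures how often the model's risk score correctly orders comparable patient pairs, accounting for censoring. A small pure-Python implementation on toy data, not the study's pipeline:

```python
from itertools import combinations

def concordance_index(times, events, risks):
    """Harrell's C-index: the fraction of comparable pairs that the risk score
    orders correctly (higher risk -> earlier event). Risk ties count 0.5."""
    concordant, comparable = 0.0, 0
    for i, j in combinations(range(len(times)), 2):
        # order the pair so that subject i has the shorter follow-up time
        if times[j] < times[i]:
            i, j = j, i
        # a pair is comparable only if the earlier subject had an event
        if not events[i] or times[i] == times[j]:
            continue
        comparable += 1
        if risks[i] > risks[j]:
            concordant += 1.0
        elif risks[i] == risks[j]:
            concordant += 0.5
    return concordant / comparable

# Toy example: risk scores that mostly, but not perfectly, follow event order
times = [2, 4, 6, 8, 10]          # follow-up times
events = [1, 1, 0, 1, 0]          # 1 = recurrence/progression, 0 = censored
risks = [0.9, 0.3, 0.2, 0.4, 0.1]  # model risk scores
print(concordance_index(times, events, risks))
```

A C-index of 0.5 is chance-level ordering and 1.0 is perfect ordering, which is why the paper's 0.76 to 0.78 beats the 0.60 to 0.67 of the stage-only model.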
Affiliation(s)
- Jaryd R. Christie: Western University, Department of Medical Biophysics, London, Ontario, Canada; London Regional Cancer Program, Baines Imaging Research Laboratory, London, Ontario, Canada
- Omar Daher: Western University, Department of Medical Imaging, London, Ontario, Canada
- Mohamed Abdelrazek: Western University, Department of Medical Imaging, London, Ontario, Canada
- Perrin E. Romine: Fred Hutchinson Cancer Research Center, Clinical Research Division, Seattle, Washington, United States; University of Washington School of Medicine, Division of Medical Oncology, Seattle, Washington, United States
- Richard A. Malthaner: Western University, Division of Thoracic Surgery, Department of Surgery, London, Ontario, Canada
- Mehdi Qiabi: Western University, Division of Thoracic Surgery, Department of Surgery, London, Ontario, Canada
- Rahul Nayak: Western University, Division of Thoracic Surgery, Department of Surgery, London, Ontario, Canada
- Sandy Napel: Stanford University, Department of Radiology, Stanford, California, United States
- Viswam S. Nair: Fred Hutchinson Cancer Research Center, Clinical Research Division, Seattle, Washington, United States; University of Washington School of Medicine, Division of Pulmonary and Critical Care Medicine, Seattle, Washington, United States
- Sarah A. Mattonen: Western University, Department of Medical Biophysics, London, Ontario, Canada; London Regional Cancer Program, Baines Imaging Research Laboratory, London, Ontario, Canada; Western University, Department of Oncology, London, Ontario, Canada
19.
Guan X, Lu N, Zhang J. Evaluation of Epidermal Growth Factor Receptor 2 Status in Gastric Cancer by CT-Based Deep Learning Radiomics Nomogram. Front Oncol 2022; 12:905203. [PMID: 35898877] [PMCID: PMC9309372] [DOI: 10.3389/fonc.2022.905203]
Abstract
Purpose To explore the role of computed tomography (CT)-based deep learning and radiomics in the preoperative evaluation of human epidermal growth factor receptor 2 (HER2) status in gastric cancer. Materials and methods Clinical data on gastric cancer patients were evaluated retrospectively, and 357 patients were chosen for this study (training cohort: 249; test cohort: 108). Preprocessed enhanced CT arterial-phase images were selected for lesion segmentation and for radiomics and deep learning feature extraction. We integrated the deep learning features and radiomic features (Inte). Four methods were used for feature selection. We constructed models with a support vector machine (SVM) or random forest (RF). The area under the receiver operating characteristic curve (AUC) was used to assess the performance of these models. We also constructed a nomogram including Inte-feature scores and clinical factors. Results The radiomics-SVM model showed good classification performance (AUC, training cohort: 0.8069; test cohort: 0.7869). The AUCs of the ResNet50-SVM model and the Inte-SVM model in the test cohort were 0.8955 and 0.9055, respectively. The nomogram showed excellent discrimination, achieving a greater AUC (training cohort: 0.9207; test cohort: 0.9224). Conclusion A CT-based deep learning radiomics nomogram can accurately and effectively assess HER2 status in patients with gastric cancer before surgery; it is expected to assist physicians in clinical decision-making and facilitate individualized treatment planning.
Affiliation(s)
- Xiao Guan: Department of General Surgery, The Second Affiliated Hospital of Nanjing Medical University, Nanjing Medical University, Nanjing, China
- Na Lu: Department of General Surgery, The Second Affiliated Hospital of Nanjing Medical University, Nanjing Medical University, Nanjing, China
20.
New Optimized Deep Learning Application for COVID-19 Detection in Chest X-ray Images. Symmetry (Basel) 2022. [DOI: 10.3390/sym14051003]
Abstract
Because of false-negative results of the real-time Reverse Transcriptase-Polymerase Chain Reaction (RT-PCR) test, complementary modalities such as computed tomography (CT) and X-ray imaging are used in combination with RT-PCR to achieve a more accurate diagnosis of COVID-19 in clinical practice. Since radiology involves visual understanding as well as decision making under constraints such as uncertainty, urgency, patient burden, and hospital facilities, mistakes are inevitable. There is therefore an immediate need for further investigation and for new, accurate detection and identification methods that provide automatic quantitative evaluation of COVID-19. In this paper, we propose a new computer-aided diagnosis application for COVID-19 detection using deep learning techniques. The new technique, which takes symmetric X-ray data as input, combines Convolutional Neural Networks (CNN) with the Ant Lion Optimization Algorithm (ALO) and a Multiclass Naïve Bayes (NB) classifier. Several other classifiers, including Softmax, Support Vector Machines (SVM), K-Nearest Neighbors (KNN), and Decision Tree (DT), are also combined with CNN. The promising results of these classifiers are evaluated and reported for accuracy, precision, and F1-score. The NB classifier with the Ant Lion Optimization Algorithm and CNN produced the best results, with 98.31% accuracy, 100% precision, and a 98.25% F1-score, along with the lowest execution time.
21.
Solah M, Huang H, Sheng J, Feng T, Pomplun M, Yu LF. Mood-Driven Colorization of Virtual Indoor Scenes. IEEE Trans Vis Comput Graph 2022; 28:2058-2068. [PMID: 35167476] [DOI: 10.1109/tvcg.2022.3150513]
Abstract
One of the challenging tasks in virtual scene design for Virtual Reality (VR) is making a scene evoke a particular mood in viewers; the subjective nature of moods adds uncertainty to the task. We propose a novel approach that automatically adjusts the colors of object textures in a virtual indoor scene so that it matches a target mood. A dataset of 25,000 images, including building and home interiors, was used to train a classifier on features extracted via deep learning. The classifier drives an optimization process that colorizes virtual scenes automatically according to the target mood. Our approach was tested on four different indoor scenes, and a user study with statistical analysis demonstrated its efficacy, focusing on the impact of scenes experienced with a VR headset.
22.
Rim Enhancement after Technically Successful Transarterial Chemoembolization in Hepatocellular Carcinoma: A Potential Mimic of Incomplete Embolization or Reactive Hyperemia? Tomography 2022; 8:1148-1158. [PMID: 35448728] [PMCID: PMC9028792] [DOI: 10.3390/tomography8020094]
Abstract
Contrast enhancement at the margins/rim of embolization areas in hepatocellular-carcinoma (HCC) lesions treated with transarterial chemoembolization (TACE) might be an early prognostic indicator for HCC recurrence. The aim of this study was to evaluate the predictive value of rim perfusion for TACE recurrence as determined by perfusion CT (PCT). A total of 52 patients (65.6 ± 9.3 years) underwent PCT directly before, immediately after (within 48 h) and at follow-up (95.3 ± 12.5 days) after TACE. Arterial-liver perfusion (ALP), portal-venous perfusion (PVP) and the hepatic-perfusion index (HPI) were evaluated in normal liver parenchyma, on the embolization rim, and in the tumor bed. A total of 42 lesions were successfully treated, and PCT measurements showed no residually vascularized tumor areas. Embolization was not entirely successful in 10 patients with remaining arterialized focal nodular areas (ALP 34.7 ± 10.1 vs. 4.4 ± 5.3 mL/100 mL/min, p < 0.0001). Perfusion values at the TACE rim were lower in responders compared to normal adjacent liver parenchyma and the edges of incompletely embolized tumors (ALP liver 16.3 ± 10.1 mL/100 mL/min, rim responder 8.8 ± 8.7 mL/100 mL/min, rim non-responder 23.4 ± 8.6 mL/100 mL/min, p = 0.005). At follow-up, local tumor relapse was observed in 17/42, and 15/42 showed no recurrence (ALP 39.1 ± 10.1 mL/100 mL/min vs. 10.0 ± 7.4 mL/100 mL/min, p = 0.0008); four patients had de novo disseminated disease and six patients were lost to follow-up. Rim perfusion was lower compared to adjacent recurring HCC and did not differ between groups. HCC lesions showed no rim perfusion after TACE, neither immediately after nor at the three-month follow-up, both for mid-term responders and mid-term relapsing HCCs, indicating that rim enhancement is not a sign of reactive hyperemia and is not predictive of early HCC recurrence.
23.
Face Recognition Based on Deep Learning and FPGA for Ethnicity Identification. Appl Sci (Basel) 2022. [DOI: 10.3390/app12052605]
Abstract
In the last decade, there has been a surge of interest in addressing complex Computer Vision (CV) problems in the field of face recognition (FR). One of the most difficult is the accurate determination of a person's ethnicity. In this regard, a new classification method using Machine Learning (ML) tools is proposed in this paper. Specifically, a new Deep Learning (DL) approach based on a Deep Convolutional Neural Network (DCNN) model is developed, which reliably determines ethnicity from facial features. However, building a workable DCNN-based FR system requires specialized high-performance computing (HPC) hardware, given the low computational power of current central processing units (CPUs). Hardware acceleration has recently increased network efficiency in terms of power usage and execution time, so the use of field-programmable gate arrays (FPGAs) was considered in this work. The performance of the new DCNN-based FR method on an FPGA was compared against that on graphics processing units (GPUs). The experiments used an image dataset of 3141 photographs of citizens from three distinct countries; to our knowledge, this is the first image collection gathered specifically to address the ethnicity identification problem. The ethnicity dataset was also made publicly available as a contribution of this work. Finally, the experimental results demonstrated the high performance of the proposed DCNN model on FPGAs, achieving an accuracy of 96.9 percent and an F1 score of 94.6 percent while using a reasonable amount of energy and hardware resources.
24.
Parida PK, Dora L, Swain M, Agrawal S, Panda R. Data science methodologies in smart healthcare: a review. Health Technol 2022. [DOI: 10.1007/s12553-022-00648-9]
25.
Jones MA, Faiz R, Qiu Y, Zheng B. Improving mammography lesion classification by optimal fusion of handcrafted and deep transfer learning features. Phys Med Biol 2022; 67. [PMID: 35130517] [PMCID: PMC8935657] [DOI: 10.1088/1361-6560/ac5297]
Abstract
Objective. Handcrafted radiomics features and automated features generated by deep learning models are commonly used to develop computer-aided diagnosis (CAD) schemes for medical images. The objective of this study is to test the hypothesis that handcrafted and automated features contain complementary classification information and that fusing the two types of features can improve CAD performance. Approach. We retrospectively assembled a dataset of 1535 lesions (740 malignant and 795 benign). Regions of interest (ROI) surrounding suspicious lesions were extracted, and two types of features were computed from each ROI. The first includes 40 radiomic features; the second includes automated features computed from a VGG16 network using a transfer learning method. A single-channel ROI image is converted into a three-channel pseudo-ROI image by stacking the original image, a bilateral-filtered image, and a histogram-equalized image. Two VGG16 models, one using pseudo-ROIs and one using three stacked original ROIs without pre-processing, are used to extract automated features. Five linear support vector machines (SVM) are built using the optimally selected feature vectors from the handcrafted features, the two sets of VGG16-generated automated features, and the fusion of the handcrafted features with each set of automated features, respectively. Main results. Using 10-fold cross-validation, the fusion SVM using pseudo-ROIs yields the highest lesion classification performance, with an area under the ROC curve (AUC) of 0.756 ± 0.042, significantly higher than that of the other SVMs trained using handcrafted or automated features only (p < 0.05). Significance. This study demonstrates that both handcrafted and automated features contain useful information for classifying breast lesions, and that fusing the two types of features can further increase CAD performance.
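The pseudo-ROI construction is the key preprocessing step: a single-channel ROI becomes a three-channel image by stacking the original, a filtered copy, and a histogram-equalized copy, matching the input shape of an ImageNet-pretrained backbone. A numpy sketch with a box filter standing in for the paper's bilateral filter (an assumption for brevity) and a random array standing in for a mammography ROI:

```python
import numpy as np

def equalize(img):
    """Histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
    return (cdf * 255).astype(np.uint8)[img]           # apply as a lookup table

def smooth(img, k=3):
    """Crude box filter standing in for the paper's bilateral filter."""
    pad = k // 2
    p = np.pad(img.astype(np.float64), pad, mode="edge")
    out = np.zeros(img.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return (out / (k * k)).astype(np.uint8)

rng = np.random.default_rng(2)
roi = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)  # stand-in ROI

# Stack original, smoothed, and equalized versions into a 3-channel pseudo-ROI
pseudo_rgb = np.stack([roi, smooth(roi), equalize(roi)], axis=-1)
print(pseudo_rgb.shape)  # (32, 32, 3)
```

Each channel then carries a differently pre-processed view of the same lesion, which is what the study found improves the VGG16 features over simply repeating the original channel three times.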
Affiliation(s)
- Meredith A. Jones: School of Biomedical Engineering, University of Oklahoma, Norman, OK 73019, USA
- Rowzat Faiz: School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
- Yuchen Qiu: School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
- Bin Zheng: School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
26
Atkins KM, Weiss J, Zeleznik R, Bitterman DS, Chaunzwa TL, Huynh E, Guthier C, Kozono DE, Lewis JH, Tamarappoo BK, Nohria A, Hoffmann U, Aerts HJWL, Mak RH. Elevated Coronary Artery Calcium Quantified by a Validated Deep Learning Model From Lung Cancer Radiotherapy Planning Scans Predicts Mortality. JCO Clin Cancer Inform 2022; 6:e2100095. [PMID: 35084935 DOI: 10.1200/cci.21.00095] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/11/2022] Open
Abstract
PURPOSE Coronary artery calcium (CAC) quantified on computed tomography (CT) scans is a robust predictor of atherosclerotic coronary disease; however, the feasibility and relevance of quantifying CAC from lung cancer radiotherapy planning CT scans are unknown. We used a previously validated deep learning (DL) model to assess whether CAC is a predictor of all-cause mortality and major adverse cardiac events (MACEs). METHODS A retrospective analysis of non-contrast-enhanced radiotherapy planning CT scans from 428 patients with locally advanced lung cancer was performed. The DL-CAC algorithm was previously trained on 1,636 cardiac-gated CT scans and tested on four clinical trial cohorts. Plaques ≥ 1 cubic millimeter were measured to generate an Agatston-like DL-CAC score, and patients were grouped as DL-CAC = 0 (very low risk) or DL-CAC ≥ 1 (elevated risk). Cox and Fine and Gray regressions were adjusted for lung cancer and cardiovascular factors. RESULTS The median follow-up was 18.1 months. The majority (61.4%) had a DL-CAC ≥ 1. There was an increased risk of all-cause mortality with DL-CAC ≥ 1 versus DL-CAC = 0 (adjusted hazard ratio, 1.51; 95% CI, 1.01 to 2.26; P = .04), with 2-year estimates of 56.2% versus 45.4%, respectively. There was a trend toward increased risk of major adverse cardiac events with DL-CAC ≥ 1 versus DL-CAC = 0 (hazard ratio, 1.80; 95% CI, 0.87 to 3.74; P = .11), with 2-year estimates of 7.3% versus 1.2%, respectively. CONCLUSION In this proof-of-concept study, CAC was effectively measured from routinely acquired radiotherapy planning CT scans using an automated model. Elevated CAC, as predicted by the DL model, was associated with an increased risk of mortality, suggesting a potential benefit for automated cardiac risk screening before cancer therapy begins.
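For readers unfamiliar with Agatston-style scoring, the sketch below shows the general idea behind a CAC score and the study's DL-CAC = 0 versus DL-CAC ≥ 1 grouping. The plaque values are hypothetical; the paper's DL model localizes and measures plaques automatically, which is not reproduced here.

```python
# Hedged sketch of an Agatston-style score: each calcified plaque contributes
# (area in mm^2) x (density weight from its peak attenuation in HU).
# Plaques are given here as (area_mm2, peak_hu) tuples for illustration only.

def density_weight(peak_hu: float) -> int:
    """Standard Agatston density weighting by peak attenuation."""
    if peak_hu < 130:
        return 0  # below the calcium threshold, not scored
    if peak_hu < 200:
        return 1
    if peak_hu < 300:
        return 2
    if peak_hu < 400:
        return 3
    return 4

def agatston_like_score(plaques):
    """Sum of area x density weight over all plaques."""
    return sum(area * density_weight(hu) for area, hu in plaques)

def risk_group(score: float) -> str:
    # The study dichotomized patients into DL-CAC = 0 vs DL-CAC >= 1
    return "very low risk" if score == 0 else "elevated risk"

plaques = [(3.0, 250), (1.5, 420)]    # hypothetical plaques
score = agatston_like_score(plaques)  # 3.0*2 + 1.5*4 = 12.0
print(score, risk_group(score))
```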
Affiliation(s)
- Katelyn M Atkins: Department of Radiation Oncology, Cedars-Sinai Medical Center, Los Angeles, CA; Department of Radiation Oncology, Dana-Farber Cancer Institute and Brigham and Women's Hospital, Boston, MA
- Jakob Weiss: Department of Radiation Oncology, Dana-Farber Cancer Institute and Brigham and Women's Hospital, Boston, MA; Artificial Intelligence in Medicine (AIM) Program, Brigham and Women's Hospital, Harvard Medical School, Boston, MA; Department of Diagnostic and Interventional Radiology, University Hospital, Freiburg, Germany
- Roman Zeleznik: Department of Radiation Oncology, Dana-Farber Cancer Institute and Brigham and Women's Hospital, Boston, MA; Artificial Intelligence in Medicine (AIM) Program, Brigham and Women's Hospital, Harvard Medical School, Boston, MA
- Danielle S Bitterman: Department of Radiation Oncology, Dana-Farber Cancer Institute and Brigham and Women's Hospital, Boston, MA; Artificial Intelligence in Medicine (AIM) Program, Brigham and Women's Hospital, Harvard Medical School, Boston, MA
- Tafadzwa L Chaunzwa: Department of Radiation Oncology, Dana-Farber Cancer Institute and Brigham and Women's Hospital, Boston, MA; Artificial Intelligence in Medicine (AIM) Program, Brigham and Women's Hospital, Harvard Medical School, Boston, MA
- Elizabeth Huynh: Department of Radiation Oncology, Dana-Farber Cancer Institute and Brigham and Women's Hospital, Boston, MA
- Christian Guthier: Department of Radiation Oncology, Dana-Farber Cancer Institute and Brigham and Women's Hospital, Boston, MA
- David E Kozono: Department of Radiation Oncology, Dana-Farber Cancer Institute and Brigham and Women's Hospital, Boston, MA
- John H Lewis: Department of Radiation Oncology, Cedars-Sinai Medical Center, Los Angeles, CA
- Anju Nohria: Department of Cardiovascular Medicine, Dana-Farber Cancer Institute and Brigham and Women's Hospital, Boston, MA
- Udo Hoffmann: Artificial Intelligence in Medicine (AIM) Program, Brigham and Women's Hospital, Harvard Medical School, Boston, MA; Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA
- Hugo J W L Aerts: Department of Radiation Oncology, Dana-Farber Cancer Institute and Brigham and Women's Hospital, Boston, MA; Artificial Intelligence in Medicine (AIM) Program, Brigham and Women's Hospital, Harvard Medical School, Boston, MA; Radiology and Nuclear Medicine, CARIM & GROW, Maastricht University, Maastricht, the Netherlands
- Raymond H Mak: Department of Radiation Oncology, Dana-Farber Cancer Institute and Brigham and Women's Hospital, Boston, MA; Artificial Intelligence in Medicine (AIM) Program, Brigham and Women's Hospital, Harvard Medical School, Boston, MA
27

28
Balagurunathan Y, Beers A, McNitt-Gray M, Hadjiiski L, Napel S, Goldgof D, Perez G, Arbelaez P, Mehrtash A, Kapur T, Yang E, Moon JW, Bernardino G, Delgado-Gonzalo R, Farhangi MM, Amini AA, Ni R, Feng X, Bagari A, Vaidhya K, Veasey B, Safta W, Frigui H, Enguehard J, Gholipour A, Castillo LS, Daza LA, Pinsky P, Kalpathy-Cramer J, Farahani K. Lung Nodule Malignancy Prediction in Sequential CT Scans: Summary of ISBI 2018 Challenge. IEEE Trans Med Imaging 2021; 40:3748-3761. [PMID: 34264825 PMCID: PMC9531053 DOI: 10.1109/tmi.2021.3097665] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Lung cancer is by far the leading cause of cancer death in the US. Recent studies have demonstrated the effectiveness of screening using low dose CT (LDCT) in reducing lung cancer related mortality. While lung nodules are detected with a high rate of sensitivity, this exam has a low specificity rate and it is still difficult to separate benign from malignant lesions. The ISBI 2018 Lung Nodule Malignancy Prediction Challenge, developed by a team from the Quantitative Imaging Network of the National Cancer Institute, focused on the prediction of lung nodule malignancy from two sequential LDCT screening exams using automated (non-manual) algorithms. We curated a cohort of 100 subjects who participated in the National Lung Screening Trial and had established pathological diagnoses. Data from 30 subjects were randomly selected for training and the remainder were used for testing. Participants were evaluated based on the area under the receiver operating characteristic curve (AUC) of nodule-wise malignancy scores generated by their algorithms on the test set. The challenge had 17 participants, with 11 teams submitting reports with a method description, as mandated by the challenge rules. Participants used quantitative methods, reporting test AUCs ranging from 0.698 to 0.913. The top five contestants used deep learning approaches, reporting AUCs between 0.87 and 0.91. The teams' predictors did not differ significantly from each other or from a volume change estimate (p = .05 with Bonferroni-Holm correction).
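The challenge metric, nodule-wise AUC, can be computed directly from malignancy scores via the Mann-Whitney formulation; the toy scores below are hypothetical, not challenge data.

```python
# Minimal sketch of the challenge's evaluation metric: area under the ROC
# curve over nodule-wise malignancy scores, via the Mann-Whitney statistic.

def auc(scores, labels):
    """AUC = P(score of a malignant nodule > score of a benign one),
    counting ties as 1/2."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# hypothetical malignancy scores for six test nodules (1 = malignant)
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1]
print(auc(scores, labels))  # 8 of 9 positive/negative pairs correctly ordered
```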
Affiliation(s)
- Sandy Napel: Dept. of Radiology, School of Medicine, Stanford University (SU), CA
- Gustavo Perez: Biomedical computer vision lab (BCV), Universidad de los Andes, Colombia
- Pablo Arbelaez: Biomedical computer vision lab (BCV), Universidad de los Andes, Colombia
- Alireza Mehrtash: Robotics and Control Laboratory (RCL), Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC; Surgical Planning Laboratory (SPL), Radiology Department, Brigham and Women's Hospital, Boston, MA, 02130
- Tina Kapur: Surgical Planning Laboratory (SPL), Radiology Department, Brigham and Women's Hospital, Boston, MA, 02130
- Ehwa Yang: Sungkyunkwan University School of Medicine, Seoul 06351, Korea
- Jung Won Moon: Human Medical Imaging & Intervention Center, Seoul 06524, Korea
- Gabriel Bernardino: Centre Suisse d'Électronique et de Microtechnique, Neuchâtel, Switzerland
- M. Mehdi Farhangi: Medical Imaging Laboratory, University of Louisville, Louisville, KY, USA; Computer Engineering and Computer Science, University of Louisville
- Amir A. Amini: Medical Imaging Laboratory, University of Louisville, Louisville, KY, USA; Electrical and Computer Engineering Department, University of Louisville, Louisville, KY, USA
- Xue Feng: Spingbok Inc; Department of Biomedical Engineering, University of Virginia, Charlottesville
- Benjamin Veasey: Medical Imaging Laboratory, University of Louisville, Louisville, KY, USA; Electrical and Computer Engineering Department, University of Louisville, Louisville, KY, USA
- Wiem Safta: Computer Engineering and Computer Science, University of Louisville
- Hichem Frigui: Computer Engineering and Computer Science, University of Louisville
- Joseph Enguehard: Department of Radiology, Boston Children's Hospital, and Harvard Medical School
- Ali Gholipour: Department of Radiology, Boston Children's Hospital, and Harvard Medical School
- Laura Alexandra Daza: Department of Biomedical Engineering, Universidad de los Andes, Bogota, Colombia
- Paul Pinsky: Division of Cancer Prevention, National Cancer Institute (NCI), Washington DC
- Keyvan Farahani: Center for Biomedical Informatics and Information Technology, National Cancer Institute (NCI), Washington DC
29
Rezaeijo SM, Ghorvei M, Abedi-Firouzjah R, Mojtahedi H, Entezari Zarch H. Detecting COVID-19 in chest images based on deep transfer learning and machine learning algorithms. Egypt J Radiol Nucl Med 2021. [PMCID: PMC8193170 DOI: 10.1186/s43055-021-00524-y] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 12/11/2022]
Abstract
Background
This study aimed to propose an automatic prediction of COVID-19 disease using chest CT images based on deep transfer learning models and machine learning (ML) algorithms.
Results
The dataset consisted of 5480 samples in two classes: 2740 chest CT images of patients with confirmed COVID-19 and 2740 images of suspected cases. The DenseNet201 model obtained the highest training accuracy (100%). When combining pre-trained models with ML algorithms, the DenseNet201 model with the KNN algorithm achieved the best performance, with an accuracy of 100%. The t-SNE map created for the DenseNet201 model showed no points clustered with the wrong class.
Conclusions
The models described can be used in remote areas, in low- and middle-income countries, and in laboratories with limited equipment and resources, to help overcome a shortage of radiologists.
30
Yousefirizi F, Decazes P, Amyar A, Ruan S, Saboury B, Rahmim A. AI-Based Detection, Classification and Prediction/Prognosis in Medical Imaging: Towards Radiophenomics. PET Clin 2021; 17:183-212. [PMID: 34809866 DOI: 10.1016/j.cpet.2021.09.010] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
Abstract
Artificial intelligence (AI) techniques have significant potential to enable effective, robust, and automated image phenotyping including the identification of subtle patterns. AI-based detection searches the image space to find the regions of interest based on patterns and features. There is a spectrum of tumor histologies from benign to malignant that can be identified by AI-based classification approaches using image features. The extraction of minable information from images gives way to the field of "radiomics" and can be explored via explicit (handcrafted/engineered) and deep radiomics frameworks. Radiomics analysis has the potential to be used as a noninvasive technique for the accurate characterization of tumors to improve diagnosis and treatment monitoring. This work reviews AI-based techniques, with a special focus on oncological PET and PET/CT imaging, for different detection, classification, and prediction/prognosis tasks. We also discuss needed efforts to enable the translation of AI techniques to routine clinical workflows, and potential improvements and complementary techniques such as the use of natural language processing on electronic health records and neuro-symbolic AI techniques.
Affiliation(s)
- Fereshteh Yousefirizi: Department of Integrative Oncology, BC Cancer Research Institute, 675 West 10th Avenue, Vancouver, British Columbia V5Z 1L3, Canada
- Pierre Decazes: Department of Nuclear Medicine, Henri Becquerel Centre, Rue d'Amiens - CS 11516 - 76038 Rouen Cedex 1, France; QuantIF-LITIS, Faculty of Medicine and Pharmacy, Research Building - 1st floor, 22 boulevard Gambetta, 76183 Rouen Cedex, France
- Amine Amyar: QuantIF-LITIS, Faculty of Medicine and Pharmacy, Research Building - 1st floor, 22 boulevard Gambetta, 76183 Rouen Cedex, France; General Electric Healthcare, Buc, France
- Su Ruan: QuantIF-LITIS, Faculty of Medicine and Pharmacy, Research Building - 1st floor, 22 boulevard Gambetta, 76183 Rouen Cedex, France
- Babak Saboury: Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, MD, USA; Department of Computer Science and Electrical Engineering, University of Maryland, Baltimore County, Baltimore, MD, USA; Department of Radiology, Hospital of the University of Pennsylvania, Philadelphia, PA, USA
- Arman Rahmim: Department of Integrative Oncology, BC Cancer Research Institute, 675 West 10th Avenue, Vancouver, British Columbia V5Z 1L3, Canada; Department of Radiology, University of British Columbia, Vancouver, British Columbia, Canada; Department of Physics, University of British Columbia, Vancouver, British Columbia, Canada
31
Aiello M, Esposito G, Pagliari G, Borrelli P, Brancato V, Salvatore M. How does DICOM support big data management? Investigating its use in medical imaging community. Insights Imaging 2021; 12:164. [PMID: 34748101 PMCID: PMC8574146 DOI: 10.1186/s13244-021-01081-8] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2021] [Accepted: 08/25/2021] [Indexed: 12/15/2022] Open
Abstract
The diagnostic imaging field is experiencing considerable growth, accompanied by the production of massive amounts of data. The lack of standardization and privacy concerns are considered the main barriers to big data capitalization. This work aims to verify whether the advanced features of the DICOM standard, beyond imaging data storage, are effectively used in research practice. This issue is analyzed by investigating publicly shared medical imaging databases and assessing to what extent the most common medical imaging software tools support DICOM's full potential. To this end, 100 public databases and ten medical imaging software tools were selected and examined using a systematic approach. In particular, the DICOM fields related to privacy, segmentation and reporting were assessed in the selected databases; the software tools were evaluated for reading and writing the same DICOM fields. From our analysis, less than a third of the databases examined use the DICOM format to record meaningful information for managing the images. Regarding software, the vast majority do not allow the management, reading and writing of some or all of the DICOM fields. Surprisingly, among the chest computed tomography datasets shared to address the COVID-19 emergency, only two out of 12 were released in DICOM format. Our work shows how DICOM can potentially fully support big data management; however, further efforts are still needed from the scientific and technological community to promote the use of the existing standard, encouraging data sharing and interoperability for a concrete development of big data analytics.
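The audit described here, checking whether shared datasets populate DICOM fields beyond pixel storage, can be sketched as a simple presence check over parsed headers. The field names below are standard DICOM attributes (PatientIdentityRemoved, DeidentificationMethod, SegmentSequence, ContentSequence), but the grouping into privacy/segmentation/reporting is an illustrative assumption, not the paper's exact field list; the dict stands in for a parsed header such as a pydicom Dataset.

```python
# Hedged illustration of a DICOM metadata audit: which advanced feature
# groups does a dataset's header actually use? The dict below stands in for
# a parsed DICOM header; attribute names are standard DICOM tags.

PRIVACY_FIELDS = ["PatientIdentityRemoved", "DeidentificationMethod"]
SEGMENTATION_FIELDS = ["SegmentSequence"]          # DICOM Segmentation objects
REPORTING_FIELDS = ["ContentSequence"]             # DICOM Structured Reports

def audit(header: dict) -> dict:
    """Return which feature groups a header populates at all."""
    groups = {
        "privacy": PRIVACY_FIELDS,
        "segmentation": SEGMENTATION_FIELDS,
        "reporting": REPORTING_FIELDS,
    }
    return {name: any(f in header for f in fields) for name, fields in groups.items()}

header = {"PatientIdentityRemoved": "YES", "Modality": "CT"}  # hypothetical header
print(audit(header))
```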
Affiliation(s)
- Marco Aiello: IRCCS SDN, Via Emanuele Gianturco 113, 80143, Naples, Italy
32
Chen W, Hou X, Hu Y, Huang G, Ye X, Nie S. A deep learning- and CT image-based prognostic model for the prediction of survival in non-small cell lung cancer. Med Phys 2021; 48:7946-7958. [PMID: 34661294 DOI: 10.1002/mp.15302] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2021] [Revised: 09/19/2021] [Accepted: 10/10/2021] [Indexed: 12/19/2022] Open
Abstract
OBJECTIVE To assist clinicians in arranging personalized treatment, planning follow-up programs and extending survival times for non-small cell lung cancer (NSCLC) patients, a method of deep learning combined with computed tomography (CT) imaging for survival prediction was designed. METHODS Data were collected from 484 patients from four research centers. The data from 344 patients were utilized to build the A_CNN survival prognosis model to classify 2-year overall survival time ranges (730 days cut-off). Data from 140 patients, including independent internal and external test sets, were utilized for model testing. First, a series of preprocessing techniques were used to process the original CT images and generate training and test data sets from the axial, coronal, and sagittal planes. Second, the structure of the A_CNN model was designed based on asymmetric convolution, bottleneck blocks, the uniform cross-entropy (UC) loss function, and other advanced techniques. After that, the A_CNN model was trained, and numerous comparative experiments were designed to obtain the best prognostic survival model. Last, the model performance was evaluated, and the predicted survival curves were analyzed. RESULTS The A_CNN survival prognosis model yielded a high patient-level accuracy of 88.8%, a patch-level accuracy of 82.9%, and an area under the receiver operating characteristic (ROC) curve (AUC) of 0.932. When tested on an external data set, the maximum patient-level accuracy was 80.0%. CONCLUSIONS The results suggest that using a deep learning method can improve prognosis in patients with NSCLC and has important application value in establishing individualized prognostic models.
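The distinct patch-level and patient-level accuracies reported here imply an aggregation step from per-patch predictions to a per-patient call. A common choice, shown below as an assumption rather than the paper's documented method, is majority voting over a patient's patches.

```python
# Sketch of patch-to-patient aggregation: each patient's CT yields many
# patches, and the patient-level prediction is the majority vote over them.
from collections import Counter

def patient_prediction(patch_preds):
    """Majority vote over per-patch predictions (0 = short, 1 = long survival)."""
    return Counter(patch_preds).most_common(1)[0][0]

def patient_level_accuracy(per_patient_preds, labels):
    hits = sum(patient_prediction(p) == y for p, y in zip(per_patient_preds, labels))
    return hits / len(labels)

preds = [[1, 1, 0, 1], [0, 0, 1], [0, 0, 0]]  # hypothetical patch predictions
labels = [1, 0, 1]
print(patient_level_accuracy(preds, labels))  # 2 of 3 patients correct
```

This also explains why patient-level accuracy (88.8% in the study) can exceed patch-level accuracy (82.9%): voting averages out individual patch errors.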
Affiliation(s)
- Wen Chen: School of Medical Imaging, Shanghai University of Medicine & Health Science, Shanghai, China
- Xuewen Hou: School of Medical Imaging, Shanghai University of Medicine & Health Science, Shanghai, China
- Ying Hu: School of Medical Imaging, Shanghai University of Medicine & Health Science, Shanghai, China
- Gang Huang: Department of Radiology, Shanghai Chest Hospital, Shanghai, China
- Xiaodan Ye: Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
- Shengdong Nie: School of Medical Imaging, Shanghai University of Medicine & Health Science, Shanghai, China
33
Scapicchio C, Gabelloni M, Barucci A, Cioni D, Saba L, Neri E. A deep look into radiomics. Radiol Med 2021; 126:1296-1311. [PMID: 34213702 PMCID: PMC8520512 DOI: 10.1007/s11547-021-01389-x] [Citation(s) in RCA: 172] [Impact Index Per Article: 57.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/09/2021] [Accepted: 06/15/2021] [Indexed: 11/29/2022]
Abstract
Radiomics is a process that allows the extraction and analysis of quantitative data from medical images. It is an evolving field of research with many potential applications in medical imaging. The purpose of this review is to offer a deep look into radiomics, from the basics, discussed in depth from a technical point of view, through the main applications, to the challenges that must be addressed to translate this process into clinical practice. A detailed description of the main techniques used in the various steps of the radiomics workflow, which includes image acquisition, reconstruction, pre-processing, segmentation, feature extraction and analysis, is proposed here, as well as an overview of the main promising results achieved in various applications, focusing on the limitations and possible solutions for clinical implementation. Only an in-depth and comprehensive description of current methods and applications can suggest the potential power of radiomics in fostering precision medicine and thus the care of patients, especially in cancer detection, diagnosis, prognosis and treatment evaluation.
Affiliation(s)
- Camilla Scapicchio: Academic Radiology, Department of Translational Research, University of Pisa, Via Roma 67, 56126, Pisa, Italy
- Michela Gabelloni: Academic Radiology, Department of Translational Research, University of Pisa, Via Roma 67, 56126, Pisa, Italy
- Andrea Barucci: CNR-IFAC Institute of Applied Physics "N. Carrara", 50019, Sesto Fiorentino, Italy
- Dania Cioni: Academic Radiology, Department of Surgical, Medical, Molecular Pathology and Emergency Medicine, University of Pisa, Via Roma 67, 56126, Pisa, Italy
- Luca Saba: Department of Radiology, Azienda Ospedaliero Universitaria (A.O.U.), Monserrato (Cagliari), Cagliari, Italy
- Emanuele Neri: Academic Radiology, Department of Translational Research, University of Pisa, Via Roma 67, 56126, Pisa, Italy; Italian Society of Medical and Interventional Radiology, SIRM Foundation, Via della Signora 2, 20122, Milano, Italy
34
Mao J, Akhtar J, Zhang X, Sun L, Guan S, Li X, Chen G, Liu J, Jeon HN, Kim MS, No KT, Wang G. Comprehensive strategies of machine-learning-based quantitative structure-activity relationship models. iScience 2021; 24:103052. [PMID: 34553136 PMCID: PMC8441174 DOI: 10.1016/j.isci.2021.103052] [Citation(s) in RCA: 38] [Impact Index Per Article: 12.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Early quantitative structure-activity relationship (QSAR) technologies had unsatisfactory versatility and accuracy in fields such as drug discovery because they were based on traditional machine learning and interpretive expert features. The development of big data and deep learning technologies has significantly improved the processing of unstructured data and unleashed the great potential of QSAR. Here we discuss the integration of wet experiments (which provide experimental data and reliable verification), molecular dynamics simulation (which provides mechanistic interpretation at the atomic/molecular level), and machine learning (including deep learning) techniques to improve QSAR models. We first review the history of traditional QSAR and point out its problems. We then propose a better QSAR model characterized by a new iterative framework that integrates machine learning with disparate data inputs. Finally, we discuss the application of QSAR and machine learning to many practical research fields, including drug development and clinical trials.
Affiliation(s)
- Jiashun Mao: The Interdisciplinary Graduate Program in Integrative Biotechnology and Translational Medicine, Yonsei University, Incheon 21983, Republic of Korea; Department of Biology, School of Life Sciences, Southern University of Science and Technology, 1088 Xueyuan Avenue, Shenzhen, Guangdong 518055, China; Guangdong Provincial Key Laboratory of Computational Science and Material Design, Shenzhen, Guangdong 518055, China
- Javed Akhtar: Department of Biology, School of Life Sciences, Southern University of Science and Technology, 1088 Xueyuan Avenue, Shenzhen, Guangdong 518055, China; Guangdong Provincial Key Laboratory of Cell Microenvironment and Disease Research, Shenzhen, Guangdong 518055, China
- Xiao Zhang: Shanghai Rural Commercial Bank Co., Ltd, Shanghai 200002, China
- Liang Sun: Department of Physics, City University of Hong Kong, 83 Tat Chee Avenue, Kowloon, Hong Kong, China
- Shenghui Guan: Department of Biology, School of Life Sciences, Southern University of Science and Technology, 1088 Xueyuan Avenue, Shenzhen, Guangdong 518055, China; Guangdong Provincial Key Laboratory of Computational Science and Material Design, Shenzhen, Guangdong 518055, China
- Xinyu Li: School of Life and Health Sciences and Warshel Institute for Computational Biology, The Chinese University of Hong Kong, Shenzhen 518172, China
- Guangming Chen: Department of Biology, School of Life Sciences, Southern University of Science and Technology, 1088 Xueyuan Avenue, Shenzhen, Guangdong 518055, China; Guangdong Provincial Key Laboratory of Cell Microenvironment and Disease Research, Shenzhen, Guangdong 518055, China
- Jiaxin Liu: Biotechnology, College of Life Science and Biotechnology, Yonsei University, Seoul 03722, Republic of Korea
- Hyeon-Nae Jeon: Biotechnology, College of Life Science and Biotechnology, Yonsei University, Seoul 03722, Republic of Korea
- Min Sung Kim: Biotechnology, College of Life Science and Biotechnology, Yonsei University, Seoul 03722, Republic of Korea
- Kyoung Tai No: The Interdisciplinary Graduate Program in Integrative Biotechnology and Translational Medicine, Yonsei University, Incheon 21983, Republic of Korea
- Guanyu Wang: Department of Biology, School of Life Sciences, Southern University of Science and Technology, 1088 Xueyuan Avenue, Shenzhen, Guangdong 518055, China; Guangdong Provincial Key Laboratory of Computational Science and Material Design, Shenzhen, Guangdong 518055, China; Guangdong Provincial Key Laboratory of Cell Microenvironment and Disease Research, Shenzhen, Guangdong 518055, China
35
Stefanopoulos S, Ayoub S, Qiu Q, Ren G, Osman M, Nazzal M, Ahmed A. Machine learning prediction of diabetic foot ulcers in the inpatient population. Vascular 2021; 30:1115-1123. [PMID: 34461765 DOI: 10.1177/17085381211040984] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/22/2023]
Abstract
BACKGROUND The objective of this study was to create an algorithm that could predict diabetic foot ulcer (DFU) incidence in the inpatient population. MATERIALS AND METHODS The Nationwide Inpatient Sample datasets were examined from 2008 to 2014. The International Classification of Diseases, 9th Edition, Clinical Modification (ICD-9-CM) and the Agency for Healthcare Research and Quality comorbidity codes were used to assist in the data collection. Chi-square testing was conducted using variables that positively correlated with DFUs. For descriptive statistics, the Student t-test, Wilcoxon rank-sum test, and chi-square test were used. Six predictive variables were identified, and a decision tree model (CTREE) was used to develop the algorithm. RESULTS 326,853 patients were noted to have a DFU. The major variables that contributed to this diagnosis (both with p < 0.001) were cellulitis (OR 63.87, 95% CI [63.87-64.49]) and Charcot joint (OR 25.64, 95% CI [25.09-26.20]). The model performance on the six-variable testing data was 79.5% (80.6% sensitivity and 78.3% specificity). The area under the curve (AUC) for the six-variable model was 0.88. CONCLUSION We developed an algorithm with 79.8% accuracy that could predict the likelihood of developing a DFU.
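The reported odds ratios come from standard 2x2 contingency-table analysis; a minimal sketch, with hypothetical counts rather than the NIS data:

```python
# Sketch of the odds-ratio computation behind the reported predictors
# (e.g. cellulitis OR ~ 64). Counts below are invented for illustration.

def odds_ratio(a, b, c, d):
    """a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    return (a * d) / (b * c)

# hypothetical 2x2 table: cellulitis exposure vs DFU outcome
print(odds_ratio(64, 10, 10, 100))  # (64*100)/(10*10) = 64.0
```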
Affiliation(s)
- Stavros Stefanopoulos: Department of Surgery, College of Medicine and Life Sciences, University of Toledo, Toledo, OH, USA
- Samar Ayoub: Department of Surgery, College of Medicine and Life Sciences, University of Toledo, Toledo, OH, USA
- Qiong Qiu: Department of Surgery, College of Medicine and Life Sciences, University of Toledo, Toledo, OH, USA
- Gang Ren: Department of Surgery, College of Medicine and Life Sciences, University of Toledo, Toledo, OH, USA
- Mohamed Osman: Department of Surgery, College of Medicine and Life Sciences, University of Toledo, Toledo, OH, USA
- Munier Nazzal: Department of Surgery, College of Medicine and Life Sciences, University of Toledo, Toledo, OH, USA
- Ayman Ahmed: Department of Surgery, College of Medicine and Life Sciences, University of Toledo, Toledo, OH, USA
36
Mahmood U, Shrestha R, Bates DDB, Mannelli L, Corrias G, Erdi YE, Kanan C. Detecting Spurious Correlations With Sanity Tests for Artificial Intelligence Guided Radiology Systems. Front Digit Health 2021; 3:671015. [PMID: 34713144 PMCID: PMC8521929 DOI: 10.3389/fdgth.2021.671015] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2021] [Accepted: 06/29/2021] [Indexed: 11/23/2022] Open
Abstract
Artificial intelligence (AI) has been successful at solving numerous problems in machine perception. In radiology, AI systems are rapidly evolving and show progress in guiding treatment decisions, diagnosing, localizing disease on medical images, and improving radiologists' efficiency. A critical component to deploying AI in radiology is to gain confidence in a developed system's efficacy and safety. The current gold standard approach is to conduct an analytical validation of performance on a generalization dataset from one or more institutions, followed by a clinical validation study of the system's efficacy during deployment. Clinical validation studies are time-consuming, and best practices dictate limited re-use of analytical validation data, so it is ideal to know ahead of time if a system is likely to fail analytical or clinical validation. In this paper, we describe a series of sanity tests to identify when a system performs well on development data for the wrong reasons. We illustrate the sanity tests' value by designing a deep learning system to classify pancreatic cancer seen in computed tomography scans.
Affiliation(s)
- Usman Mahmood: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, United States
- Robik Shrestha: Chester F. Carlson Center for Imaging Science, Rochester Institute of Technology, Rochester, NY, United States
- David D. B. Bates: Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, United States
- Lorenzo Mannelli: Institute of Research and Medical Care (IRCCS) SDN, Institute of Diagnostic and Nuclear Research, Naples, Italy
- Giuseppe Corrias: Department of Radiology, University of Cagliari, Cagliari, Italy
- Yusuf Emre Erdi: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, United States
- Christopher Kanan: Chester F. Carlson Center for Imaging Science, Rochester Institute of Technology, Rochester, NY, United States
37
Le VH, Kha QH, Hung TNK, Le NQK. Risk Score Generated from CT-Based Radiomics Signatures for Overall Survival Prediction in Non-Small Cell Lung Cancer. Cancers (Basel) 2021; 13:3616. PMID: 34298828; PMCID: PMC8304936; DOI: 10.3390/cancers13143616.
Abstract
Simple Summary: Despite recent advancements in lung cancer treatment, individuals with lung cancer have a dismal 5-year survival rate of only 15%. In patients with non-small cell lung cancer (NSCLC), medical images have lately been employed as a valuable marker for predicting overall survival. The primary goal of this study was to develop a risk score based on computed tomography (CT) radiomics feature signatures that may be used to predict survival in NSCLC patients. After analyzing 577 NSCLC patients from two data sets, we discovered that the risk score model's prediction ability as a prognostic indicator was superior to other clinical indicators (age, stage, and gender), and the possibility of patient risk stratification by survival was evaluated using a risk score representation of 10 radiomics signatures. According to this study, the risk score generated from CT-based radiomics signatures promises to predict overall survival in NSCLC patients.
Abstract: This study aimed to create a risk score generated from CT-based radiomics signatures that could be used to predict overall survival in patients with non-small cell lung cancer (NSCLC). We retrospectively enrolled three sets of NSCLC patients (336, 84, and 157 patients for the training, testing, and validation sets, respectively). A total of 851 radiomics features were extracted from CT images for each patient. The features most strongly linked with overall survival were chosen by pairwise correlation analysis, a Least Absolute Shrinkage and Selection Operator (LASSO) regression model, and univariate Cox proportional hazard regression. A multivariate Cox proportional hazard model was used to create a risk score for each patient, and Kaplan–Meier analysis was used to separate patients into high-risk and low-risk groups. ROC curves assessed the prediction ability of the risk score model for overall survival compared to clinical parameters.
The risk score, developed from a ten-signature radiomics model, was independent of age, gender, and stage for predicting overall survival in NSCLC patients (HR, 2.99; 95% CI, 2.27–3.93; p < 0.001); its overall survival prediction ability (AUC) in the training set was 0.696 (95% CI, 0.635–0.758), 0.705 (95% CI, 0.649–0.762), and 0.657 (95% CI, 0.589–0.726) at 1, 3, and 5 years, respectively. The risk score achieved better accuracy in predicting survival at 1, 3, and 5 years than clinical parameters in the training set: age, AUC 0.57 (95% CI, 0.499–0.64), 0.552 (95% CI, 0.489–0.616), and 0.621 (95% CI, 0.544–0.689); gender, AUC 0.554, 0.546, and 0.566; stage, AUC 0.527, 0.501, and 0.459, respectively. In the training set, the Kaplan–Meier curve revealed that NSCLC patients in the high-risk group had shorter overall survival than the low-risk group (p < 0.001). Results in the testing and validation sets were similar and statistically significant. In conclusion, risk scores developed from a ten-signature radiomics model have great potential to predict overall survival in NSCLC patients compared to clinical parameters, and the model was able to stratify NSCLC patients into high-risk and low-risk groups with respect to overall survival.
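The pipeline described above (selected signatures, a Cox linear predictor as the risk score, then a median split before Kaplan–Meier comparison) can be sketched on synthetic data. The feature matrix and Cox coefficients below are made-up stand-ins for the fitted values reported in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-patient radiomics matrix (rows = patients, cols = 10 selected
# signatures) and Cox coefficients; in the paper these come from LASSO selection
# followed by multivariate Cox proportional hazard fitting.
n_patients, n_sig = 200, 10
features = rng.normal(size=(n_patients, n_sig))
cox_coefs = rng.normal(scale=0.5, size=n_sig)

# Risk score = linear predictor of the Cox model; we simulate survival times
# whose expected value shrinks as the risk score grows.
risk = features @ cox_coefs
surv_time = rng.exponential(scale=np.exp(-risk))

# Median split into high-/low-risk groups, as done before the Kaplan-Meier
# comparison of the two survival curves.
high_risk = risk > np.median(risk)
print(surv_time[high_risk].mean(), surv_time[~high_risk].mean())
```

On this synthetic cohort the high-risk group shows shorter mean survival, mirroring the stratification the paper reports; an actual analysis would use a survival library's Cox fitter and log-rank test rather than this toy construction.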
Affiliation(s)
- Viet-Huan Le: International Master/Ph.D. Program in Medicine, College of Medicine, Taipei Medical University, Taipei 110, Taiwan; Department of Thoracic Surgery, Khanh Hoa General Hospital, Nha Trang City 65000, Vietnam
- Quang-Hien Kha: International Master/Ph.D. Program in Medicine, College of Medicine, Taipei Medical University, Taipei 110, Taiwan
- Truong Nguyen Khanh Hung: International Master/Ph.D. Program in Medicine, College of Medicine, Taipei Medical University, Taipei 110, Taiwan; Department of Orthopedic and Trauma, Cho Ray Hospital, Ho Chi Minh City 70000, Vietnam
- Nguyen Quoc Khanh Le: International Master/Ph.D. Program in Medicine, College of Medicine, Taipei Medical University, Taipei 110, Taiwan; Professional Master Program in Artificial Intelligence in Medicine, College of Medicine, Taipei Medical University, Taipei 106, Taiwan; Research Center for Artificial Intelligence in Medicine, Taipei Medical University, Taipei 106, Taiwan; Translational Imaging Research Center, Taipei Medical University Hospital, Taipei 110, Taiwan (corresponding author; Tel.: +886-2-66382736 ext. 1992; Fax: +886-02-27321956)
38
Chang R, Qi S, Yue Y, Zhang X, Song J, Qian W. Predictive Radiomic Models for the Chemotherapy Response in Non-Small-Cell Lung Cancer based on Computerized-Tomography Images. Front Oncol 2021; 11:646190. PMID: 34307127; PMCID: PMC8293296; DOI: 10.3389/fonc.2021.646190.
Abstract
The heterogeneity and complexity of non-small cell lung cancer (NSCLC) tumors mean that NSCLC patients at the same stage can have different chemotherapy prognoses. Accurate predictive models could recognize NSCLC patients likely to respond to chemotherapy so that they can be given personalized and effective treatment. We propose to identify predictive imaging biomarkers from pre-treatment CT images and construct a radiomic model that can predict the chemotherapy response in NSCLC. This single-center cohort study included 280 NSCLC patients who received first-line chemotherapy. Non-contrast CT images were taken before and after chemotherapy, and clinical information was collected. Based on the Response Evaluation Criteria in Solid Tumors and clinical criteria, responses were classified into two categories, response (n = 145) and progression (n = 135), and all data were divided into a training cohort (224 patients) and an independent test cohort (56 patients). In total, 1629 features characterizing the tumor phenotype were extracted from a cube containing the tumor lesion cropped from the pre-chemotherapy CT images. After dimensionality reduction, predictive models of the chemotherapy response of NSCLC with different feature selection methods and different machine-learning classifiers (support vector machine, random forest, and logistic regression) were constructed. For the independent test cohort, the predictive model based on a random-forest classifier with 20 radiomic features achieved the best performance, with an accuracy of 85.7% and an area under the receiver operating characteristic curve of 0.941 (95% confidence interval, 0.898–0.982). Of the 20 selected features, four were first-order statistics of image intensity and the others were texture features. For nine features, there were significant differences between the response and progression groups (p < 0.001).
In the response group, three features, indicating heterogeneity, were overrepresented and one feature indicating homogeneity was underrepresented. The proposed radiomic model with pre-chemotherapy CT features can predict the chemotherapy response of patients with non-small cell lung cancer. This radiomic model can help to stratify patients with NSCLC, thereby offering the prospect of better treatment.
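The AUC figures quoted above can be computed directly from classifier scores with the rank-sum (Mann–Whitney U) identity, without tracing the full ROC curve. A minimal numpy version (ties ignored for brevity) might look like:

```python
import numpy as np

def auc(scores, labels):
    """ROC AUC via the rank-sum (Mann-Whitney U) identity; assumes no tied scores."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)  # rank 1 = lowest score
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    # U statistic for the positive class, normalized to [0, 1].
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

print(auc([0.1, 0.4, 0.35, 0.8], [0, 0, 1, 1]))  # 0.75
```

This is equivalent to the probability that a randomly chosen responder is scored above a randomly chosen progressor, which is the quantity the 0.941 in the abstract estimates.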
Affiliation(s)
- Runsheng Chang: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Shouliang Qi: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China
- Yong Yue: Department of Radiology, Shengjing Hospital of China Medical University, Shenyang, China
- Xiaoye Zhang: Department of Oncology, Shengjing Hospital of China Medical University, Shenyang, China
- Jiangdian Song: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Wei Qian: Department of Electrical and Computer Engineering, University of Texas at El Paso, El Paso, TX, United States
39
Paul R, Shafiq-Ul Hassan M, Moros EG, Gillies RJ, Hall LO, Goldgof DB. Deep Feature Stability Analysis Using CT Images of a Physical Phantom Across Scanner Manufacturers, Cartridges, Pixel Sizes, and Slice Thickness. Tomography 2020; 6:250-260. PMID: 32548303; PMCID: PMC7289258; DOI: 10.18383/j.tom.2020.00003.
Abstract
Image acquisition parameters for computed tomography scans such as slice thickness and field of view may vary depending on tumor size and site. Recent studies have shown that some radiomics features were dependent on voxel size (= pixel size × slice thickness), and with proper normalization, this voxel size dependency could be reduced. Deep features from a convolutional neural network (CNN) have shown great promise in characterizing cancers. However, how do these deep features vary with changes in imaging acquisition parameters? To analyze the variability of deep features, a physical radiomics phantom with 10 different material cartridges was scanned on 8 different scanners. We assessed scans from 3 different cartridges (rubber, dense cork, and normal cork). Deep features from the penultimate layer of the CNN before (pre-rectified linear unit) and after (post-rectified linear unit) applying the rectified linear unit activation function were extracted from a pre-trained CNN using transfer learning. We studied both the interscanner and intrascanner dependency of deep features and also the deep features' dependency over the 3 cartridges. We found some deep features were dependent on pixel size and that, with appropriate normalization, this dependency could be reduced. False discovery rate was applied for multiple comparisons, to mitigate potentially optimistic results. We also used stable deep features for prognostic analysis on 1 non-small cell lung cancer data set.
Affiliation(s)
- Rahul Paul: Department of Computer Science and Engineering, University of South Florida, Tampa, FL
- Eduardo G Moros: Departments of Cancer Physiology and Radiation Oncology, H. L. Moffitt Cancer Center & Research Institute, Tampa, FL
- Lawrence O Hall: Department of Computer Science and Engineering, University of South Florida, Tampa, FL
- Dmitry B Goldgof: Department of Computer Science and Engineering, University of South Florida, Tampa, FL
40
Smith BJ, Buatti JM, Bauer C, Ulrich EJ, Ahmadvand P, Budzevich MM, Gillies RJ, Goldgof D, Grkovski M, Hamarneh G, Kinahan PE, Muzi JP, Muzi M, Laymon CM, Mountz JM, Nehmeh S, Oborski MJ, Zhao B, Sunderland JJ, Beichel RR. Multisite Technical and Clinical Performance Evaluation of Quantitative Imaging Biomarkers from 3D FDG PET Segmentations of Head and Neck Cancer Images. Tomography 2020; 6:65-76. PMID: 32548282; PMCID: PMC7289247; DOI: 10.18383/j.tom.2020.00004.
Abstract
Quantitative imaging biomarkers (QIBs) provide medical image-derived intensity, texture, shape, and size features that may help characterize cancerous tumors and predict clinical outcomes. Successful clinical translation of QIBs depends on the robustness of their measurements. Biomarkers derived from positron emission tomography images are prone to measurement errors owing to differences in image processing factors such as the tumor segmentation method used to define volumes of interest over which to calculate QIBs. We illustrate a new Bayesian statistical approach to characterize the robustness of QIBs to different processing factors. Study data consist of 22 QIBs measured on 47 head and neck tumors in 10 positron emission tomography/computed tomography scans segmented manually and with semiautomated methods used by 7 institutional members of the NCI Quantitative Imaging Network. QIB performance is estimated and compared across institutions with respect to measurement errors and power to recover statistical associations with clinical outcomes. Analysis findings summarize the performance impact of different segmentation methods used by Quantitative Imaging Network members. Robustness of some advanced biomarkers was found to be similar to conventional markers, such as maximum standardized uptake value. Such similarities support current pursuits to better characterize disease and predict outcomes by developing QIBs that use more imaging information and are robust to different processing factors. Nevertheless, to ensure reproducibility of QIB measurements and measures of association with clinical outcomes, errors owing to segmentation methods need to be reduced.
Affiliation(s)
- Ethan J Ulrich: Electrical and Computer Engineering and Biomedical Engineering, The University of Iowa, Iowa City, IA
- Payam Ahmadvand: School of Computing Science, Simon Fraser University, Burnaby, Canada
- Mikalai M Budzevich: H. Lee Moffitt Cancer Center & Research Institute, Department of Cancer Physiology, FL
- Robert J Gillies: H. Lee Moffitt Cancer Center & Research Institute, Department of Cancer Physiology, FL
- Dmitry Goldgof: Department of Computer Science and Engineering, University of South Florida, FL
- Milan Grkovski: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY
- Ghassan Hamarneh: School of Computing Science, Simon Fraser University, Burnaby, Canada
- Paul E Kinahan: Department of Radiology, The University of Washington Medical Center, Seattle, WA
- John P Muzi: Department of Radiology, The University of Washington Medical Center, Seattle, WA
- Mark Muzi: Department of Radiology, The University of Washington Medical Center, Seattle, WA
- Charles M Laymon: Department of Bioengineering and Department of Radiology, University of Pittsburgh, Pittsburgh, PA
- James M Mountz: Department of Radiology, University of Pittsburgh, Pittsburgh, PA
- Sadek Nehmeh: Department of Radiology, Weill Cornell Medical College, NY
- Matthew J Oborski: Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA
- Binsheng Zhao: Department of Radiology, Columbia University Medical Center, New York, NY
41
Xuan R, Li T, Wang Y, Xu J, Jin W. Prenatal prediction and typing of placental invasion using MRI deep and radiomic features. Biomed Eng Online 2021; 20:56. PMID: 34090428; PMCID: PMC8180077; DOI: 10.1186/s12938-021-00893-5.
Abstract
BACKGROUND To predict placental invasion (PI) and determine the subtype according to the degree of implantation, and to help physicians develop appropriate therapeutic measures, a method for prenatal prediction and typing of placental invasion using deep and radiomic MRI features was proposed. METHODS Placental tissue in abdominal magnetic resonance (MR) images was segmented to form regions of interest (ROIs) using U-net. Radiomic features were subsequently extracted from the ROIs. Simultaneously, a deep dynamic convolution neural network (DDCNN) with a codec structure was established and trained with an autoencoder model to extract deep features from the ROIs. Finally, combining the radiomic and deep features, a classifier based on a multi-layer perceptron was designed and trained to predict prenatal placental invasion as well as determine the invasion subtype. RESULTS The experimental results show that the average accuracy, sensitivity, and specificity of the proposed method are 0.877, 0.857, and 0.954, respectively, and the area under the ROC curve (AUC) is 0.904, outperforming traditional radiomics-based auxiliary diagnostic methods. CONCLUSIONS This work not only automatically labeled placental tissue in MR images of pregnant women but also realized an objective evaluation of placental invasion, providing a new approach for the prenatal diagnosis of placental invasion.
Affiliation(s)
- Rongrong Xuan: Affiliated Hospital of Medical School, Ningbo University, Ningbo, 315020, Zhejiang, China
- Tao Li: Faculty of Electrical Engineering and Computer Science, Ningbo University, Ningbo, 315211, Zhejiang, China
- Yutao Wang: Affiliated Hospital of Medical School, Ningbo University, Ningbo, 315020, Zhejiang, China
- Jian Xu: Ningbo Women's and Children's Hospital, Ningbo, 315012, Zhejiang, China
- Wei Jin: Faculty of Electrical Engineering and Computer Science, Ningbo University, Ningbo, 315211, Zhejiang, China
42
Samala RK, Chan HP, Hadjiiski L, Helvie MA. Risks of feature leakage and sample size dependencies in deep feature extraction for breast mass classification. Med Phys 2021; 48:2827-2837. PMID: 33368376; PMCID: PMC8601676; DOI: 10.1002/mp.14678.
Abstract
PURPOSE Transfer learning is commonly used in deep learning for medical imaging to alleviate the problem of limited available data. In this work, we studied the risk of feature leakage and its dependence on sample size when using a pretrained deep convolutional neural network (DCNN) as a feature extractor for classification of breast masses in mammography. METHODS Feature leakage occurs when the training set is used for feature selection and classifier modeling while the cost function is guided by the validation performance or informed by the test performance. The high-dimensional feature space extracted from a pretrained DCNN suffers from the curse of dimensionality; feature subsets that provide excessively optimistic performance can be found for the validation set or test set if the latter is allowed unlimited reuse during algorithm development. We designed a simulation study to examine feature leakage when using a DCNN as a feature extractor for mass classification in mammography. A total of 4577 unique mass lesions were partitioned by patient into three sets: 3222 for training, 508 for validation, and 847 for independent testing. Three pretrained DCNNs, AlexNet, GoogLeNet, and VGG16, were first compared using the training set in fourfold cross-validation, and one was selected as the feature extractor. To assess generalization errors, the independent test set was sequestered as truly unseen cases. Training sets ranging from 10% to 75% of the available training set were simulated by random drawing, in addition to 100% of the training set. Three commonly used feature classifiers, the linear discriminant, the support vector machine, and the random forest, were evaluated. A sequential feature selection method was used to find feature subsets that could achieve high classification performance in terms of the area under the receiver operating characteristic curve (AUC) in the validation set.
The extent of feature leakage and the impact of training set size were analyzed by comparison to the performance in the unseen test set. RESULTS All three classifiers showed large generalization error between the validation set and the independent sequestered test set at all sample sizes. The generalization error decreased as the sample size increased. At 100% of the sample size, one classifier achieved an AUC as high as 0.91 on the validation set while the corresponding performance on the unseen test set only reached an AUC of 0.72. CONCLUSIONS Our results demonstrate that large generalization errors can occur in AI tools due to feature leakage. Without evaluation on unseen test cases, optimistically biased performance may be reported inadvertently, and can lead to unrealistic expectations and reduce confidence for clinical implementation.
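The leakage mechanism described here is easy to reproduce in miniature: on pure-noise features, selecting the feature subset against the validation labels yields an optimistic validation score that evaporates on a sequestered test set. The sketch below uses a toy correlation-based selector and sign classifier, not the paper's sequential selection or classifiers:

```python
import numpy as np

rng = np.random.default_rng(2)

# Pure-noise "deep features": by construction, no feature truly predicts the label.
n, d = 120, 500
X = rng.normal(size=(n, d))
y = rng.integers(0, 2, n)
Xval, yval, Xtest, ytest = X[:60], y[:60], X[60:], y[60:]

def acc(Xs, ys, idx, w):
    """Accuracy of a fixed sign classifier on the selected feature subset."""
    return ((Xs[:, idx] @ w > 0).astype(int) == ys).mean()

# "Leaky" selection: keep the 10 features most correlated with the VALIDATION
# labels, then build the classifier weights from those same correlations.
corr = (Xval * (2 * yval[:, None] - 1)).mean(axis=0)
idx = np.argsort(-np.abs(corr))[:10]
w = np.sign(corr[idx])

val_acc = acc(Xval, yval, idx, w)
test_acc = acc(Xtest, ytest, idx, w)
print(val_acc, test_acc)  # optimistic on validation, near chance on unseen test
```

Even though every feature is random noise, the validation accuracy looks convincing because the selection was tuned to that set; only the sequestered test set reveals the true chance-level performance, which is exactly the generalization gap the study quantifies.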
Affiliation(s)
- Ravi K Samala: Department of Radiology, University of Michigan, Ann Arbor, MI, USA
- Heang-Ping Chan: Department of Radiology, University of Michigan, Ann Arbor, MI, USA
- Mark A Helvie: Department of Radiology, University of Michigan, Ann Arbor, MI, USA
43
Pattern classification for breast lesion on FFDM by integration of radiomics and deep features. Comput Med Imaging Graph 2021; 90:101922. PMID: 34049119; DOI: 10.1016/j.compmedimag.2021.101922.
Abstract
The radiomics model can be used in breast cancer detection by calculating quantitative image features. However, these features are explicitly designed, or handcrafted, in advance, which limits their ability to characterize a lesion properly. This paper aims to build an integrated-features-based classification framework in which radiomics features and deep features cooperate to classify benign and malignant breast lesions on full-field digital mammography (FFDM). We propose a classification framework consisting of three steps: (1) handcrafted feature (HCF) extraction and selection, (2) deep feature (DF) extraction, and (3) integrated-features-based classification. Specifically, HCFs comprise gray-level gap-length matrix (GLGLM) texture features and shape features, and DFs contain pooled features and high-level fully-connected features. A multi-classifier method is then applied to construct the classification framework using the integrated features for breast lesion classification. A total of 106 retrospective FFDM cases (51 malignant and 55 benign) in both craniocaudal (CC) and mediolateral oblique (MLO) views were included in this study. The area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, specificity, and Youden's index are used to examine the performance of the proposed method in differentiating benign and malignant breast lesions. The proposed framework, trained on the concatenation of fully-connected features and HCFs, significantly improved classification performance (AUC of 94.6%, accuracy of 96.4%, sensitivity of 93.6%, specificity of 98.9%, and Youden's index of 92.5%) compared with other feature sets. Experimental results demonstrate the improved performance of the proposed framework, indicating the potential of concatenating fully-connected features and HCFs in breast cancer patients.
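A practical detail when integrating handcrafted and deep features is that the two blocks live on very different numeric scales, so each block is typically standardized before concatenation; a minimal numpy sketch with made-up feature dimensions (the real HCF and DF sizes are not stated per block in the abstract):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical feature blocks for 8 lesions: handcrafted (texture/shape)
# features and deep (pooled CNN activation) features on very different scales.
handcrafted = rng.uniform(0, 1000, size=(8, 12))   # e.g. GLGLM texture + shape
deep = rng.normal(0, 0.1, size=(8, 256))           # e.g. fully-connected features

def zscore(F):
    """Standardize each feature column to zero mean and unit variance."""
    return (F - F.mean(axis=0)) / (F.std(axis=0) + 1e-8)

# Standardize each block before concatenation so neither block dominates
# the downstream classifier purely by magnitude.
integrated = np.hstack([zscore(handcrafted), zscore(deep)])
print(integrated.shape)  # (8, 268)
```

The concatenated matrix is then fed to the classifier stage; without the per-block standardization, the large-magnitude handcrafted block would swamp distance- or margin-based classifiers.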
44
Caballo M, Hernandez AM, Lyu SH, Teuwen J, Mann RM, van Ginneken B, Boone JM, Sechopoulos I. Computer-aided diagnosis of masses in breast computed tomography imaging: deep learning model with combined handcrafted and convolutional radiomic features. J Med Imaging (Bellingham) 2021; 8:024501. PMID: 33796604; DOI: 10.1117/1.jmi.8.2.024501.
Abstract
Purpose: A computer-aided diagnosis (CADx) system for breast masses is proposed, which incorporates both handcrafted and convolutional radiomic features embedded into a single deep learning model. Approach: The model combines handcrafted and convolutional radiomic signatures into a multi-view architecture, which retrieves three-dimensional (3D) image information by simultaneously processing multiple two-dimensional mass patches extracted along different planes through the 3D mass volume. Each patch is processed by a stream composed of two concatenated parallel branches: a multi-layer perceptron fed with automatically extracted handcrafted radiomic features, and a convolutional neural network, for which discriminant features are learned from the input patches. All streams are then concatenated together into a final architecture, where all network weights are shared and the learning occurs simultaneously for each stream and branch. The CADx system was developed and tested for diagnosis of breast masses (N = 284) using image datasets acquired with independent dedicated breast computed tomography systems from two different institutions. The diagnostic classification performance of the CADx system was compared against other machine and deep learning architectures adopting handcrafted and convolutional approaches, and against three board-certified breast radiologists. Results: On a test set of 82 masses (45 benign, 37 malignant), the proposed CADx system performed better than all other model architectures evaluated, with an increase in the area under the receiver operating characteristic curve (AUC) of 0.05 ± 0.02, achieving a final AUC of 0.947 and outperforming the three radiologists (AUC = 0.814–0.902). Conclusions: The system demonstrated its potential usefulness in breast cancer diagnosis by improving mass malignancy assessment.
Affiliation(s)
- Marco Caballo: Radboud University Medical Center, Department of Medical Imaging, Nijmegen, The Netherlands
- Andrew M Hernandez: University of California Davis, Department of Radiology, Sacramento, California, United States
- Su Hyun Lyu: University of California Davis, Department of Biomedical Engineering, Sacramento, California, United States
- Jonas Teuwen: Radboud University Medical Center, Department of Medical Imaging, Nijmegen, The Netherlands; The Netherlands Cancer Institute, Department of Radiation Oncology, Amsterdam, The Netherlands
- Ritse M Mann: Radboud University Medical Center, Department of Medical Imaging, Nijmegen, The Netherlands; The Netherlands Cancer Institute, Department of Radiology, Amsterdam, The Netherlands
- Bram van Ginneken: Radboud University Medical Center, Department of Medical Imaging, Nijmegen, The Netherlands
- John M Boone: University of California Davis, Department of Radiology, Sacramento, California, United States; University of California Davis, Department of Biomedical Engineering, Sacramento, California, United States
- Ioannis Sechopoulos: Radboud University Medical Center, Department of Medical Imaging, Nijmegen, The Netherlands; Dutch Expert Center for Screening, Nijmegen, The Netherlands
45
Bizzego A, Gabrieli G, Esposito G. Deep Neural Networks and Transfer Learning on a Multivariate Physiological Signal Dataset. Bioengineering (Basel) 2021; 8:35. PMID: 33800842; PMCID: PMC8058952; DOI: 10.3390/bioengineering8030035.
Abstract
While Deep Neural Networks (DNNs) and Transfer Learning (TL) have greatly contributed to several medical and clinical disciplines, their application to multivariate physiological datasets is still limited. Current examples mainly focus on one physiological signal and can only utilise applications that are customised for that specific measure, which limits the possibility of transferring the trained DNN to other domains. In this study, we composed a dataset (n = 813) of six different types of physiological signals (Electrocardiogram, Electrodermal activity, Electromyogram, Photoplethysmogram, Respiration and Acceleration). Signals were collected from 232 subjects using four different acquisition devices. We used a DNN to classify the type of physiological signal and to demonstrate how the TL approach allows the exploitation of the efficiency of DNNs in other domains. After the DNN was trained to optimally classify the type of signal, the features that were automatically extracted by the DNN were used to classify the type of device used for the acquisition using a Support Vector Machine. The dataset, the code and the trained parameters of the DNN are made publicly available to encourage the adoption of DNN and TL in applications with multivariate physiological signals.
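The transfer step described above (freeze the trained feature extractor, then fit a lightweight classifier on its activations for a new task) can be illustrated with a toy stand-in: a fixed random projection plus ReLU plays the role of the frozen DNN body, and a nearest-centroid rule stands in for the SVM. None of this is the authors' actual architecture:

```python
import numpy as np

rng = np.random.default_rng(4)

# A frozen "feature extractor" stands in for the trained DNN body: a fixed
# random projection followed by a ReLU, applied to raw 1-D signal windows.
W = rng.normal(size=(64, 32))

def extract(signals):
    return np.maximum(signals @ W, 0.0)   # frozen weights, ReLU activations

# Downstream task (device identification in the paper) reuses these features
# with a lightweight classifier -- here a nearest-centroid stand-in for the SVM.
signals_a = rng.normal(0.0, 1.0, size=(50, 64))   # "device A" recordings
signals_b = rng.normal(0.5, 1.0, size=(50, 64))   # "device B" recordings
feats = extract(np.vstack([signals_a, signals_b]))
labels = np.array([0] * 50 + [1] * 50)

c0, c1 = feats[labels == 0].mean(0), feats[labels == 1].mean(0)
pred = (np.linalg.norm(feats - c1, axis=1) < np.linalg.norm(feats - c0, axis=1)).astype(int)
accuracy = (pred == labels).mean()
print(accuracy)
```

The key point the example preserves is that the extractor's weights are never updated for the new task; only the small downstream classifier is fit, which is what makes transfer learning attractive when labelled data for the new task are scarce.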
Affiliation(s)
- Andrea Bizzego: Department of Psychology and Cognitive Science, University of Trento, 38068 Rovereto (Trento), Italy
- Giulio Gabrieli: Psychology Program, School of Social Sciences, Nanyang Technological University, Singapore 639798, Singapore
- Gianluca Esposito: Department of Psychology and Cognitive Science, University of Trento, 38068 Rovereto (Trento), Italy; Psychology Program, School of Social Sciences, Nanyang Technological University, Singapore 639798, Singapore; Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore 639798, Singapore
46
Wang J, Zhu H, Wang SH, Zhang YD. A Review of Deep Learning on Medical Image Analysis. Mobile Networks and Applications 2021; 26:351-380. DOI: 10.1007/s11036-020-01672-7.
47
Vali-Betts E, Krause KJ, Dubrovsky A, Olson K, Graff JP, Mitra A, Datta-Mitra A, Beck K, Tsirigos A, Loomis C, Neto AG, Adler E, Rashidi HH. Effects of Image Quantity and Image Source Variation on Machine Learning Histology Differential Diagnosis Models. J Pathol Inform 2021; 12:5. PMID: 34012709; PMCID: PMC8112343; DOI: 10.4103/jpi.jpi_69_20.
Abstract
Aims Histology, the microscopic study of normal tissues, is a crucial element of most medical curricula. Learning tools focused on histology are very important to learners who seek diagnostic competency within this important diagnostic arena. Recent developments in machine learning (ML) suggest that certain ML tools may be able to benefit this histology learning platform. Here, we aim to explore how one such tool based on a convolutional neural network, can be used to build a generalizable multi-classification model capable of classifying microscopic images of human tissue samples with the ultimate goal of providing a differential diagnosis (a list of look-alikes) for each entity. Methods We obtained three institutional training datasets and one generalizability test dataset, each containing images of histologic tissues in 38 categories. Models were trained on data from single institutions, low quantity combinations of multiple institutions, and high quantity combinations of multiple institutions. Models were tested against withheld validation data, external institutional data, and generalizability test images obtained from Google image search. Performance was measured with macro and micro accuracy, sensitivity, specificity, and f1-score. Results In this study, we were able to show that such a model's generalizability is dependent on both the training data source variety and the total number of training images used. Models which were trained on 760 images from only a single institution performed well on withheld internal data but poorly on external data (lower generalizability). Increasing data source diversity improved generalizability, even when decreasing data quantity: models trained on 684 images, but from three sources improved generalization accuracy between 4.05% and 18.59%. Maintaining this diversity and increasing the quantity of training images to 2280 further improved generalization accuracy between 16.51% and 32.79%. 
Conclusions This pilot study highlights the significance of data diversity within such studies. As expected, optimal models are those that incorporate both diversity and quantity into their platforms.
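The macro- and micro-averaged metrics reported in this abstract can be illustrated with a minimal sketch. The labels below are hypothetical and the function is not the authors' code; it simply shows how the two averaging schemes differ for a multi-class histology classifier:

```python
from collections import Counter

def f1_scores(y_true, y_pred):
    """Per-class, macro-, and micro-averaged F1 for a multi-class problem."""
    classes = sorted(set(y_true) | set(y_pred))
    tp, fp, fn = Counter(), Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1
            fn[t] += 1
    per_class = {}
    for c in classes:
        denom = 2 * tp[c] + fp[c] + fn[c]
        per_class[c] = 2 * tp[c] / denom if denom else 0.0
    # Macro-averaging weights every class equally, so rare tissue types
    # count as much as common ones.
    macro = sum(per_class.values()) / len(classes)
    # Micro-averaging pools the counts first; for single-label multi-class
    # classification it equals plain accuracy.
    t_tp, t_fp, t_fn = sum(tp.values()), sum(fp.values()), sum(fn.values())
    micro = 2 * t_tp / (2 * t_tp + t_fp + t_fn)
    return per_class, macro, micro

# Hypothetical predictions over three tissue classes
truth = ["liver", "liver", "lung", "skin", "skin", "skin"]
pred  = ["liver", "lung",  "lung", "skin", "skin", "liver"]
per_class, macro_f1, micro_f1 = f1_scores(truth, pred)
```

The gap between the two averages is one way a multi-institution study can reveal that a model generalizes well on frequent classes while failing on rare ones.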
Affiliation(s)
- Elham Vali-Betts
  - Department of Pathology and Laboratory Medicine, University of California Davis School of Medicine, Sacramento, CA, USA
- Kevin J Krause
  - Department of Pathology and Laboratory Medicine, University of California Davis School of Medicine, Sacramento, CA, USA
- Alanna Dubrovsky
  - Department of Psychiatry, Oregon Health and Science University, Portland, OR, USA
- Kristin Olson
  - Department of Pathology and Laboratory Medicine, University of California Davis School of Medicine, Sacramento, CA, USA
- John Paul Graff
  - Department of Pathology and Laboratory Medicine, University of California Davis School of Medicine, Sacramento, CA, USA
- Anupam Mitra
  - Department of Pathology and Laboratory Medicine, University of California Davis School of Medicine, Sacramento, CA, USA
- Ananya Datta-Mitra
  - Department of Pathology and Laboratory Medicine, University of California Davis School of Medicine, Sacramento, CA, USA
- Kenneth Beck
  - Department of Pathology and Laboratory Medicine, University of California Davis School of Medicine, Sacramento, CA, USA
- Aristotelis Tsirigos
  - Department of Psychiatry, School of Medicine, New York University, New York, NY, USA
- Cynthia Loomis
  - Department of Psychiatry, School of Medicine, New York University, New York, NY, USA
- Esther Adler
  - Department of Psychiatry, School of Medicine, New York University, New York, NY, USA
- Hooman H Rashidi
  - Department of Pathology and Laboratory Medicine, University of California Davis School of Medicine, Sacramento, CA, USA
48
Lv W, Song Y, Fu R, Lin X, Su Y, Jin X, Yang H, Shan X, Du W, Huang Q, Zhong H, Jiang K, Zhang Z, Wang L, Huang G. Deep Learning Algorithm for Automated Detection of Polycystic Ovary Syndrome Using Scleral Images. Front Endocrinol (Lausanne) 2021; 12:789878. [PMID: 35154003 PMCID: PMC8828568 DOI: 10.3389/fendo.2021.789878] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/05/2021] [Accepted: 12/15/2021] [Indexed: 11/15/2022] Open
Abstract
The high prevalence of polycystic ovary syndrome (PCOS) among reproductive-aged women has attracted increasing attention. As a common disorder likely to threaten women's health both physically and mentally, the detection of PCOS is a growing public health concern worldwide. In this paper, we propose an automated deep learning algorithm for the auxiliary detection of PCOS, which explores the potential of scleral changes in PCOS detection. The algorithm was applied to a dataset containing full-eye images of 721 Chinese women, of whom 388 are PCOS patients. Inputs to the proposed algorithm are scleral images segmented from full-eye images using an improved U-Net; a ResNet model then extracts deep features from the scleral images. Finally, a multi-instance model was developed to perform classification. Performance indices including AUC, classification accuracy, precision, recall, and F1-score were adopted to assess our algorithm. Results show that our method achieves an average AUC of 0.979 and a classification accuracy of 0.929, which indicates the great potential of deep learning in the detection of PCOS.
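The AUC figure reported above can be computed directly from ranked classifier scores. A minimal sketch with hypothetical scores (not the authors' code or data), using the Mann-Whitney interpretation of AUC:

```python
def auc(labels, scores):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case is scored above a randomly chosen
    negative case (ties count as half a win)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical classifier scores: 1 = PCOS, 0 = control
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.1]
score = auc(labels, scores)
```

Unlike classification accuracy, this measure is threshold-free, which is why studies such as this one report both.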
Affiliation(s)
- Wenqi Lv
  - Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Ying Song
  - Reproductive Medicine Centre, Peking University Third Hospital, Beijing, China
- Rongxin Fu
  - Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Xue Lin
  - Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Ya Su
  - Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Xiangyu Jin
  - Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Han Yang
  - Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Xiaohui Shan
  - Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Wenli Du
  - Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Qin Huang
  - Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Hao Zhong
  - Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Kai Jiang
  - Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Zhi Zhang
  - National Engineering Research Center for Beijing Biochip Technology, Beijing, China
  - *Correspondence: Zhi Zhang; Lina Wang; Guoliang Huang
- Lina Wang
  - Reproductive Medicine Centre, Peking University Third Hospital, Beijing, China
- Guoliang Huang
  - Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
  - National Engineering Research Center for Beijing Biochip Technology, Beijing, China
49
Abstract
Carrying out large multicenter studies is one of the key goals to be achieved towards a faster transfer of the radiomics approach into the clinical setting. This requires large-scale radiomics data analysis, and hence the integration of radiomic features extracted from images acquired in different centers. This is challenging because radiomic features exhibit variable sensitivity to differences in scanner model, acquisition protocol, and reconstruction settings, similar to the so-called 'batch effects' in genomics studies. In this review we discuss existing methods for performing data integration by reducing the unwanted variation associated with batch effects. We also discuss the potential future role of deep learning methods in providing solutions for multicenter radiomics studies.
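A simple instance of the harmonization idea discussed above is location-scale alignment: shift and scale each center's feature distribution onto the pooled distribution. The sketch below uses hypothetical values and deliberately omits the empirical-Bayes shrinkage that full methods such as ComBat apply across features:

```python
import statistics

def location_scale_harmonize(values, batches):
    """Simplified location-scale harmonization of one radiomic feature:
    each batch is shifted and scaled so its mean and standard deviation
    match the pooled ones. (ComBat additionally shrinks the per-batch
    estimates with empirical Bayes; this sketch omits that step.)"""
    grand_mean = statistics.mean(values)
    grand_sd = statistics.pstdev(values)
    out = list(values)
    for b in set(batches):
        idx = [i for i, x in enumerate(batches) if x == b]
        m = statistics.mean(values[i] for i in idx)
        sd = statistics.pstdev(values[i] for i in idx) or 1.0
        for i in idx:
            out[i] = grand_mean + grand_sd * (values[i] - m) / sd
    return out

# Hypothetical feature measured at two centers with different scanners:
# center B reads systematically higher than center A.
feature = [1.0, 2.0, 3.0, 11.0, 12.0, 13.0]
centers = ["A", "A", "A", "B", "B", "B"]
harmonized = location_scale_harmonize(feature, centers)
```

After harmonization both centers share the pooled mean and spread, so the center label no longer predicts the feature value, which is exactly the unwanted variation these methods aim to remove.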
Affiliation(s)
- R Da-Ano
  - LaTiM, INSERM, UMR 1101, Univ Brest, Brest, France
- D Visvikis
  - LaTiM, INSERM, UMR 1101, Univ Brest, Brest, France
  - equally contributed
- M Hatt
  - LaTiM, INSERM, UMR 1101, Univ Brest, Brest, France
  - equally contributed
50
Sollini M, Bartoli F, Marciano A, Zanca R, Slart RHJA, Erba PA. Artificial intelligence and hybrid imaging: the best match for personalized medicine in oncology. Eur J Hybrid Imaging 2020; 4:24. [PMID: 34191197 PMCID: PMC8218106 DOI: 10.1186/s41824-020-00094-8] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2020] [Accepted: 11/26/2020] [Indexed: 12/20/2022] Open
Abstract
Artificial intelligence (AI) refers to a field of computer science aimed at performing tasks that typically require human intelligence. Currently, AI is recognized in the broader technology radar as one of the five key technologies that stand out for their wide-ranging applications and impact on communities, companies, business, and the value chain alike. However, AI in medical imaging is at an early phase of development, and there are still hurdles to overcome related to reliability, user confidence, and adoption. The present narrative review aims to provide an overview of AI-based approaches (distributed learning, statistical learning, computer-aided diagnosis and detection systems, fully automated image analysis tools, natural language processing) in oncological hybrid medical imaging with respect to clinical tasks (detection, contouring and segmentation, prediction of histology and tumor stage, prediction of mutational status and molecular therapy targets, prediction of treatment response, and outcome). AI-based approaches are briefly described according to their purpose, and lung cancer, being one of the malignancies most extensively studied by hybrid medical imaging, is used as an illustrative scenario. Finally, we discuss clinical challenges and open issues, including ethics, validation strategies, effective data-sharing methods, regulatory hurdles, educational resources, and strategies to facilitate interaction among different stakeholders. Some of the major changes in medical imaging will come from the application of AI to workflows and protocols, eventually resulting in improved patient management and quality of life. Overall, several time-consuming tasks could be automated.
Machine learning algorithms and neural networks will permit sophisticated analyses resulting not only in major improvements in disease characterization through imaging, but also in the integration of multiple-omics data (i.e., data derived from pathology, genomics, proteomics, and demographics) for multi-dimensional disease characterization. Nevertheless, to accelerate the transition from theory to practice, a sustainable development plan is necessary, one that considers the multi-dimensional interactions between professionals, technology, industry, markets, policy, culture, and civil society, directed by a mindset that allows talent to thrive.
Affiliation(s)
- Martina Sollini
  - Department of Biomedical Sciences, Humanitas University, Pieve Emanuele (Milan), Italy
  - Humanitas Clinical and Research Center, Rozzano (Milan), Italy
- Francesco Bartoli
  - Regional Center of Nuclear Medicine, Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
- Andrea Marciano
  - Regional Center of Nuclear Medicine, Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
- Roberta Zanca
  - Regional Center of Nuclear Medicine, Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
- Riemer H J A Slart
  - University Medical Center Groningen, Medical Imaging Center, University of Groningen, Groningen, The Netherlands
  - Faculty of Science and Technology, Biomedical Photonic Imaging, University of Twente, Enschede, The Netherlands
- Paola A Erba
  - Regional Center of Nuclear Medicine, Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
  - University Medical Center Groningen, Medical Imaging Center, University of Groningen, Groningen, The Netherlands