1.
Tejaswi VSD, Rachapudi V. Liver Cancer Diagnosis: Enhanced Deep Maxout Model with Improved Feature Set. Cancer Invest 2024; 42:710-725. [PMID: 39189645] [DOI: 10.1080/07357907.2024.2391359]
Abstract
This work proposes a liver cancer classification scheme comprising preprocessing, segmentation, feature extraction, and classification stages. The source images are preprocessed with Gaussian filtering. For segmentation, the work proposes a LUV-transformation-based adaptive thresholding process. After segmentation, multi-texton features, Improved Local Ternary Pattern (LTP)-based features, and GLCM features are extracted. In the classification phase, an improved Deep Maxout model is proposed for liver cancer detection. The adopted scheme is evaluated against other schemes on various metrics. At a learning percentage of 60%, the improved Deep Maxout model achieved a higher F-measure (0.94) for classifying liver cancer than previous methods such as Support Vector Machine (SVM), Random Forest (RF), Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), K-Nearest Neighbor (KNN), Deep Maxout, Convolutional Neural Network (CNN), and a DL model. It also achieved the lowest False Positive Rate (FPR) and False Negative Rate (FNR) among the compared models for liver cancer classification.
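The preprocessing and segmentation steps named in the abstract (Gaussian filtering, then adaptive thresholding) can be sketched in plain NumPy. The kernel size, sigma, block size, and offset below are illustrative assumptions, not the paper's settings, and the local-mean rule stands in for whatever adaptive criterion the authors used:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Build a normalized 2-D Gaussian kernel for smoothing."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def adaptive_threshold(img, block=15, offset=0.0):
    """Mark a pixel as foreground when it exceeds the mean of its
    local block (a simple stand-in for adaptive thresholding)."""
    pad = block // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=bool)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            local = padded[i:i + block, j:j + block]
            out[i, j] = img[i, j] > local.mean() + offset
    return out
```

A bright square on a dark background is segmented because its pixels exceed their local neighbourhood mean, while uniform background pixels do not.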
Affiliation(s)
- Venubabu Rachapudi
- Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, India
2.
Liang Y, Wu H, Wei X. Development and validation of a CT-based nomogram for accurate hepatocellular carcinoma detection in high risk patients. Front Oncol 2024; 14:1374373. [PMID: 39165686] [PMCID: PMC11333883] [DOI: 10.3389/fonc.2024.1374373]
Abstract
Purpose: To establish and validate a CT-based nomogram for accurately detecting HCC in patients at high risk for the disease. Methods: A total of 223 patients treated between January 2017 and May 2022 were divided into training (n=161) and validation (n=62) cohorts. Logistic regression analysis was performed, and a clinical model and a radiological model were developed separately. Finally, a nomogram was established from the combined clinical and radiological features. All models were evaluated using the area under the curve (AUC), and DeLong's test was used to compare them. Results: In the multivariate analysis, gender (p = 0.014), elevated alpha-fetoprotein (AFP) (p = 0.017), non-rim arterial phase hyperenhancement (APHE) (p = 0.011), washout (p = 0.011), and enhancing capsule (p = 0.001) were independent predictors of HCC. A nomogram built on these five factors showed well-fitted calibration curves. Its AUC in the training and validation cohorts was 0.961 (95% CI: 0.935-0.986) and 0.979 (95% CI: 0.949-1), respectively, outperforming both the clinical and the radiological model. Conclusion: A nomogram incorporating clinical and CT features can be a simple and reliable tool for detecting HCC and achieving risk stratification in patients at high risk for HCC.
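A nomogram of this kind is essentially a logistic regression over the five predictors, with each positive finding contributing points to a linear score. The coefficients and intercept below are hypothetical placeholders (the abstract does not report the fitted values); only the shape of the calculation is the point:

```python
import math

# Hypothetical coefficients for the five predictors named in the
# abstract; the study's fitted values are not reported there.
COEF = {"male": 0.9, "high_afp": 1.1, "aphe": 1.4, "washout": 1.3, "capsule": 1.6}
INTERCEPT = -3.0

def hcc_probability(findings):
    """Nomogram-style risk: sum coefficient points for each positive
    finding, then map the linear predictor through the logistic link."""
    z = INTERCEPT + sum(COEF[name] * int(present) for name, present in findings.items())
    return 1.0 / (1.0 + math.exp(-z))
```

A patient with all five findings positive scores far above one with none, which is exactly the risk-stratification behaviour a nomogram encodes.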
Affiliation(s)
- Yingying Liang
- The First Affiliated Hospital of Jinan University, Guangzhou, Guangdong, China
- Department of Radiology, Guangzhou First People’s Hospital, School of Medicine, South China University of Technology, Guangzhou, Guangdong, China
- Hongzhen Wu
- Department of Radiology, Guangzhou First People’s Hospital, School of Medicine, South China University of Technology, Guangzhou, Guangdong, China
- Xinhua Wei
- The First Affiliated Hospital of Jinan University, Guangzhou, Guangdong, China
- Department of Radiology, Guangzhou First People’s Hospital, School of Medicine, South China University of Technology, Guangzhou, Guangdong, China
3.
Shah AA, Daud A, Bukhari A, Alshemaimri B, Ahsan M, Younis R. DEL-Thyroid: deep ensemble learning framework for detection of thyroid cancer progression through genomic mutation. BMC Med Inform Decis Mak 2024; 24:198. [PMID: 39039464] [DOI: 10.1186/s12911-024-02604-1]
Abstract
Genes, expressed as sequences of nucleotides, are susceptible to mutations, some of which can lead to cancer. Machine learning and deep learning methods have emerged as vital tools in identifying cancer-associated mutations. Thyroid cancer ranks as the 5th most prevalent cancer in the USA, with thousands diagnosed annually. This paper presents an ensemble learning model leveraging deep learning techniques such as Long Short-Term Memory (LSTM), Gated Recurrent Units (GRUs), and Bi-directional LSTM (Bi-LSTM) to detect thyroid cancer mutations early. The model is trained on a dataset sourced from asia.ensembl.org and IntOGen.org, consisting of 633 samples with 969 mutations across 41 genes, collected from individuals of various demographics. Feature extraction encompasses techniques including Hahn moments, central moments, raw moments, and various matrix-based methods. Evaluation employs three testing methods: the self-consistency test (SCT), the independent set test (IST), and 10-fold cross-validation (10-FCVT). The proposed ensemble learning model demonstrates promising performance, achieving 96% accuracy on the independent set test. Statistical measures including training accuracy, testing accuracy, recall, sensitivity, specificity, Matthews correlation coefficient (MCC), loss, F1 score, and Cohen's kappa are used for comprehensive evaluation.
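An ensemble over LSTM, GRU, and Bi-LSTM outputs is typically combined by averaging the class probabilities each base learner emits (soft voting). This minimal combiner is a generic sketch of that idea, not the paper's code; the probability arrays stand in for the three recurrent models' outputs:

```python
import numpy as np

def soft_vote(prob_lists):
    """Average class-probability matrices from several base learners
    (standing in for LSTM, GRU and Bi-LSTM heads) and return the
    highest-probability class per sample."""
    stacked = np.stack(prob_lists)      # (n_models, n_samples, n_classes)
    mean_probs = stacked.mean(axis=0)   # soft vote across models
    return mean_probs.argmax(axis=1)
```

Soft voting lets a confident model outvote two lukewarm ones, which hard majority voting cannot do.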
Affiliation(s)
- Asghar Ali Shah
- Center of Excellence in Artificial Intelligence (CoE-AI), Department of Computer Science, Bahria University, Islamabad, 04408, Pakistan
- Ali Daud
- Faculty of Resilience, Rabdan Academy, Abu Dhabi, United Arab Emirates
- Amal Bukhari
- Department of Information Systems and Technology, College of Computer Science and Engineering, University of Jeddah, Jeddah, Saudi Arabia
- Bader Alshemaimri
- Software Engineering Department, College of Computing and Information Sciences, King Saud University, Riyadh, Saudi Arabia
- Muhammad Ahsan
- Department of Computer Science, University of Alabama at Birmingham, 1402 10th Avenue S, Birmingham, AL, 35294, USA
- Rehmana Younis
- College of Letters and Sciences, Graduate Student of Robotics Engineering, Columbus State University, Columbus, USA
4.
Kourounis G, Elmahmudi AA, Thomson B, Hunter J, Ugail H, Wilson C. Computer image analysis with artificial intelligence: a practical introduction to convolutional neural networks for medical professionals. Postgrad Med J 2023; 99:1287-1294. [PMID: 37794609] [PMCID: PMC10658730] [DOI: 10.1093/postmj/qgad095]
Abstract
Artificial intelligence tools, particularly convolutional neural networks (CNNs), are transforming healthcare by enhancing predictive, diagnostic, and decision-making capabilities. This review provides an accessible and practical explanation of CNNs for clinicians and highlights their relevance in medical image analysis. CNNs have shown themselves to be exceptionally useful in computer vision, a field that enables machines to 'see' and interpret visual data. Understanding how these models work can help clinicians leverage their full potential, especially as artificial intelligence continues to evolve and integrate into healthcare. CNNs have already demonstrated their efficacy in diverse medical fields, including radiology, histopathology, and medical photography. In radiology, CNNs have been used to automate the assessment of conditions such as pneumonia, pulmonary embolism, and rectal cancer. In histopathology, CNNs have been used to assess and classify colorectal polyps, gastric epithelial tumours, as well as assist in the assessment of multiple malignancies. In medical photography, CNNs have been used to assess retinal diseases and skin conditions, and to detect gastric and colorectal polyps during endoscopic procedures. In surgical laparoscopy, they may provide intraoperative assistance to surgeons, helping interpret surgical anatomy and demonstrate safe dissection zones. The integration of CNNs into medical image analysis promises to enhance diagnostic accuracy, streamline workflow efficiency, and expand access to expert-level image analysis, contributing to the ultimate goal of delivering further improvements in patient and healthcare outcomes.
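The building block a CNN layer repeats is a small sliding-window filter. This toy example (technically cross-correlation, which is what deep learning frameworks implement as "convolution") shows a hand-set edge filter responding to a vertical intensity boundary; in a trained CNN the kernel values are learned rather than fixed:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D sliding-window filter: multiply the kernel
    against every position of the image and sum the products."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out
```

Applied to a dark-to-bright step, the `[-1, 1]` filter fires only where the intensity changes, which is how early CNN layers come to detect edges and textures.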
Affiliation(s)
- Georgios Kourounis
- NIHR Blood and Transplant Research Unit, Newcastle University and Cambridge University, Newcastle upon Tyne, NE1 7RU, United Kingdom
- Institute of Transplantation, The Freeman Hospital, Newcastle upon Tyne, NE7 7DN, United Kingdom
- Ali Ahmed Elmahmudi
- Faculty of Engineering and Informatics, Bradford University, Bradford, BD7 1DP, United Kingdom
- Brian Thomson
- Faculty of Engineering and Informatics, Bradford University, Bradford, BD7 1DP, United Kingdom
- James Hunter
- Nuffield Department of Surgical Sciences, University of Oxford, Oxford, OX3 9DU, United Kingdom
- Hassan Ugail
- Faculty of Engineering and Informatics, Bradford University, Bradford, BD7 1DP, United Kingdom
- Colin Wilson
- NIHR Blood and Transplant Research Unit, Newcastle University and Cambridge University, Newcastle upon Tyne, NE1 7RU, United Kingdom
- Institute of Transplantation, The Freeman Hospital, Newcastle upon Tyne, NE7 7DN, United Kingdom
5.
Jang HJ, Go JH, Kim Y, Lee SH. Deep Learning for the Pathologic Diagnosis of Hepatocellular Carcinoma, Cholangiocarcinoma, and Metastatic Colorectal Cancer. Cancers (Basel) 2023; 15:5389. [PMID: 38001649] [PMCID: PMC10670046] [DOI: 10.3390/cancers15225389]
Abstract
Diagnosing primary liver cancers, particularly hepatocellular carcinoma (HCC) and cholangiocarcinoma (CC), is a challenging and labor-intensive process, even for experts, and secondary liver cancers further complicate the diagnosis. Artificial intelligence (AI) offers promising solutions to these diagnostic challenges by facilitating the histopathological classification of tumors using digital whole slide images (WSIs). This study aimed to develop a deep learning model for distinguishing HCC, CC, and metastatic colorectal cancer (mCRC) using histopathological images and to discuss its clinical implications. The WSIs from HCC, CC, and mCRC were used to train the classifiers. For normal/tumor classification, the areas under the curve (AUCs) were 0.989, 0.988, and 0.991 for HCC, CC, and mCRC, respectively. Using proper tumor tissues, the HCC/other cancer type classifier was trained to effectively distinguish HCC from CC and mCRC, with a concatenated AUC of 0.998. Subsequently, the CC/mCRC classifier differentiated CC from mCRC with a concatenated AUC of 0.995. However, testing on an external dataset revealed that the HCC/other cancer type classifier underperformed with an AUC of 0.745. After combining the original training datasets with external datasets and retraining, the classification drastically improved, all achieving AUCs of 1.000. Although these results are promising and offer crucial insights into liver cancer, further research is required for model refinement and validation.
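The three classifiers described above form a cascade: normal-vs-tumor first, then HCC-vs-other, then CC-vs-mCRC. A sketch of that decision rule follows; the 0.5 threshold and the function shape are assumptions for illustration, and in practice each probability would come from one of the trained CNN classifiers:

```python
def classify_tile(p_tumor, p_hcc, p_cc, thresh=0.5):
    """Cascaded decision rule mirroring the abstract's pipeline.
    p_tumor: prob. the tile is tumor rather than normal tissue.
    p_hcc:   prob. a tumor tile is HCC rather than CC/mCRC.
    p_cc:    prob. a non-HCC tumor tile is CC rather than mCRC."""
    if p_tumor < thresh:
        return "normal"          # stage 1: reject normal tissue
    if p_hcc >= thresh:
        return "HCC"             # stage 2: HCC vs. other cancers
    return "CC" if p_cc >= thresh else "mCRC"  # stage 3
```

Cascading keeps each classifier's task binary and lets the later stages train only on tiles the earlier stages confirmed as tumor, as the abstract describes.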
Affiliation(s)
- Hyun-Jong Jang
- Department of Physiology, CMC Institute for Basic Medical Science, College of Medicine, The Catholic University of Korea, Seoul 06591, Republic of Korea
- Jai-Hyang Go
- Department of Pathology, Dankook University College of Medicine, Cheonan 31116, Republic of Korea
- Younghoon Kim
- Department of Hospital Pathology, Seoul St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 06591, Republic of Korea
- Sung Hak Lee
- Department of Hospital Pathology, Seoul St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 06591, Republic of Korea
6.
Zhang C, Chen C, Chen C, Lv X. SMiT: symmetric mask transformer for disease severity detection. J Cancer Res Clin Oncol 2023; 149:16075-16086. [PMID: 37698681] [DOI: 10.1007/s00432-023-05223-x]
Abstract
PURPOSE The application of deep learning to intelligent disease diagnosis has been a focus of medical AI research. In image classification tasks where the lesion area is small and unevenly distributed, background regions included in training can limit the accuracy with which lesion extent is determined. Rather than building a diagnostic assistance system around CNN models, we propose a pure transformer framework for the diagnostic grading of pathological images. METHODS We propose SMiT, a Symmetric Mask pre-trained vision Transformer, for grading pathological cancer images. SMiT applies a symmetrically identical high-probability sparsification to the input image token sequence at the first and last encoder layers to pre-train the vision transformer; the baseline model is then fine-tuned after loading the pre-trained weights. This allows the model to concentrate on extracting detailed features in the lesion region and mitigates potential feature-dependency problems. RESULTS SMiT achieved 92.8% classification accuracy on 4500 histopathological images of colorectal cancer denoised with a Gaussian filter. We validated the method's effectiveness and generalizability on the publicly available diabetic retinopathy dataset APTOS2019 from Kaggle, achieving a quadratic weighted Cohen's kappa of 91.9%, accuracy of 86.91%, and F1-score of 72.85%, 1-2% better than previous CNN-based studies. CONCLUSION SMiT uses a simpler strategy to achieve better results in assisting physicians with accurate clinical decisions, and can serve as inspiration for making good use of vision transformers in medical imaging.
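The core pre-training idea, one sparsification of the token sequence reused identically at the first and last encoder layers, can be sketched as follows. The random keep-mask and the keep ratio are illustrative stand-ins for the paper's high-probability sparsification; the point is that the mask is computed once and the *same* mask is applied at both symmetric positions:

```python
import numpy as np

def symmetric_mask(tokens, keep_ratio=0.5, seed=0):
    """Sparsify a token sequence by keeping a random subset, returning
    both the surviving tokens and the boolean mask so the identical
    sparsification can be reapplied at the last encoder layer."""
    rng = np.random.default_rng(seed)
    n = tokens.shape[0]
    n_keep = max(1, int(n * keep_ratio))
    idx = rng.choice(n, size=n_keep, replace=False)
    mask = np.zeros(n, dtype=bool)
    mask[np.sort(idx)] = True
    return tokens[mask], mask
```

Reusing the returned `mask` at the second position guarantees the two sparsifications agree token-for-token, which is the "symmetric" property the model name refers to.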
Affiliation(s)
- Chengsheng Zhang
- The College of Software, Xinjiang University, Urumqi, 830046, China
- Cheng Chen
- The College of Software, Xinjiang University, Urumqi, 830046, China.
- Chen Chen
- The College of Information Science and Engineering, Xinjiang University, Urumqi, 830046, China
- Xiaoyi Lv
- The College of Software, Xinjiang University, Urumqi, 830046, China
7.
Zhu L, Wang F, Chen X, Dong Q, Xia N, Chen J, Li Z, Zhu C. Machine learning-based radiomics analysis of preoperative functional liver reserve with MRI and CT image. BMC Med Imaging 2023; 23:94. [PMID: 37460944] [PMCID: PMC10353100] [DOI: 10.1186/s12880-023-01050-1]
Abstract
OBJECTIVE The indocyanine green retention rate at 15 min (ICG-R15) is a useful measure of functional liver reserve before hepatectomy for liver cancer. Taking ICG-R15 as the reference standard, we investigated the ability of machine learning (ML)-based radiomics models built from Gd-EOB-DTPA-enhanced hepatic magnetic resonance imaging (MRI) or contrast-enhanced computed tomography (CT) images to evaluate the functional liver reserve of hepatocellular carcinoma (HCC) patients. METHODS A total of 190 HCC patients with CT, 112 of whom also had MR, were retrospectively enrolled and randomly divided into a training dataset (CT: n = 133, MR: n = 78) and a test dataset (CT: n = 57, MR: n = 34). Radiomics features were extracted from the Gd-EOB-DTPA MRI and CT images, and the features associated with ICG-R15 classification were selected. Five ML classifiers were investigated. Accuracy (ACC) and the area under the receiver operating characteristic (ROC) curve (AUC) with 95% confidence intervals (CI) were used for performance evaluation. RESULTS A total of 107 radiomics features were extracted from MRI and from CT, respectively. Features related to ICG-R15 thresholds of 10%, 20%, and 30% were selected. In the MRI group, XGBoost performed best at the ICG-R15 = 10% threshold (AUC = 0.917, ACC = 0.882), Random Forest at 20% (AUC = 0.979, ACC = 0.882), and XGBoost at 30% (AUC = 0.961, ACC = 0.941). In the CT group, XGBoost performed best at 10% (AUC = 0.822, ACC = 0.842), SVM at 20% (AUC = 0.860, ACC = 0.842), and XGBoost at 30% (AUC = 0.938, ACC = 0.965).
CONCLUSIONS Both the MRI- and CT-based machine learning models proved to be valuable noninvasive methods for evaluating functional liver reserve.
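The evaluation setup above, binarizing ICG-R15 at a clinical cutoff and scoring classifiers by AUC, can be reproduced in a few lines. This is a generic sketch, not the study's code: the rank-based AUC below is the standard definition (probability a random positive outscores a random negative, ties counting half):

```python
import numpy as np

def binarize_icg(icg_values, threshold=10.0):
    """Label patients by whether ICG-R15 exceeds the clinical cutoff
    (10%, 20% or 30% in the study)."""
    return (np.asarray(icg_values) > threshold).astype(int)

def auc_score(labels, scores):
    """Rank-based AUC: fraction of positive/negative pairs where the
    positive receives the higher score, ties counting 0.5."""
    labels, scores = np.asarray(labels), np.asarray(scores)
    pos, neg = scores[labels == 1], scores[labels == 0]
    gt = (pos[:, None] > neg[None, :]).sum()
    eq = (pos[:, None] == neg[None, :]).sum()
    return (gt + 0.5 * eq) / (len(pos) * len(neg))
```

Sweeping `threshold` over 10, 20, and 30 reproduces the three classification tasks on which the paper compares its five classifiers.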
Affiliation(s)
- Ling Zhu
- Shandong Key Laboratory of Digital Medicine and Computer Assisted Surgery, The Affiliated Hospital of Qingdao University, Qingdao, China
- Feifei Wang
- Shandong Key Laboratory of Digital Medicine and Computer Assisted Surgery, The Affiliated Hospital of Qingdao University, Qingdao, China
- Xue Chen
- Shandong Key Laboratory of Digital Medicine and Computer Assisted Surgery, The Affiliated Hospital of Qingdao University, Qingdao, China
- Institute for Digital Medicine and Computer-assisted Surgery in Qingdao University, Qingdao University, Qingdao, China
- Qian Dong
- Shandong Key Laboratory of Digital Medicine and Computer Assisted Surgery, The Affiliated Hospital of Qingdao University, Qingdao, China
- Department of Pediatric Surgery, The Affiliated Hospital of Qingdao University, Qingdao, China
- Nan Xia
- Shandong Key Laboratory of Digital Medicine and Computer Assisted Surgery, The Affiliated Hospital of Qingdao University, Qingdao, China
- Institute for Digital Medicine and Computer-assisted Surgery in Qingdao University, Qingdao University, Qingdao, China
- Jingjing Chen
- Department of Radiology, The Affiliated Hospital of Qingdao University, Qingdao, China
- Zheng Li
- Qingdao Hisense Medical Equipment Co., Ltd, Qingdao, China
- Chengzhan Zhu
- Shandong Key Laboratory of Digital Medicine and Computer Assisted Surgery, The Affiliated Hospital of Qingdao University, Qingdao, China
- Department of Hepatobiliary and Pancreatic Surgery, The Affiliated Hospital of Qingdao University, Qingdao, China
8.
Tan J, Dong Y, Li J. Automated fundus ultrasound image classification based on siamese convolutional neural networks with multi-attention. BMC Med Imaging 2023; 23:89. [PMID: 37415102] [PMCID: PMC10324257] [DOI: 10.1186/s12880-023-01047-w]
Abstract
Fundus ultrasound image classification is a critical issue in the medical field. Vitreous opacity (VO) and posterior vitreous detachment (PVD) are two common eye diseases whose diagnosis currently relies mainly on manual identification by doctors, a time-consuming and labor-intensive process, so using computer technology to assist diagnosis is very meaningful. This paper is the first to apply a deep learning model to VO and PVD classification. Convolutional neural networks (CNNs) are widely used in image classification, but a traditional CNN requires a large amount of training data to prevent overfitting and struggles to learn the subtle differences between the two kinds of images. We propose an end-to-end siamese convolutional neural network with multi-attention (SVK_MA) for automatic classification of VO and PVD fundus ultrasound images. SVK_MA is a siamese-structure network in which each branch is mainly composed of a pretrained VGG16 embedded with multiple attention modules. Each image is first normalized and then passed to SVK_MA for feature extraction and classification. Our approach was validated on a dataset provided by the cooperating hospital. Experimental results show that it achieves an accuracy of 0.940, precision of 0.941, recall of 0.940, and F1 of 0.939, which are respectively 2.5%, 1.9%, 3.4%, and 2.5% higher than the second-best model.
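The defining property of a siamese network is that both inputs pass through the *same* weights, so the distance between the two embeddings reflects image similarity. A linear toy version under that assumption follows (SVK_MA itself uses pretrained VGG16 branches with attention modules; `W` here is just an arbitrary shared projection for illustration):

```python
import numpy as np

def embed(x, W):
    """Shared-weight branch: both inputs pass through the same
    projection, which is what makes the architecture siamese."""
    return np.tanh(x @ W)

def siamese_distance(img_a, img_b, W):
    """L2 distance between the two branch embeddings; smaller
    distances suggest the inputs belong to the same class."""
    return float(np.linalg.norm(embed(img_a, W) - embed(img_b, W)))
```

Because the weights are shared, an image compared against itself always yields distance zero, and training (e.g. with a contrastive loss) pulls same-class pairs toward that point while pushing different-class pairs apart.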
Affiliation(s)
- Jiachen Tan
- School of Computer Science and Technology, Jiangsu Normal University, Xuzhou, 221116, Jiangsu, China
- Yongquan Dong
- School of Computer Science and Technology, Jiangsu Normal University, Xuzhou, 221116, Jiangsu, China
- Xuzhou Cloud Computing Engineering Technology Research Center, Xuzhou, 221116, Jiangsu, China
- Junchi Li
- Xuzhou No.1 People's Hospital, Xuzhou, 221018, Jiangsu, China