1
Song L, Li C, Tan L, Wang M, Chen X, Ye Q, Li S, Zhang R, Zeng Q, Xie Z, Yang W, Zhao Y. A deep learning model to enhance the classification of primary bone tumors based on incomplete multimodal images in X-ray, CT, and MRI. Cancer Imaging 2024; 24:135. [PMID: 39390604] [PMCID: PMC11468403] [DOI: 10.1186/s40644-024-00784-7]
Abstract
BACKGROUND Accurately classifying primary bone tumors is crucial for guiding therapeutic decisions. The National Comprehensive Cancer Network guidelines recommend multimodal images to provide different perspectives for the comprehensive evaluation of primary bone tumors. However, in clinical practice, most patients' multimodal images are incomplete. This study aimed to build a deep learning model using patients' incomplete multimodal images from X-ray, CT, and MRI alongside clinical characteristics to classify primary bone tumors as benign, intermediate, or malignant. METHODS In this retrospective study, a total of 1305 patients with histopathologically confirmed primary bone tumors (internal dataset, n = 1043; external dataset, n = 262) were included from two centers between January 2010 and December 2022. We proposed a Primary Bone Tumor Classification Transformer Network (PBTC-TransNet) fusion model to classify primary bone tumors. Areas under the receiver operating characteristic curve (AUC), accuracy, sensitivity, and specificity were calculated to evaluate the model's classification performance. RESULTS The PBTC-TransNet fusion model achieved satisfactory micro-average AUCs of 0.847 (95% CI: 0.832, 0.862) and 0.782 (95% CI: 0.749, 0.817) on the internal and external test sets. For the classification of benign, intermediate, and malignant primary bone tumors, the model achieved AUCs of 0.827/0.727, 0.740/0.662, and 0.815/0.745 on the internal/external test sets, respectively. Furthermore, across all patient subgroups stratified by the distribution of imaging modalities, the PBTC-TransNet fusion model achieved micro-average AUCs ranging from 0.700 to 0.909 and from 0.640 to 0.847 on the internal and external test sets, respectively. The model showed the highest micro-average AUC of 0.909, accuracy of 84.3%, micro-average sensitivity of 84.3%, and micro-average specificity of 92.1% in patients with only X-rays on the internal test set. On the external test set, the model achieved the highest micro-average AUC of 0.847 for patients with X-ray + CT. CONCLUSIONS We successfully developed and externally validated the transformer-based PBTC-TransNet fusion model for the effective classification of primary bone tumors. Because the model is built on incomplete multimodal images and clinical characteristics, it mirrors real-life clinical scenarios, which strengthens its clinical practicability.
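The fusion pattern this abstract describes, a transformer over whatever subset of X-ray, CT, MRI, and clinical data a patient actually has, can be sketched as follows. This is a minimal illustration, not the published PBTC-TransNet: the layer sizes, the one-token-per-modality design, and the feature dimensions are all assumptions.

```python
# Minimal sketch: one token per available modality; absent modalities are
# masked out of attention instead of being imputed.
import torch
import torch.nn as nn

class IncompleteMultimodalClassifier(nn.Module):
    def __init__(self, feat_dims=None, d_model=128, n_classes=3):
        super().__init__()
        feat_dims = feat_dims or {"xray": 512, "ct": 512, "mri": 512, "clinical": 16}
        self.modalities = list(feat_dims)
        self.d_model = d_model
        # One linear projection per modality turns its feature vector into a token.
        self.proj = nn.ModuleDict({m: nn.Linear(d, d_model) for m, d in feat_dims.items()})
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.cls = nn.Linear(d_model, n_classes)  # benign / intermediate / malignant

    def forward(self, feats):
        # feats: dict mapping modality name -> (B, dim) tensor, or None if absent.
        B = next(v.shape[0] for v in feats.values() if v is not None)
        tokens, pad = [], []
        for m in self.modalities:
            x = feats.get(m)
            if x is None:  # placeholder token, excluded from attention by the mask
                tokens.append(torch.zeros(B, 1, self.d_model))
                pad.append(torch.ones(B, 1, dtype=torch.bool))
            else:
                tokens.append(self.proj[m](x).unsqueeze(1))
                pad.append(torch.zeros(B, 1, dtype=torch.bool))
        seq = torch.cat(tokens, dim=1)             # (B, n_modalities, d_model)
        mask = torch.cat(pad, dim=1)               # True = ignore this token
        h = self.encoder(seq, src_key_padding_mask=mask)
        valid = (~mask).unsqueeze(-1).float()      # mean-pool over present tokens only
        pooled = (h * valid).sum(1) / valid.sum(1).clamp(min=1)
        return self.cls(pooled)

model = IncompleteMultimodalClassifier()
logits = model({"xray": torch.randn(2, 512), "ct": None,
                "mri": torch.randn(2, 512), "clinical": torch.randn(2, 16)})
print(logits.shape)  # torch.Size([2, 3])
```

Masking absent tokens rather than imputing them lets one trained model serve every combination of available modalities, which is what makes the incomplete-multimodal setting tractable.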
Affiliation(s)
- Liwen Song, Lilian Tan, Menghong Wang, Xiaqing Chen, Qiang Ye, Shisi Li, Rui Zhang, Zhuoyao Xie, Yinghua Zhao: Department of Radiology, The Third Affiliated Hospital, Southern Medical University (Academy of Orthopedics, Guangdong Province), Guangzhou, Guangdong, China
- Chuanpu Li, Qinghai Zeng, Wei Yang: School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China
2
Ahmad J, Akram S, Jaffar A, Ali Z, Bhatti SM, Ahmad A, Rehman SU. Deep learning empowered breast cancer diagnosis: Advancements in detection and classification. PLoS One 2024; 19:e0304757. [PMID: 38990817] [PMCID: PMC11239011] [DOI: 10.1371/journal.pone.0304757]
Abstract
Recent advancements in AI, driven by big data technologies, have reshaped various industries, with a strong focus on data-driven approaches. This has resulted in remarkable progress in fields like computer vision, e-commerce, cybersecurity, and healthcare, primarily fueled by the integration of machine learning and deep learning models. Notably, the intersection of oncology and computer science has given rise to Computer-Aided Diagnosis (CAD) systems, offering vital tools to aid medical professionals in tumor detection, classification, recurrence tracking, and prognosis prediction. Breast cancer, a significant global health concern, is particularly prevalent in Asia due to diverse factors like lifestyle, genetics, environmental exposures, and healthcare accessibility. Early detection through mammography screening is critical, but the accuracy of mammograms can vary with factors like breast composition and tumor characteristics, leading to potential misdiagnoses. To address this, an innovative CAD system leveraging deep learning and computer vision techniques was introduced. This system enhances breast cancer diagnosis by independently identifying and categorizing breast lesions, segmenting mass lesions, and classifying them based on pathology. Thorough validation using the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM) demonstrated the CAD system's strong performance, with a 99% success rate in detecting and classifying breast masses: detection accuracy was 98.5%, segmentation of breast masses into separate groups for examination reached approximately 95.39%, and the final classification phase yielded an overall accuracy of 99.16%. The authors propose that this integrated framework can outperform current deep learning techniques, despite potential challenges related to its high number of trainable parameters. Ultimately, the recommended framework offers valuable support to researchers and physicians in breast cancer diagnosis by harnessing cutting-edge AI and image processing technologies, extending recent advances in deep learning to the medical domain.
Affiliation(s)
- Jawad Ahmad, Arfan Jaffar, Sohail Masood Bhatti: Faculty of Computer Science & Information Technology, The Superior University, Lahore, Pakistan; Intelligent Data Visual Computing Research (IDVCR), Lahore, Pakistan
- Sheeraz Akram, Awais Ahmad, Shafiq Ur Rehman: Information Systems Department, College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh, Saudi Arabia
- Zulfiqar Ali: School of Computer Science and Electronic Engineering (CSEE), University of Essex, Wivenhoe Park, Colchester, United Kingdom
3
Wang J, Shao M, Hu H, Xiao W, Cheng G, Yang G, Ji H, Yu S, Wan J, Xie Z, Xu M. Convolutional neural network applied to preoperative venous-phase CT images predicts risk category in patients with gastric gastrointestinal stromal tumors. BMC Cancer 2024; 24:280. [PMID: 38429653] [PMCID: PMC10908217] [DOI: 10.1186/s12885-024-11962-y]
Abstract
OBJECTIVE The risk category of gastric gastrointestinal stromal tumors (GISTs) is closely related to the surgical method, the scope of resection, and the need for preoperative chemotherapy. We aimed to develop and validate convolutional neural network (CNN) models based on preoperative venous-phase CT images to predict the risk category of gastric GISTs. METHOD A total of 425 patients pathologically diagnosed with gastric GISTs at the authors' medical centers between January 2012 and July 2021 were split into a training set (154, 84, and 59 with very low/low, intermediate, and high risk, respectively) and a validation set (67, 35, and 26, respectively). Three CNN models (CNN_layer3, CNN_layer9, and CNN_layer15) were constructed from the 1, 4, and 7 slices above and below the slice with the maximum tumour mask on venous-phase CT images, respectively. The area under the receiver operating characteristic curve (AUROC) and the Obuchowski index were calculated to compare the diagnostic performance of the CNN models. RESULTS In the validation set, CNN_layer3, CNN_layer9, and CNN_layer15 had AUROCs of 0.89, 0.90, and 0.90, respectively, for low-risk gastric GISTs; 0.82, 0.83, and 0.83 for intermediate-risk gastric GISTs; and 0.86, 0.86, and 0.85 for high-risk gastric GISTs. In the validation dataset, CNN_layer3 (Obuchowski index, 0.871) provided performance similar to CNN_layer9 and CNN_layer15 (Obuchowski index, 0.875 and 0.873, respectively) in predicting the gastric GIST risk category (all P > .05). CONCLUSIONS CNN models based on preoperative venous-phase CT images showed good performance for predicting the risk category of gastric GISTs.
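The CNN_layer3/9/15 naming suggests inputs stacked from 1, 4, or 7 slices above and below the maximum-tumour slice. A minimal sketch of that slice-stacking idea follows; the ResNet-18 backbone, the array names, and the preprocessing are assumptions for illustration, not the study's implementation.

```python
# Stack the slices around the maximum-tumour slice into one multi-channel input.
import numpy as np
import torch
import torch.nn as nn
from torchvision.models import resnet18

def stack_around_max_slice(volume, max_idx, half_width):
    """volume: (n_slices, H, W) array; returns (2*half_width+1, H, W)."""
    idxs = np.clip(np.arange(max_idx - half_width, max_idx + half_width + 1),
                   0, volume.shape[0] - 1)  # clamp at the volume boundaries
    return volume[idxs]

half_width = 1                              # 1 -> 3 input channels ("CNN_layer3")
net = resnet18(weights=None)
net.conv1 = nn.Conv2d(2 * half_width + 1, 64, kernel_size=7,
                      stride=2, padding=3, bias=False)  # accept N slice-channels
net.fc = nn.Linear(net.fc.in_features, 3)   # very low/low, intermediate, high risk

vol = np.random.rand(40, 224, 224).astype(np.float32)  # stand-in CT volume
x = torch.from_numpy(stack_around_max_slice(vol, max_idx=20, half_width=half_width))
print(net(x.unsqueeze(0)).shape)            # (1, 3) risk-category logits
```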
Affiliation(s)
- Jian Wang: Department of Radiology, Tongde Hospital of Zhejiang Province, Hangzhou, Zhejiang, China; Department of Radiology, The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, Zhejiang, China
- Meihua Shao, Guangzhao Yang: Department of Radiology, Tongde Hospital of Zhejiang Province, Hangzhou, Zhejiang, China
- Hongjie Hu: Department of Radiology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
- Wenbo Xiao, Susu Yu: Department of Radiology, The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
- Hongli Ji, Jie Wan: Jianpei Technology, Hangzhou, Zhejiang, China
- Zongyu Xie: Department of Radiology, The First Affiliated Hospital of Bengbu Medical University, Bengbu, Anhui, China
- Maosheng Xu: Department of Radiology, The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, Zhejiang, China
4
Harrison P, Hasan R, Park K. State-of-the-Art of Breast Cancer Diagnosis in Medical Images via Convolutional Neural Networks (CNNs). J Healthc Inform Res 2023; 7:387-432. [PMID: 37927373] [PMCID: PMC10620373] [DOI: 10.1007/s41666-023-00144-3]
Abstract
Early detection of breast cancer is crucial for a better prognosis. Various studies have been conducted where tumor lesions are detected and localized on images. This is a narrative review covering studies on five different image modalities: histopathological, mammogram, magnetic resonance imaging (MRI), ultrasound, and computed tomography (CT) images, making it different from other reviews that cover fewer modalities. The goal is to have the necessary information, such as pre-processing techniques and CNN-based diagnosis techniques for the five modalities, readily available in one place for future studies. Each modality has pros and cons: mammograms may give a high false-positive rate for radiographically dense breasts, ultrasound's low soft-tissue contrast can result in false detections at early stages, and MRI provides a three-dimensional volumetric image but is expensive and cannot be used as a routine test. Studies were manually reviewed using particular inclusion and exclusion criteria; as a result, 91 recent studies from 2017 to 2022 that classify and detect tumor lesions on breast cancer images across the five image modalities were included. For histopathological images, the maximum accuracy achieved was around 99% and the maximum sensitivity was 97.29%, using DenseNet, ResNet34, and ResNet50 architectures. For mammogram images, the maximum accuracy achieved was 96.52% using a customized CNN architecture. For MRI, the maximum accuracy achieved was 98.33% using a customized CNN architecture. For ultrasound, the maximum accuracy achieved was around 99% using DarkNet-53, ResNet-50, G-CNN, and VGG. For CT, the maximum sensitivity achieved was 96% using the Xception architecture. Histopathological and ultrasound images achieved higher accuracies, of around 99%, using ResNet34, ResNet50, DarkNet-53, G-CNN, and VGG compared with other modalities, for one or more of the following reasons: use of pre-trained architectures with pre-processing techniques, use of modified architectures with pre-processing techniques, use of two-stage CNNs, and the higher number of studies available for Artificial Intelligence (AI)/machine learning (ML) researchers to reference. One gap we found is that only a single image modality is used for CNN-based diagnosis; in the future, a multiple-image-modality approach could be used to design a CNN architecture with higher accuracy.
Affiliation(s)
- Pratibha Harrison: Department of Computer and Information Science, University of Massachusetts Dartmouth, 285 Old Westport Rd, North Dartmouth, MA 02747, USA
- Rakib Hasan: Department of Mechanical Engineering, Khulna University of Engineering & Technology, PhulBari Gate, Khulna 9203, Bangladesh
- Kihan Park: Department of Mechanical Engineering, University of Massachusetts Dartmouth, 285 Old Westport Rd, North Dartmouth, MA 02747, USA
5
Yang J, Hussein Kadir D. Data mining techniques in breast cancer diagnosis at the cellular-molecular level. J Cancer Res Clin Oncol 2023; 149:12605-12620. [PMID: 37442866] [DOI: 10.1007/s00432-023-05090-6]
Abstract
INTRODUCTION Studies on improving breast cancer diagnosis using machine learning and data mining techniques have long been promising. A new diagnostic method can detect the characteristics of breast cancer in the early stages and help in better treatment. The aim of this study is to provide a method for early detection of breast cancer that reduces human error through accurate and rapid data-mining-based screening. METHODOLOGY The proposed method includes data pre-processing and image quality improvement in the first step. The second step consists of separating cancer cells from healthy breast tissue and removing outliers using image segmentation. Finally, a classification model is configured in the third phase by combining deep neural networks. The proposed ensemble classification model uses several effective features extracted from images and is based on a majority vote. This model can be used as a screening system to diagnose the grade of invasive ductal carcinoma of the breast. RESULTS Evaluations were done using two histopathological microscopic datasets of patients with invasive ductal carcinoma of the breast. By extracting high-level features, the proposed method achieved average accuracies of 92.65% and 93.34% on these two datasets, quickly diagnosing and classifying breast cancer with high performance. CONCLUSION By combining deep neural networks and extracting features that affect breast cancer, the ability to diagnose with the highest accuracy is provided, a step toward helping specialists and increasing patients' chances of survival.
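The fusion rule named in this abstract, a majority vote over several classifiers, is simple to state concretely. The sketch below assumes three hypothetical base models whose integer class labels are already available; it is not the study's ensemble.

```python
# Majority-vote ensembling over several classifiers' predicted labels.
import numpy as np

def majority_vote(predictions):
    """predictions: (n_models, n_samples) array of integer class labels."""
    n_classes = predictions.max() + 1
    votes = np.apply_along_axis(                      # count votes per column
        lambda col: np.bincount(col, minlength=n_classes), 0, predictions)
    return votes.argmax(axis=0)                       # winning class per sample

preds = np.array([[0, 1, 2, 1],                       # labels from model A
                  [0, 1, 1, 1],                       # labels from model B
                  [1, 1, 2, 0]])                      # labels from model C
print(majority_vote(preds))                           # -> [0 1 2 1]
```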
Affiliation(s)
- Jian Yang: General Office of China Science and Technology Development Center for Chinese Medicine, Chaoyang District, Beijing 100020, China
- Dler Hussein Kadir: Department of Statistics and Informatics, College of Administration and Economics, Salahaddin University, Erbil, Iraq; Department of Business Administration, Cihan University-Erbil, Erbil, Iraq
6
Rahaman MM, Millar EKA, Meijering E. Breast cancer histopathology image-based gene expression prediction using spatial transcriptomics data and deep learning. Sci Rep 2023; 13:13604. [PMID: 37604916] [PMCID: PMC10442349] [DOI: 10.1038/s41598-023-40219-0]
Abstract
Tumour heterogeneity in breast cancer poses challenges in predicting outcome and response to therapy. Spatial transcriptomics technologies may address these challenges, as they provide a wealth of information about gene expression at the cell level, but they are expensive, hindering their use in large-scale clinical oncology studies. Predicting gene expression from hematoxylin and eosin stained histology images provides a more affordable alternative for such studies. Here we present BrST-Net, a deep learning framework for predicting gene expression from histopathology images using spatial transcriptomics data. Using this framework, we trained and evaluated four distinct state-of-the-art deep learning architectures, which include ResNet101, Inception-v3, EfficientNet (with six different variants), and vision transformer (with two different variants), all without utilizing pretrained weights for the prediction of 250 genes. To enhance the generalisation performance of the main network, we introduce an auxiliary network into the framework. Our methodology outperforms previous studies, with 237 genes identified with positive correlation, including 24 genes with a median correlation coefficient greater than 0.50. This is a notable improvement over previous studies, which could predict only 102 genes with positive correlation, with the highest correlation values ranging from 0.29 to 0.34.
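The core setup BrST-Net describes, an image backbone regressing the expression of 250 genes per spot and scored by per-gene correlation, can be sketched compactly. ResNet101 is one of the four architectures the abstract lists, but the shapes, variable names, and toy evaluation below are illustrative assumptions, not the paper's pipeline.

```python
# Image backbone with a 250-way regression head, scored by per-gene Pearson r.
import torch
import torch.nn as nn
from torchvision.models import resnet101

n_genes = 250
net = resnet101(weights=None)              # trained from scratch, as in the abstract
net.fc = nn.Linear(net.fc.in_features, n_genes)

patches = torch.randn(8, 3, 224, 224)      # histology patches at spot locations
pred = net(patches)                        # (8, 250) predicted expression values

def per_gene_correlation(pred, target):
    """Pearson r per gene across spots; pred/target: (n_spots, n_genes)."""
    p, t = pred - pred.mean(0), target - target.mean(0)
    return (p * t).sum(0) / (p.norm(dim=0) * t.norm(dim=0) + 1e-8)

r = per_gene_correlation(pred.detach(), torch.randn(8, n_genes))
print((r > 0).sum().item(), "genes with positive correlation in this toy batch")
```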
Affiliation(s)
- Md Mamunur Rahaman, Erik Meijering: School of Computer Science and Engineering, University of New South Wales, Kensington, Sydney, NSW 2052, Australia
- Ewan K A Millar: Department of Anatomical Pathology, NSW Health Pathology, St. George Hospital, Kogarah, Sydney, NSW 2217, Australia; St. George and Sutherland Clinical School, University of New South Wales, Kensington, Sydney, NSW 2052, Australia; Faculty of Medicine & Health Sciences, Western Sydney University, Campbelltown, Sydney, NSW 2560, Australia
7
Kaba Ş, Haci H, Isin A, Ilhan A, Conkbayir C. The Application of Deep Learning for the Segmentation and Classification of Coronary Arteries. Diagnostics (Basel) 2023; 13:2274. [PMID: 37443668] [DOI: 10.3390/diagnostics13132274]
Abstract
In recent years, coronary artery disease (CAD) has become one of the leading causes of death around the world. Accurate stenosis detection in coronary arteries is crucial for timely treatment. Cardiologists use visual estimation when reading coronary angiography images to diagnose stenosis. As a result, they face various challenges, including high workloads, long processing times, and human error. Computer-aided segmentation of coronary arteries and classification of whether stenosis is present significantly reduce the workload of cardiologists and the human error caused by manual processes. Moreover, deep learning techniques have been shown to aid medical experts in diagnosing diseases using biomedical imaging. Thus, this study proposes automatic segmentation of coronary arteries using U-Net, ResUNet-a, and UNet++ models, and classification using DenseNet201, EfficientNet-B0, MobileNet-v2, ResNet101, and Xception models. For segmentation, comparative analysis of the three models showed that U-Net achieved the highest score, with a 0.8467 Dice score and a 0.7454 Jaccard index, compared with UNet++ and ResUNet-a. Evaluation of the classification models' performance showed that DenseNet201 performed better than the other pretrained models, with 0.9000 accuracy, 0.9833 specificity, 0.9556 PPV, 0.7746 Cohen's kappa, and 0.9694 area under the curve (AUC).
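The two segmentation metrics reported here, the Dice score and the Jaccard index, can be computed as follows on binary masks; the toy masks and shapes are purely illustrative.

```python
# Dice score and Jaccard index on binary segmentation masks.
import numpy as np

def dice_and_jaccard(pred, target, eps=1e-8):
    """pred, target: boolean arrays of the same shape."""
    inter = np.logical_and(pred, target).sum()
    dice = 2.0 * inter / (pred.sum() + target.sum() + eps)
    jaccard = inter / (np.logical_or(pred, target).sum() + eps)
    return dice, jaccard

pred = np.zeros((64, 64), dtype=bool); pred[10:40, 10:40] = True
gt = np.zeros((64, 64), dtype=bool);   gt[15:45, 15:45] = True
print(dice_and_jaccard(pred, gt))
```

The two metrics are monotonically related (Jaccard = Dice / (2 - Dice)), which is why Dice is always at least as large as Jaccard on the same prediction.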
Affiliation(s)
- Şerife Kaba: Department of Biomedical Engineering, Near East University, TRNC Mersin 10, Nicosia 99138, Turkey
- Huseyin Haci: Department of Electrical-Electronic Engineering, Near East University, TRNC Mersin 10, Nicosia 99138, Turkey
- Ali Isin: Department of Biomedical Engineering, Cyprus International University, TRNC Mersin 10, Nicosia 99138, Turkey
- Ahmet Ilhan: Department of Computer Engineering, Near East University, TRNC Mersin 10, Nicosia 99138, Turkey
- Cenk Conkbayir: Department of Cardiology, Near East University, TRNC Mersin 10, Nicosia 99138, Turkey
8
Lee HW, Kim E, Na I, Kim CK, Seo SI, Park H. Novel Multiparametric Magnetic Resonance Imaging-Based Deep Learning and Clinical Parameter Integration for the Prediction of Long-Term Biochemical Recurrence-Free Survival in Prostate Cancer after Radical Prostatectomy. Cancers (Basel) 2023; 15:3416. [PMID: 37444526] [DOI: 10.3390/cancers15133416]
Abstract
Radical prostatectomy (RP) is the main treatment for prostate cancer (PCa). Biochemical recurrence (BCR) following RP remains the first sign of aggressive disease; hence, better assessment of potential long-term post-RP BCR-free survival is crucial. Our study aimed to evaluate a combined clinical-deep learning (DL) model using multiparametric magnetic resonance imaging (mpMRI) for predicting long-term post-RP BCR-free survival in PCa. A total of 437 patients with PCa who underwent mpMRI followed by RP between 2008 and 2009 were enrolled; radiomics features were extracted from T2-weighted imaging, apparent diffusion coefficient maps, and contrast-enhanced sequences by manually delineating the index tumors. Deep features from the same imaging were extracted using a deep neural network based on a pretrained EfficientNet-B0. Here, we present a clinical model (six clinical variables), a radiomics model (RM-Multi), a DL model (DLM-Deep feature), a combined clinical-radiomics model (CRM-Multi), and a combined clinical-DL model (CDLM-Deep feature), all built using Cox models regularized with the least absolute shrinkage and selection operator. We compared their prognostic performances using stratified fivefold cross-validation. Over a median follow-up of 61 months, 110/437 patients experienced BCR. CDLM-Deep feature achieved the best performance (hazard ratio [HR] = 7.72), followed by DLM-Deep feature (HR = 4.37) and RM-Multi (HR = 2.67). CRM-Multi performed moderately. Our results confirm the superior performance of our mpMRI-derived DL algorithm over conventional radiomics.
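The survival models in this abstract are Cox models regularized with the LASSO, fit on clinical variables plus image-derived features. A minimal sketch using the lifelines library follows; the column names, synthetic data, and penalty strength are assumptions for illustration, not the study's feature set.

```python
# LASSO-regularized Cox proportional hazards model on mixed features.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(200, 5)),
                  columns=["psa", "gleason", "deep_feat_1", "deep_feat_2", "deep_feat_3"])
df["months_to_bcr"] = rng.exponential(60.0, size=200)  # follow-up time
df["bcr_event"] = rng.integers(0, 2, size=200)         # 1 = biochemical recurrence

# l1_ratio=1.0 makes the elastic-net penalty pure L1, i.e. LASSO-like selection.
cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)
cph.fit(df, duration_col="months_to_bcr", event_col="bcr_event")
print(cph.summary[["coef", "exp(coef)"]])              # exp(coef) is the hazard ratio
```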
Affiliation(s)
- Hye Won Lee, Seong Il Seo: Samsung Medical Center, Department of Urology, Sungkyunkwan University School of Medicine, Seoul 06351, Republic of Korea
- Eunjin Kim, Inye Na: Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon 16419, Republic of Korea
- Chan Kyo Kim: Department of Radiology and Center for Imaging Science, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul 06351, Republic of Korea
- Hyunjin Park: Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon 16419, Republic of Korea; Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon 16419, Republic of Korea
9
Al-Jabbar M, Alshahrani M, Senan EM, Ahmed IA. Analyzing Histological Images Using Hybrid Techniques for Early Detection of Multi-Class Breast Cancer Based on Fusion Features of CNN and Handcrafted. Diagnostics (Basel) 2023; 13:1753. [PMID: 37238243] [DOI: 10.3390/diagnostics13101753]
Abstract
Breast cancer is the second most common type of cancer among women, and it can threaten women's lives if it is not diagnosed early. There are many methods for detecting breast cancer, but they cannot distinguish between benign and malignant tumors. A biopsy taken from the patient's abnormal tissue is therefore an effective way to distinguish between malignant and benign breast cancer tumors. Pathologists and experts face many challenges in diagnosing breast cancer, including medical fluids of various colors added to samples, the orientation of the sample, the small number of specialists, and their differing opinions. Artificial intelligence techniques can address these challenges and help clinicians resolve their diagnostic differences. In this study, three techniques, each with three systems, were developed to diagnose multi-class and binary-class breast cancer datasets and to distinguish between benign and malignant types at 40× and 400× magnification factors. The first technique uses an artificial neural network (ANN) with features selected from VGG-19 and ResNet-18. The second technique uses an ANN with combined features from VGG-19 and ResNet-18, before and after principal component analysis (PCA). The third technique uses an ANN with hybrid features: a hybrid of VGG-19 and handcrafted features, and a hybrid of ResNet-18 and handcrafted features. The handcrafted features are mixed features extracted using fuzzy color histogram (FCH), local binary pattern (LBP), discrete wavelet transform (DWT), and gray-level co-occurrence matrix (GLCM) methods. On the multi-class dataset, the ANN with hybrid VGG-19 and handcrafted features reached a precision of 95.86%, an accuracy of 97.3%, a sensitivity of 96.75%, an AUC of 99.37%, and a specificity of 99.81% with images at a magnification factor of 400×. On the binary-class dataset, the same configuration reached a precision of 99.74%, an accuracy of 99.7%, a sensitivity of 100%, an AUC of 99.85%, and a specificity of 100% at a magnification factor of 400×.
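The "hybrid features" strategy, concatenating CNN features with handcrafted texture descriptors before a shallow classifier, can be sketched as follows. The VGG-19 feature tap and the LBP/GLCM settings are illustrative assumptions, and the study's FCH and DWT features are omitted for brevity.

```python
# CNN features concatenated with handcrafted LBP and GLCM texture features.
import numpy as np
import torch
from torchvision.models import vgg19
from skimage.feature import local_binary_pattern, graycomatrix, graycoprops

def handcrafted_features(gray):
    """gray: (H, W) uint8 image -> LBP histogram + GLCM contrast/homogeneity."""
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    glcm = graycomatrix(gray, distances=[1], angles=[0], levels=256, normed=True)
    return np.concatenate([hist,
                           graycoprops(glcm, "contrast").ravel(),
                           graycoprops(glcm, "homogeneity").ravel()])

cnn = vgg19(weights=None)
cnn.classifier = torch.nn.Identity()       # tap the flattened convolutional features

img = (np.random.rand(224, 224) * 255).astype(np.uint8)   # stand-in biopsy image
x = torch.from_numpy(np.stack([img] * 3)).float().unsqueeze(0) / 255.0
deep = cnn(x).detach().numpy().ravel()
hybrid = np.concatenate([deep, handcrafted_features(img)])
print(hybrid.shape)                        # this vector feeds the ANN classifier
```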
Affiliation(s)
- Mohammed Al-Jabbar, Mohammed Alshahrani: Computer Department, Applied College, Najran University, Najran 66462, Saudi Arabia
- Ebrahim Mohammed Senan: Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, Alrazi University, Sana'a, Yemen
10
Chen X, Pu X, Chen Z, Li L, Zhao KN, Liu H, Zhu H. Application of EfficientNet-B0 and GRU-based deep learning on classifying the colposcopy diagnosis of precancerous cervical lesions. Cancer Med 2023; 12:8690-8699. [PMID: 36629131] [PMCID: PMC10134359] [DOI: 10.1002/cam4.5581]
Abstract
BACKGROUND Colposcopy is indispensable for the diagnosis of cervical lesions. However, its diagnostic accuracy for high-grade squamous intraepithelial lesion (HSIL) is about 50%, and the accuracy depends largely on the skill and experience of colposcopists. Advances in computational power have made it possible to apply artificial intelligence (AI) to clinical problems. Here, we explored the feasibility and accuracy of applying AI to the recognition and classification of precancerous and cancerous cervical colposcopic images. METHODS The images were collected from 6002 colposcopy examinations of normal controls, low-grade squamous intraepithelial lesion (LSIL), and HSIL. For each patient, the original, Schiller test, and acetic-acid images were all collected. We built a new neural network classification model based on a hybrid algorithm: EfficientNet-B0 was used as the backbone network for image feature extraction, and a GRU (gated recurrent unit) was applied to fuse features across the three examination modes (original, acetic acid, and Schiller test). RESULTS The connected network classifier achieved an accuracy of 90.61% in distinguishing HSIL from normal and LSIL. Furthermore, the model was applied to a three-way classification ("trichotomy"), reaching an accuracy of 91.18% in distinguishing HSIL, LSIL, and normal controls simultaneously. CONCLUSION The high accuracy of AI in classifying colposcopic images shows its great potential as an effective tool for the accurate diagnosis of cervical disease and for early therapeutic intervention in cervical precancer.
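The described hybrid, one EfficientNet-B0 encoder applied to each of the three colposcopy views with a GRU fusing the resulting features, can be sketched in PyTorch as follows. The shared encoder, hidden size, and head are illustrative assumptions rather than the published configuration.

```python
# EfficientNet-B0 per view + GRU fusion over the three examination modes.
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0

class ColposcopyFusionNet(nn.Module):
    def __init__(self, n_classes=3, hidden=256):
        super().__init__()
        self.backbone = efficientnet_b0(weights=None)
        self.backbone.classifier = nn.Identity()    # 1280-d feature per image
        self.gru = nn.GRU(input_size=1280, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)    # normal / LSIL / HSIL

    def forward(self, views):
        # views: (B, 3, C, H, W), the three examination modes per patient.
        B, T = views.shape[:2]
        feats = self.backbone(views.flatten(0, 1))  # (B*T, 1280)
        _, h = self.gru(feats.view(B, T, -1))       # h: (1, B, hidden), last step
        return self.head(h.squeeze(0))

net = ColposcopyFusionNet()
print(net(torch.randn(2, 3, 3, 224, 224)).shape)    # (2, 3) class logits
```

Treating the three views as a short sequence lets the GRU weigh the acetic-acid and Schiller responses against the original appearance rather than averaging them blindly.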
Affiliation(s)
- Xiaoyue Chen, Xiaowen Pu, Zhirou Chen, Haiyan Zhu: Department of Gynecology, Shanghai First Maternity and Infant Hospital, Tongji University School of Medicine, Shanghai, China
- Lanzhen Li, Haichun Liu: Department of Automation, Shanghai Jiao Tong University, Shanghai, China; Ningbo Artificial Intelligent Institute, Shanghai Jiao Tong University, Ningbo, China
- Kong-Nan Zhao: School of Basic Medical Science, Wenzhou Medical University, Wenzhou, China; Australian Institute for Bioengineering and Nanotechnology, The University of Queensland, St Lucia, Queensland, Australia
11
Deep learning for preoperative prediction of the EGFR mutation and subtypes based on the MRI image of spinal metastasis from primary NSCLC. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104084]
12
Liu Y, Tong Y, Wan Y, Xia Z, Yao G, Shang X, Huang Y, Chen L, Chen DQ, Liu B. Identification and diagnosis of mammographic malignant architectural distortion using a deep learning based mask regional convolutional neural network. Front Oncol 2023; 13:1119743. [PMID: 37035200] [PMCID: PMC10075355] [DOI: 10.3389/fonc.2023.1119743]
Abstract
Background Architectural distortion (AD) is a common imaging manifestation of breast cancer but is also seen in benign lesions. This study aimed to construct deep learning models using a mask regional convolutional neural network (Mask-RCNN) for AD identification in full-field digital mammography (FFDM) and to evaluate the models' performance for malignant AD diagnosis. Methods This retrospective diagnostic study was conducted at the Second Affiliated Hospital of Guangzhou University of Chinese Medicine between January 2011 and December 2020. Patients with AD of the breast on FFDM were included. Machine learning models for AD identification were developed using the Mask-RCNN method. Receiver operating characteristic (ROC) curves, their areas under the curve (AUCs), and recall/sensitivity were used to evaluate the models. Models with the highest AUCs were selected for malignant AD diagnosis. Results A total of 349 AD patients (190 with malignant AD) were enrolled. EfficientNetV2, EfficientNetV1, ResNext, and ResNet models were developed for AD identification, with AUCs of 0.89, 0.87, 0.81, and 0.79, respectively. For malignant AD diagnosis, the AUC of EfficientNetV2 was significantly higher than that of EfficientNetV1 (0.89 vs. 0.78, P=0.001), and the recall/sensitivity of the EfficientNetV2 model was 0.93. Conclusion The Mask-RCNN-based EfficientNetV2 model has good diagnostic value for malignant AD.
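A Mask-RCNN produces boxes, masks, and scores for candidate regions, which is the identification step this abstract describes. The sketch below uses torchvision's stock Mask R-CNN as a stand-in; the two-class setup and score threshold are illustrative assumptions, and the study's EfficientNet-based variants are not reproduced here.

```python
# Candidate-region identification with a stock torchvision Mask R-CNN.
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

model = maskrcnn_resnet50_fpn(weights=None, num_classes=2)  # background + AD
model.eval()

image = torch.rand(3, 512, 512)            # stand-in for a normalized FFDM crop
with torch.no_grad():
    out = model([image])[0]                # dict with boxes, labels, scores, masks

keep = out["scores"] > 0.5                 # keep confident detections only
print(out["boxes"][keep].shape, out["masks"][keep].shape)
```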
Affiliation(s)
- Yuanyuan Liu, Yun Wan, Ziqiang Xia, Guoyan Yao, Xiaojing Shang, Yan Huang, Lijun Chen, Bo Liu: Department of Radiology, The Second Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, China
- Yunfei Tong: Department of Engineering, Shanghai Yanghe Huajian Artificial Intelligence Technology Co., Ltd, Shanghai, China
- Daniel Q. Chen: Artificial Intelligence (AI) Research Lab, Boston Meditech Group, Burlington, MA, United States
- Correspondence: Bo Liu; Daniel Q. Chen
13
Das D, Biswas SK, Bandyopadhyay S. Detection of Diabetic Retinopathy using Convolutional Neural Networks for Feature Extraction and Classification (DRFEC). Multimed Tools Appl 2022; 82:1-59. [PMID: 36467440] [PMCID: PMC9708148] [DOI: 10.1007/s11042-022-14165-4]
Abstract
Diabetic Retinopathy (DR) is a consequence of Diabetes Mellitus that causes various lesions to develop in the human retina. These lesions hinder vision and, in severe cases, DR can lead to blindness. DR is observed in 80% of patients who have had diabetes for a period of 10-15 years. The manual process of periodic DR diagnosis and detection for necessary treatment is time-consuming and unreliable due to the unavailability of resources and expert opinion. Therefore, computerized diagnostic systems that use Deep Learning (DL) Convolutional Neural Network (CNN) architectures are proposed to learn DR patterns from fundus images and identify the severity of the disease. This paper presents a comprehensive evaluation of 26 state-of-the-art DL networks for deep feature extraction and image classification of DR fundus images. Among the networks, ResNet50 showed the highest overfitting, while Inception V3 showed the lowest overfitting when trained on Kaggle's EyePACS fundus image dataset. EfficientNetB4 was the most optimal, efficient, and reliable DL algorithm for DR detection, followed by InceptionResNetV2, NasNetLarge, and DenseNet169. EfficientNetB4 achieved a training accuracy of 99.37% and the highest validation accuracy of 79.11%. DenseNet201 achieved the highest training accuracy of 99.58% and a validation accuracy of 76.80%, which is lower than the top four best-performing models.
Affiliation(s)
- Dolly Das, Saroj Kumar Biswas, Sivaji Bandyopadhyay: Department of Computer Science and Engineering, National Institute of Technology Silchar, Cachar, Silchar, Assam 788010, India
14
Chi Z, Xu Q, Ai N, Ge W. Design and Implementation of an Automatic Batch Microinjection System for Zebrafish Larvae. IEEE Robot Autom Lett 2022. [DOI: 10.1109/lra.2022.3143286]
15
Shinohara I, Inui A, Mifune Y, Nishimoto H, Yamaura K, Mukohara S, Yoshikawa T, Kato T, Furukawa T, Hoshino Y, Matsushita T, Kuroda R. Diagnosis of Cubital Tunnel Syndrome Using Deep Learning on Ultrasonographic Images. Diagnostics (Basel) 2022; 12:632. [PMID: 35328185] [PMCID: PMC8947597] [DOI: 10.3390/diagnostics12030632]
Abstract
Although electromyography is the routine diagnostic method for cubital tunnel syndrome (CuTS), imaging diagnosis by measuring the cross-sectional area (CSA) with ultrasonography (US) has also been attempted in recent years. In this study, deep learning (DL), an artificial intelligence (AI) method, was applied to US images, and its diagnostic performance for detecting CuTS was investigated. Elbow images of 30 healthy volunteers and 30 patients diagnosed with CuTS were used. Three thousand US images were prepared per group to visualize the short axis of the ulnar nerve. Transfer learning was performed on 5000 randomly selected training images using three pre-trained models, and the remaining images were used for testing. The models were evaluated by analyzing a confusion matrix and the area under the receiver operating characteristic curve. Occlusion sensitivity and locally interpretable model-agnostic explanations were used to visualize the features the AI deemed important. The best model achieved an accuracy of 0.90, a precision of 0.86, a recall of 1.00, and an F-measure of 0.92. Visualization results show that the DL models focused on the epineurium of the ulnar nerve and the surrounding soft tissue. The proposed technique enables the accurate prediction of CuTS without the need to measure the CSA.
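Occlusion sensitivity, one of the two visualization methods used here, slides an occluding patch over the image and records how much the target-class probability drops. A minimal sketch follows; the patch size, stride, gray fill value, and the toy model are illustrative assumptions.

```python
# Occlusion sensitivity: probability drop when each region is hidden.
import torch
import torch.nn.functional as F

def occlusion_map(model, image, target_class, patch=32, stride=16):
    """image: (C, H, W) tensor; returns a small 2-D map of probability drops."""
    model.eval()
    C, H, W = image.shape
    with torch.no_grad():
        base = F.softmax(model(image.unsqueeze(0)), dim=1)[0, target_class]
        heat = []
        for top in range(0, H - patch + 1, stride):
            row = []
            for left in range(0, W - patch + 1, stride):
                occluded = image.clone()
                occluded[:, top:top + patch, left:left + patch] = 0.5  # gray fill
                p = F.softmax(model(occluded.unsqueeze(0)), dim=1)[0, target_class]
                row.append((base - p).item())  # large drop = important region
            heat.append(row)
    return torch.tensor(heat)

toy = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 2))
print(occlusion_map(toy, torch.rand(3, 64, 64), target_class=1).shape)  # (3, 3)
```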
Affiliation(s)
- Atsuyuki Inui (corresponding author): Tel.: +81-78-382-5111; Fax: +81-78-351-6944
16
Popescu D, El-Khatib M, El-Khatib H, Ichim L. New Trends in Melanoma Detection Using Neural Networks: A Systematic Review. Sensors (Basel) 2022; 22:496. [PMID: 35062458] [PMCID: PMC8778535] [DOI: 10.3390/s22020496]
Abstract
Due to its increasing incidence, skin cancer, and especially melanoma, is a serious health concern today. The high mortality rate associated with melanoma makes early detection necessary so that it can be treated urgently and properly. This is why many researchers in the field have sought accurate computer-aided diagnosis systems to assist in the early detection and diagnosis of such diseases. This paper presents a systematic review of recent advances in an area of increasing interest for cancer prediction, with a focus on a comparative perspective of melanoma detection using artificial intelligence, especially neural-network-based systems. Such structures can be considered intelligent support systems for dermatologists. Theoretical and applied contributions were investigated in the new development trends of multiple-neural-network architectures based on decision fusion. The most representative articles covering melanoma detection based on neural networks, published in journals and high-impact conferences between 2015 and 2021, were investigated, focusing on the 2018-2021 interval for new trends. Also presented are the main databases and trends in their use for training neural networks to detect melanoma. Finally, a research agenda is highlighted to advance the field toward the new trends.
Affiliation(s)
- Dan Popescu (with co-authors M.E.-K., H.E.-K., and L.I.): Faculty of Automatic Control and Computers, University Politehnica of Bucharest, 060042 Bucharest, Romania
17
Ragab M, Albukhari A. Automated Artificial Intelligence Empowered Colorectal Cancer Detection and Classification Model. Comput Mater Contin 2022; 72:5577-5591. [DOI: 10.32604/cmc.2022.026715]
18
Melanoma Recognition by Fusing Convolutional Blocks and Dynamic Routing between Capsules. Cancers (Basel) 2021; 13:4974. [PMID: 34638456] [PMCID: PMC8508435] [DOI: 10.3390/cancers13194974]
Abstract
Simple Summary The early treatment of skin cancer can effectively reduce mortality rates. Recently, automatic melanoma diagnosis from skin images has gained attention, encouraged mainly by the well-known challenge developed by the International Skin Imaging Collaboration project. The majority of contestants submitted Convolutional Neural Network-based solutions. However, this type of model presents disadvantages. As a consequence, Dynamic Routing between Capsules has been proposed to overcome such limitations. The aim of our proposal was to assess the advantages of combining both architectures. An extensive experimental study showed that the proposal significantly outperformed state-of-the-art models, achieving 166% higher predictive performance compared with ResNet on non-dermoscopic images. In addition, the pixels activated during prediction were shown, which allows the rationale behind each conclusion to be assessed. Finally, more research should be conducted to demonstrate the potential of this neural network architecture in other areas. Abstract Skin cancer is one of the most common types of cancer in the world, with melanoma being the most lethal form. Automatic melanoma diagnosis from skin images has recently gained attention within the machine learning community, due to the complexity involved. In the past few years, convolutional neural network models have commonly been used to approach this issue. This type of model, however, presents disadvantages that sometimes hamper its application in real-world situations, e.g., the difficulty of constructing transformation-invariant models and their inability to consider spatial hierarchies between entities within an image. Recently, the Dynamic Routing between Capsules architecture (CapsNet) has been proposed to overcome such limitations. This work proposes a new architecture that combines convolutional blocks with a customized CapsNet architecture, allowing for the extraction of richer abstract features. The architecture uses high-quality 299×299×3 skin lesion images, and hyperparameter tuning of the main parameters is performed to ensure effective learning under limited training data. An extensive experimental study on eleven image datasets was conducted in which the proposal significantly outperformed several state-of-the-art models. Finally, predictions made by the model were validated through the application of two modern model-agnostic interpretation tools.
19
Kriegsmann M, Kriegsmann K, Steinbuss G, Zgorzelski C, Kraft A, Gaida MM. Deep Learning in Pancreatic Tissue: Identification of Anatomical Structures, Pancreatic Intraepithelial Neoplasia, and Ductal Adenocarcinoma. Int J Mol Sci 2021; 22:5385. [PMID: 34065423] [PMCID: PMC8160892] [DOI: 10.3390/ijms22105385]
Abstract
Identification of pancreatic ductal adenocarcinoma (PDAC) and precursor lesions in histological tissue slides can be challenging and elaborate, especially due to tumor heterogeneity. Thus, supportive tools for the identification of anatomical and pathological tissue structures are desired. Deep learning methods have recently emerged that classify histological structures into image categories with high accuracy. However, to date, only a limited number of classes and patients have been included in histopathological studies. In this study, scanned histopathological tissue slides from tissue microarrays of PDAC patients (n = 201; 81,165 image patches) were extracted and assigned to a training, validation, and test set. With these patches, we implemented a convolutional neural network, established quality control measures and a method to interpret the model, and implemented a workflow for whole tissue slides. An optimized EfficientNet algorithm achieved high accuracies that allowed automatic localization and quantification of tissue categories, including pancreatic intraepithelial neoplasia and PDAC, in whole tissue slides. SmoothGrad heatmaps allowed the image classification results to be explained. This is the first study that utilizes deep learning for automatic identification of different anatomical tissue structures and diseases on histopathological images of pancreatic tissue specimens. The proposed approach is a valuable tool to support routine diagnostic review and pancreatic cancer research.
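SmoothGrad, the explanation method behind the heatmaps mentioned above, averages input gradients over several noisy copies of an image. A minimal sketch follows; the noise level, sample count, and toy model are illustrative assumptions.

```python
# SmoothGrad: average the class-score gradient over noisy copies of the input.
import torch

def smoothgrad(model, image, target_class, n_samples=25, noise_std=0.1):
    """image: (C, H, W); returns a (C, H, W) averaged-gradient saliency map."""
    model.eval()
    grads = torch.zeros_like(image)
    for _ in range(n_samples):
        noisy = (image + noise_std * torch.randn_like(image)).unsqueeze(0)
        noisy.requires_grad_(True)
        score = model(noisy)[0, target_class]
        score.backward()                 # gradient of the class score w.r.t. pixels
        grads += noisy.grad[0]
    return (grads / n_samples).abs()

toy = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 3))
saliency = smoothgrad(toy, torch.rand(3, 32, 32), target_class=0)
print(saliency.shape)  # torch.Size([3, 32, 32])
```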
Affiliation(s)
- Mark Kriegsmann: Institute of Pathology, University of Heidelberg, 69120 Heidelberg, Germany
- Katharina Kriegsmann, Georg Steinbuss: Department of Hematology, Oncology and Rheumatology, University of Heidelberg, 69120 Heidelberg, Germany
- Anne Kraft: Institute of Pathology, University Medical Center Mainz, JGU-Mainz, 55131 Mainz, Germany
- Matthias M. Gaida: Institute of Pathology, University Medical Center Mainz, JGU-Mainz, 55131 Mainz, Germany; Research Center for Immunotherapy, University Medical Center Mainz, JGU-Mainz, 55131 Mainz, Germany; Joint Unit Immunopathology, Institute of Pathology, University Medical Center, JGU-Mainz and TRON, Translational Oncology at the University Medical Center, JGU-Mainz, 55131 Mainz, Germany
20
Deep Learning for the Classification of Non-Hodgkin Lymphoma on Histopathological Images. Cancers (Basel) 2021; 13:2419. [PMID: 34067726] [PMCID: PMC8156071] [DOI: 10.3390/cancers13102419]
Abstract
Simple Summary Histopathological examination of lymph node (LN) specimens allows the detection of hematological diseases. The identification and classification of lymphoma, a blood cancer with a manifestation in LNs, are difficult and require many years of training, as well as additional expensive investigations. Today, artificial intelligence (AI) can be used to support the pathologist in identifying abnormalities in LN specimens. In this article, we trained and optimized an AI algorithm to automatically detect two common lymphoma subtypes that require different therapies, using normal LN parenchyma as a control. The balanced accuracy in an independent test cohort was above 95%, which means that the vast majority of cases were classified correctly and only a few cases were misclassified. We applied specific methods to explain which parts of the image were important for the AI algorithm and to ensure a reliable result. Our study shows that classification of lymphoma subtypes is possible with high accuracy. We think that routine histopathological applications for AI should be pursued. Abstract The diagnosis and subtyping of non-Hodgkin lymphoma (NHL) are challenging and require expert knowledge, great experience, thorough morphological analysis, and often additional expensive immunohistological and molecular methods. As these requirements are not always available, supplemental methods supporting morphology-based decision making and potentially entity subtyping are required. Deep learning methods have been shown to classify histopathological images with high accuracy, but data on NHL subtyping are limited. After annotation of histopathological whole-slide images and image patch extraction, we trained and optimized an EfficientNet convolutional neural network algorithm on 84,139 image patches from 629 patients and evaluated its potential to classify tumor-free reference lymph nodes, nodal small lymphocytic lymphoma/chronic lymphocytic leukemia, and nodal diffuse large B-cell lymphoma. The optimized algorithm achieved an accuracy of 95.56% on an independent test set including 16,960 image patches from 125 patients after the application of quality controls. Automatic classification of NHL is possible with high accuracy using deep learning on histopathological images, and routine diagnostic applications should be pursued.
21
Munien C, Viriri S. Classification of Hematoxylin and Eosin-Stained Breast Cancer Histology Microscopy Images Using Transfer Learning with EfficientNets. Comput Intell Neurosci 2021; 2021:5580914. [PMID: 33897774] [PMCID: PMC8052174] [DOI: 10.1155/2021/5580914]
Abstract
Breast cancer is a fatal disease and a leading cause of death in women worldwide. The process of diagnosis based on biopsy tissue is nontrivial, time-consuming, and prone to human error, and there may be conflict about the final diagnosis due to interobserver variability. Computer-aided diagnosis systems have been designed and implemented to combat these issues. These systems contribute significantly to increasing efficiency and accuracy and to reducing the cost of diagnosis, and they must perform well for their diagnoses to be reliable. This research investigates the application of the EfficientNet architecture to the classification of hematoxylin and eosin-stained breast cancer histology images provided by the ICIAR2018 dataset. Specifically, seven EfficientNets were fine-tuned and evaluated on their ability to classify images into four classes: normal, benign, in situ carcinoma, and invasive carcinoma. Moreover, two standard stain normalization techniques, Reinhard and Macenko, were applied to measure the impact of stain normalization on performance. The outcome of this approach reveals that the EfficientNet-B2 model yielded an accuracy and sensitivity of 98.33% using the Reinhard stain normalization method on the training images, and an accuracy and sensitivity of 96.67% using the Macenko stain normalization method. These satisfactory results indicate that transferring generic features from natural images to medical images through fine-tuning on EfficientNets can achieve satisfactory results.
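Reinhard stain normalization, one of the two techniques applied in this study, matches the per-channel mean and standard deviation of a patch to a reference patch in LAB colour space. A minimal sketch follows; the use of skimage, float images in [0, 1], and random inputs are assumptions for illustration.

```python
# Reinhard normalization: match each LAB channel's statistics to a reference.
import numpy as np
from skimage.color import rgb2lab, lab2rgb

def reinhard_normalize(src_rgb, ref_rgb):
    """src_rgb, ref_rgb: float RGB images in [0, 1] with shape (H, W, 3)."""
    src, ref = rgb2lab(src_rgb), rgb2lab(ref_rgb)
    out = np.empty_like(src)
    for c in range(3):  # shift and scale each LAB channel to the reference stats
        s_mu, s_sd = src[..., c].mean(), src[..., c].std() + 1e-8
        r_mu, r_sd = ref[..., c].mean(), ref[..., c].std()
        out[..., c] = (src[..., c] - s_mu) / s_sd * r_sd + r_mu
    return np.clip(lab2rgb(out), 0.0, 1.0)

patch = np.random.rand(256, 256, 3)       # stand-in for a histology patch
reference = np.random.rand(256, 256, 3)   # stand-in for the reference image
normalized = reinhard_normalize(patch, reference)
```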
Affiliation(s)
- Chanaleä Munien, Serestina Viriri: School of Mathematics, Statistics and Computer Science, University of KwaZulu-Natal, Durban, South Africa
22
Alzubaidi L, Zhang J, Humaidi AJ, Al-Dujaili A, Duan Y, Al-Shamma O, Santamaría J, Fadhel MA, Al-Amidie M, Farhan L. Review of deep learning: concepts, CNN architectures, challenges, applications, future directions. J Big Data 2021; 8:53. [PMID: 33816053] [PMCID: PMC8010506] [DOI: 10.1186/s40537-021-00444-8]
Abstract
In the last few years, the deep learning (DL) computing paradigm has been deemed the gold standard in the machine learning (ML) community. It has gradually become the most widely used computational approach in the field of ML, achieving outstanding results on several complex cognitive tasks and matching or even beating human performance. One of the benefits of DL is the ability to learn from massive amounts of data. The DL field has grown rapidly in the last few years and has been used to successfully address a wide range of traditional applications. More importantly, DL has outperformed well-known ML techniques in many domains, e.g., cybersecurity, natural language processing, bioinformatics, robotics and control, and medical information processing, among many others. Although several works have reviewed the state of the art of DL, each of them tackled only a single aspect, leaving readers without an overall picture. Therefore, in this contribution, we take a more holistic approach in order to provide a more suitable starting point from which to develop a full understanding of DL. Specifically, this review attempts to provide a comprehensive survey of the most important aspects of DL, including the enhancements recently added to the field. In particular, this paper outlines the importance of DL and presents the types of DL techniques and networks. It then presents convolutional neural networks (CNNs), the most utilized DL network type, and describes the development of CNN architectures together with their main features, starting with the AlexNet network and closing with the High-Resolution network (HR.Net). Finally, we present the challenges and suggested solutions to help researchers understand the existing research gaps, followed by a list of the major DL applications. Computational tools including FPGAs, GPUs, and CPUs are summarized along with a description of their influence on DL. The paper ends with the evolution matrix, benchmark datasets, and a summary and conclusion.
Affiliation(s)
- Laith Alzubaidi
- School of Computer Science, Queensland University of Technology, Brisbane, QLD 4000, Australia
- AlNidhal Campus, University of Information Technology & Communications, Baghdad 10001, Iraq
- Jinglan Zhang
- School of Computer Science, Queensland University of Technology, Brisbane, QLD 4000, Australia
- Amjad J. Humaidi
- Control and Systems Engineering Department, University of Technology, Baghdad 10001, Iraq
- Ayad Al-Dujaili
- Electrical Engineering Technical College, Middle Technical University, Baghdad 10001, Iraq
- Ye Duan
- Faculty of Electrical Engineering & Computer Science, University of Missouri, Columbia, MO 65211, USA
- Omran Al-Shamma
- AlNidhal Campus, University of Information Technology & Communications, Baghdad 10001, Iraq
- J. Santamaría
- Department of Computer Science, University of Jaén, 23071 Jaén, Spain
- Mohammed A. Fadhel
- College of Computer Science and Information Technology, University of Sumer, Thi Qar 64005, Iraq
- Muthana Al-Amidie
- Faculty of Electrical Engineering & Computer Science, University of Missouri, Columbia, MO 65211, USA
- Laith Farhan
- School of Engineering, Manchester Metropolitan University, Manchester M1 5GD, UK
23
Alzubaidi L, Al-Amidie M, Al-Asadi A, Humaidi AJ, Al-Shamma O, Fadhel MA, Zhang J, Santamaría J, Duan Y. Novel Transfer Learning Approach for Medical Imaging with Limited Labeled Data. Cancers (Basel) 2021; 13:1590. [PMID: 33808207 PMCID: PMC8036379 DOI: 10.3390/cancers13071590] [Citation(s) in RCA: 65] [Impact Index Per Article: 21.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/20/2021] [Revised: 03/24/2021] [Accepted: 03/27/2021] [Indexed: 12/27/2022] Open
Abstract
Deep learning requires a large amount of data to perform well. However, the field of medical image analysis suffers from a lack of sufficient data for training deep learning models. Moreover, medical images require manual labeling, usually provided by human annotators from various backgrounds, and the annotation process is time-consuming, expensive, and prone to errors. Transfer learning was introduced to reduce the need for annotation by transferring deep learning models with knowledge from a previous task and then fine-tuning them on a relatively small dataset of the current task. Most methods of medical image classification employ transfer learning from models pretrained on natural-image datasets, e.g., ImageNet, which has been shown to be ineffective due to the mismatch between the features learned from natural images and those needed for medical images; it also leads to the use of unnecessarily deep and elaborate models. In this paper, we propose a novel transfer learning approach that overcomes these drawbacks by first training the deep learning model on large unlabeled medical image datasets and then transferring the knowledge to train the model on the small amount of labeled medical images. Additionally, we propose a new deep convolutional neural network (DCNN) model that combines recent advancements in the field. We conducted several experiments on two challenging medical imaging scenarios: skin cancer and breast cancer classification. According to the reported results, the proposed approach significantly improves the performance of both classification tasks. For skin cancer, the proposed model achieved an F1-score of 89.09% when trained from scratch and 98.53% with the proposed approach. For the breast cancer scenario, it achieved accuracies of 85.29% when trained from scratch and 97.51% with the proposed approach. Finally, we conclude that our method can be applied to many medical imaging problems in which a substantial amount of unlabeled image data is available and labeled image data is limited, and that it can be used to improve the performance of medical imaging tasks in the same domain. To demonstrate this, we used the pretrained skin cancer model to train on foot skin images, classifying them into two classes: normal or abnormal (diabetic foot ulcer, DFU). This model achieved an F1-score of 86.0% when trained from scratch, 96.25% using transfer learning, and 99.25% using double transfer learning.
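As an illustration of the double-transfer idea described above, the sketch below shows how weights trained on a large in-domain dataset can seed a small labeled task, whose weights can in turn seed a third task (e.g., DFU classification). The ResNet-18 backbone, file names, and helper function are hypothetical stand-ins; the paper uses its own custom DCNN and training protocol.

```python
# A minimal sketch of chained ("double") transfer learning: initialize each new
# task's backbone from the previous stage's weights and retrain the head.
# The backbone and checkpoint names are illustrative assumptions.
from typing import Optional

import torch
import torch.nn as nn
from torchvision import models

def build_classifier(num_classes: int, weights_path: Optional[str] = None) -> nn.Module:
    model = models.resnet18()  # stand-in backbone, not the authors' DCNN
    if weights_path is not None:
        state = torch.load(weights_path)
        # Keep backbone weights from the previous stage, drop its old head.
        state = {k: v for k, v in state.items() if not k.startswith("fc.")}
        model.load_state_dict(state, strict=False)
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # fresh task head
    return model

# Stage 1: train on the large in-domain dataset, then save the weights, e.g.:
#   torch.save(stage1_model.state_dict(), "medical_pretrain.pt")
# Stage 2: fine-tune on the small labeled task (e.g., skin cancer):
#   skin_model = build_classifier(num_classes=2, weights_path="medical_pretrain.pt")
# Stage 3: transfer again to the DFU task (the "double" transfer):
#   dfu_model = build_classifier(num_classes=2, weights_path="skin_finetuned.pt")
model = build_classifier(num_classes=2)  # randomly initialized stage shown here
```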
Affiliation(s)
- Laith Alzubaidi
- School of Computer Science, Queensland University of Technology, Brisbane, QLD 4000, Australia
- AlNidhal Campus, University of Information Technology & Communications, Baghdad 10001, Iraq
- Muthana Al-Amidie
- Faculty of Electrical Engineering & Computer Science, University of Missouri, Columbia, MO 65211, USA
- Ahmed Al-Asadi
- Faculty of Electrical Engineering & Computer Science, University of Missouri, Columbia, MO 65211, USA
- Amjad J. Humaidi
- Control and Systems Engineering Department, University of Technology, Baghdad 10001, Iraq
- Omran Al-Shamma
- AlNidhal Campus, University of Information Technology & Communications, Baghdad 10001, Iraq
- Mohammed A. Fadhel
- College of Computer Science and Information Technology, University of Sumer, Thi Qar 64005, Iraq
- Jinglan Zhang
- School of Computer Science, Queensland University of Technology, Brisbane, QLD 4000, Australia
- J. Santamaría
- Department of Computer Science, University of Jaén, 23071 Jaén, Spain
- Ye Duan
- Faculty of Electrical Engineering & Computer Science, University of Missouri, Columbia, MO 65211, USA