1
Kanda T, Wakiya T, Ishido K, Kimura N, Nagase H, Yoshida E, Nakagawa J, Matsuzaka M, Niioka T, Sasaki Y, Hakamada K. Noninvasive Computed Tomography-Based Deep Learning Model Predicts In Vitro Chemosensitivity Assay Results in Pancreatic Cancer. Pancreas 2024; 53:e55-e61. [PMID: 38019604] [DOI: 10.1097/mpa.0000000000002270]
Abstract
OBJECTIVES We aimed to predict in vitro chemosensitivity assay results from computed tomography (CT) images by applying deep learning (DL) to optimize chemotherapy for pancreatic ductal adenocarcinoma (PDAC). MATERIALS AND METHODS Preoperative enhanced abdominal CT images and the histoculture drug response assay (HDRA) results were collected from 33 PDAC patients undergoing surgery. Deep learning was performed using CT images of both the HDRA-positive and HDRA-negative groups. We trimmed small patches from the entire tumor area. We established various prediction labels for HDRA results with 5-fluorouracil (FU), gemcitabine (GEM), and paclitaxel (PTX). We built a predictive model using a residual convolutional neural network and used 3-fold cross-validation. RESULTS Of the 33 patients, effective response to FU, GEM, and PTX by HDRA was observed in 19 (57.6%), 11 (33.3%), and 23 (88.5%) patients, respectively. The average accuracy and the area under the receiver operating characteristic curve (AUC) of the model for predicting the effective response to FU were 93.4% and 0.979, respectively. In the prediction of GEM, the models demonstrated high accuracy (92.8%) and AUC (0.969). Likewise, the model for predicting response to PTX had a high performance (accuracy, 95.9%; AUC, 0.979). CONCLUSIONS Our CT patch-based DL model exhibited high predictive performance in projecting HDRA results. Our study suggests that the DL approach could possibly provide a noninvasive means for the optimization of chemotherapy.
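The patch-based workflow this abstract describes (trim small patches from the tumor area, classify each patch, then pool patch outputs into a patient-level prediction) can be sketched as follows. The patch size, stride, and majority-vote pooling rule below are illustrative assumptions, not the authors' settings:

```python
import numpy as np

def extract_patches(roi, size=32, stride=16):
    """Slide a window over a 2-D tumor ROI and collect small patches."""
    h, w = roi.shape
    return np.array([roi[y:y + size, x:x + size]
                     for y in range(0, h - size + 1, stride)
                     for x in range(0, w - size + 1, stride)])

def patient_level_call(patch_probs, threshold=0.5):
    """Aggregate per-patch positive probabilities into a single
    HDRA-positive/negative call by majority vote over patches."""
    votes = patch_probs >= threshold
    return votes.mean() >= 0.5

roi = np.random.rand(96, 96)     # stand-in for a CT tumor crop
patches = extract_patches(roi)
print(patches.shape)             # (25, 32, 32)
```

In the paper, each patch would instead be scored by the residual CNN, with 3-fold cross-validation at the patient level so that no patient contributes patches to both training and test folds.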
Affiliation(s)
- Taishu Kanda, Taiichi Wakiya, Keinosuke Ishido, Norihisa Kimura, Hayato Nagase, Eri Yoshida: Department of Gastroenterological Surgery, Hirosaki University Graduate School of Medicine, Hirosaki City, Japan
- Yoshihiro Sasaki: Medical Informatics, Hirosaki University Hospital, Hirosaki, Japan
- Kenichi Hakamada: Department of Gastroenterological Surgery, Hirosaki University Graduate School of Medicine, Hirosaki City, Japan
2
Klang E, Sourosh A, Nadkarni GN, Sharif K, Lahat A. Deep Learning and Gastric Cancer: Systematic Review of AI-Assisted Endoscopy. Diagnostics (Basel) 2023; 13:3613. [PMID: 38132197] [PMCID: PMC10742887] [DOI: 10.3390/diagnostics13243613]
Abstract
BACKGROUND Gastric cancer (GC), a significant health burden worldwide, is typically diagnosed in the advanced stages due to its non-specific symptoms and complex morphological features. Deep learning (DL) has shown potential for improving and standardizing early GC detection. This systematic review aims to evaluate the current status of DL in pre-malignant, early-stage, and gastric neoplasia analysis. METHODS A comprehensive literature search was conducted in PubMed/MEDLINE for original studies implementing DL algorithms for gastric neoplasia detection using endoscopic images. We adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The focus was on studies providing quantitative diagnostic performance measures and those comparing AI performance with human endoscopists. RESULTS Our review encompasses 42 studies that utilize a variety of DL techniques. The findings demonstrate the utility of DL in GC classification, detection, tumor invasion depth assessment, cancer margin delineation, lesion segmentation, and detection of early-stage and pre-malignant lesions. Notably, DL models frequently matched or outperformed human endoscopists in diagnostic accuracy. However, heterogeneity in DL algorithms, imaging techniques, and study designs precluded a definitive conclusion about the best algorithmic approach. CONCLUSIONS The promise of artificial intelligence in improving and standardizing gastric neoplasia detection, diagnosis, and segmentation is significant. This review is limited by predominantly single-center studies and undisclosed datasets used in AI training, impacting generalizability and demographic representation. Further, retrospective algorithm training may not reflect actual clinical performance, and a lack of model details hinders replication efforts. More research is needed to substantiate these findings, including larger-scale multi-center studies, prospective clinical trials, and comprehensive technical reporting of DL algorithms and datasets, particularly regarding the heterogeneity in DL algorithms and study designs.
Affiliation(s)
- Eyal Klang: Division of Data-Driven and Digital Medicine (D3M), Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA; The Charles Bronfman Institute of Personalized Medicine, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA; ARC Innovation Center, Sheba Medical Center, Affiliated with Tel Aviv University Medical School, Tel Hashomer, Ramat Gan 52621, Tel Aviv, Israel
- Ali Sourosh, Girish N. Nadkarni: Division of Data-Driven and Digital Medicine (D3M), Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA; The Charles Bronfman Institute of Personalized Medicine, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
- Kassem Sharif, Adi Lahat: Department of Gastroenterology, Sheba Medical Center, Affiliated with Tel Aviv University Medical School, Tel Hashomer, Ramat Gan 52621, Tel Aviv, Israel
3
Huang Z, Wu J, Wang T, Li Z, Ioannou A. Class-Specific Distribution Alignment for semi-supervised medical image classification. Comput Biol Med 2023; 164:107280. [PMID: 37517324] [DOI: 10.1016/j.compbiomed.2023.107280]
Abstract
Despite the success of deep neural networks in medical image classification, the problem remains challenging as data annotation is time-consuming, and the class distribution is imbalanced due to the relative scarcity of diseases. To address this problem, we propose Class-Specific Distribution Alignment (CSDA), a semi-supervised learning framework based on self-training that is suitable to learn from highly imbalanced datasets. Specifically, we first provide a new perspective to distribution alignment by considering the process as a change of basis in the vector space spanned by marginal predictions, and then derive CSDA to capture class-dependent marginal predictions on both labeled and unlabeled data, in order to avoid the bias towards majority classes. Furthermore, we propose a Variable Condition Queue (VCQ) module to maintain a proportionately balanced number of unlabeled samples for each class. Experiments on three public datasets HAM10000, CheXpert and Kvasir show that our method provides competitive performance on semi-supervised skin disease, thoracic disease, and endoscopic image classification tasks.
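The distribution-alignment idea underlying CSDA can be illustrated in its generic form: rescale each prediction by the ratio of a target class prior to the model's empirical marginal, then renormalize. The sketch below shows only that basic mechanism, not the class-specific, change-of-basis formulation or the VCQ module of the paper:

```python
import numpy as np

def align_distribution(probs, target_prior):
    """Generic distribution alignment for self-training: push the
    model's marginal prediction toward a target class prior to
    counteract bias toward majority classes."""
    marginal = probs.mean(axis=0)               # model's empirical marginal
    scaled = probs * (target_prior / marginal)  # rescale toward the prior
    return scaled / scaled.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
probs = rng.dirichlet([5.0, 1.0, 1.0], size=1000)  # biased toward class 0
aligned = align_distribution(probs, np.array([1 / 3, 1 / 3, 1 / 3]))
```

After alignment the batch marginal sits much closer to the chosen prior, which is the effect CSDA exploits when generating pseudo-labels on imbalanced unlabeled data.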
Affiliation(s)
- Zhongzheng Huang: Fujian Provincial Key Laboratory of Information Processing and Intelligent Control, College of Computer and Control Engineering, Minjiang University, Fuzhou, China; College of Computer and Data Science, Fuzhou University, Fuzhou, China
- Jiawei Wu: Fujian Provincial Key Laboratory of Information Processing and Intelligent Control, College of Computer and Control Engineering, Minjiang University, Fuzhou, China; College of Mechanical and Electrical Engineering, Fujian Agriculture and Forestry University, Fuzhou, China
- Tao Wang: Fujian Provincial Key Laboratory of Information Processing and Intelligent Control, College of Computer and Control Engineering, Minjiang University, Fuzhou, China; International Digital Economy College, Minjiang University, Fuzhou, China
- Zuoyong Li: Fujian Provincial Key Laboratory of Information Processing and Intelligent Control, College of Computer and Control Engineering, Minjiang University, Fuzhou, China
- Anastasia Ioannou: International Digital Economy College, Minjiang University, Fuzhou, China; Department of Computer Science and Engineering, European University Cyprus, Nicosia, Cyprus
4
Malik H, Anees T, Al-Shamaylehs AS, Alharthi SZ, Khalil W, Akhunzada A. Deep Learning-Based Classification of Chest Diseases Using X-rays, CT Scans, and Cough Sound Images. Diagnostics (Basel) 2023; 13:2772. [PMID: 37685310] [PMCID: PMC10486427] [DOI: 10.3390/diagnostics13172772]
Abstract
Chest disease refers to a variety of lung disorders, including lung cancer (LC), COVID-19, pneumonia (PNEU), tuberculosis (TB), and numerous other respiratory disorders. The symptoms (i.e., fever, cough, sore throat, etc.) of these chest diseases are similar, which might mislead radiologists and health experts when classifying chest diseases. Chest X-rays (CXR), cough sounds, and computed tomography (CT) scans are utilized by researchers and doctors to identify chest diseases such as LC, COVID-19, PNEU, and TB. The objective of the work is to identify nine different types of chest diseases, including COVID-19, edema (EDE), LC, PNEU, pneumothorax (PNEUTH), normal, atelectasis (ATE), and consolidation lung (COL). Therefore, we designed a novel deep learning (DL)-based chest disease detection network (DCDD_Net) that uses a CXR, CT scans, and cough sound images for the identification of nine different types of chest diseases. The scalogram method is used to convert the cough sounds into an image. Before training the proposed DCDD_Net model, the borderline (BL) SMOTE is applied to balance the CXR, CT scans, and cough sound images of nine chest diseases. The proposed DCDD_Net model is trained and evaluated on 20 publicly available benchmark chest disease datasets of CXR, CT scan, and cough sound images. The classification performance of the DCDD_Net is compared with four baseline models, i.e., InceptionResNet-V2, EfficientNet-B0, DenseNet-201, and Xception, as well as state-of-the-art (SOTA) classifiers. The DCDD_Net achieved an accuracy of 96.67%, a precision of 96.82%, a recall of 95.76%, an F1-score of 95.61%, and an area under the curve (AUC) of 99.43%. The results reveal that DCDD_Net outperformed the other four baseline models in terms of many performance evaluation metrics. Thus, the proposed DCDD_Net model can provide significant assistance to radiologists and medical experts. Additionally, the proposed model was also shown to be resilient by statistical evaluations of the datasets using McNemar and ANOVA tests.
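The scalogram step (turning a cough sound into an image) can be approximated with a plain NumPy continuous wavelet transform: convolve the signal with a wavelet at several scales and stack the magnitudes into a 2-D image. The Morlet wavelet, scale range, and toy signal below are illustrative assumptions, not the paper's exact preprocessing:

```python
import numpy as np

def morlet(t, w=5.0):
    # Real Morlet wavelet: a Gaussian-windowed cosine.
    return np.exp(-t**2 / 2) * np.cos(w * t)

def scalogram(signal, scales):
    """CWT magnitude, one row per scale -> a 2-D 'image' of the sound."""
    rows = []
    for s in scales:
        t = np.arange(-4 * s, 4 * s + 1)
        wavelet = morlet(t / s) / np.sqrt(s)
        rows.append(np.abs(np.convolve(signal, wavelet, mode="same")))
    return np.array(rows)

# A toy "cough" burst: a damped 40 Hz oscillation sampled at 1 kHz.
fs = 1000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 40 * t) * np.exp(-5 * t)
img = scalogram(x, scales=np.arange(1, 33))
print(img.shape)  # (32, 1000)
```

The resulting scale-by-time image can then be fed to the same 2-D CNN pipeline as the CXR and CT inputs.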
Affiliation(s)
- Hassaan Malik, Tayyaba Anees: School of Systems and Technology, University of Management and Technology, Lahore 54770, Pakistan
- Ahmad Sami Al-Shamaylehs: Department of Networks and Cybersecurity, Faculty of Information Technology, Al-Ahliyya Amman University, Amman 19328, Jordan
- Salman Z. Alharthi: Department of Information System, College of Computers and Information Systems, Al-Lith Campus, Umm AL-Qura University, P.O. Box 7745, AL-Lith 21955, Saudi Arabia
- Wajeeha Khalil: Department of Computer Science and Information Technology, University of Engineering and Technology Peshawar, Peshawar 25000, Pakistan
- Adnan Akhunzada: College of Computing & IT, University of Doha for Science and Technology, Doha P.O. Box 24449, Qatar
5
Manzari ON, Ahmadabadi H, Kashiani H, Shokouhi SB, Ayatollahi A. MedViT: A robust vision transformer for generalized medical image classification. Comput Biol Med 2023; 157:106791. [PMID: 36958234] [DOI: 10.1016/j.compbiomed.2023.106791]
Abstract
Convolutional Neural Networks (CNNs) have advanced existing medical systems for automatic disease diagnosis. However, there are still concerns about the reliability of deep medical diagnosis systems against the potential threats of adversarial attacks since inaccurate diagnosis could lead to disastrous consequences in the safety realm. In this study, we propose a highly robust yet efficient CNN-Transformer hybrid model which is equipped with the locality of CNNs as well as the global connectivity of vision Transformers. To mitigate the high quadratic complexity of the self-attention mechanism while jointly attending to information in various representation subspaces, we construct our attention mechanism by means of an efficient convolution operation. Moreover, to alleviate the fragility of our Transformer model against adversarial attacks, we attempt to learn smoother decision boundaries. To this end, we augment the shape information of an image in the high-level feature space by permuting the feature mean and variance within mini-batches. With less computational complexity, our proposed hybrid model demonstrates its high robustness and generalization ability compared to the state-of-the-art studies on a large-scale collection of standardized MedMNIST-2D datasets.
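The mean/variance permutation described above can be sketched as a MixStyle-like operation on a batch of feature vectors: normalize each sample, then re-style it with the statistics of a randomly chosen other sample. MedViT's exact formulation may differ (e.g., interpolated rather than fully swapped statistics); this shows only the core mechanism:

```python
import numpy as np

def permute_feature_stats(feats, rng):
    """Swap per-sample feature mean/std across a mini-batch to augment
    style/shape information in the high-level feature space."""
    mu = feats.mean(axis=1, keepdims=True)
    sigma = feats.std(axis=1, keepdims=True) + 1e-6
    normalized = (feats - mu) / sigma
    perm = rng.permutation(len(feats))
    # Re-style each normalized sample with another sample's statistics.
    return normalized * sigma[perm] + mu[perm]

rng = np.random.default_rng(1)
feats = rng.normal(size=(8, 16))   # a batch of 8 high-level feature vectors
aug = permute_feature_stats(feats, rng)
print(aug.shape)  # (8, 16)
```

Because only first- and second-order statistics move between samples, the semantic content of each feature vector is largely preserved while the decision boundary is smoothed.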
Affiliation(s)
- Omid Nejati Manzari, Hamid Ahmadabadi: School of Electrical Engineering, Iran University of Science and Technology, Tehran, Iran
- Hossein Kashiani: Lane Department of Computer Science and Electrical Engineering, West Virginia University, Morgantown, USA
- Shahriar B Shokouhi, Ahmad Ayatollahi: School of Electrical Engineering, Iran University of Science and Technology, Tehran, Iran
6
Tang S, Yu X, Cheang CF, Liang Y, Zhao P, Yu HH, Choi IC. Transformer-based multi-task learning for classification and segmentation of gastrointestinal tract endoscopic images. Comput Biol Med 2023; 157:106723. [PMID: 36907035] [DOI: 10.1016/j.compbiomed.2023.106723]
Abstract
Although widely used to help endoscopists identify gastrointestinal (GI) tract diseases through classification and segmentation, models based on convolutional neural networks (CNNs) have difficulty distinguishing ambiguous types of lesions presented in endoscopic images, and they are hard to train when labeled datasets are scarce; both problems limit further improvements in diagnostic accuracy. To address these challenges, we first proposed a multi-task network (TransMT-Net) capable of simultaneously learning two tasks (classification and segmentation). It combines a transformer, designed to learn global features, with the strengths of CNNs in learning local features, to achieve more accurate prediction of lesion types and regions in GI tract endoscopic images. We further adopted active learning in TransMT-Net to tackle the shortage of labeled images. A dataset was created from the CVC-ClinicDB dataset, Macau Kiang Wu Hospital, and Zhongshan Hospital to evaluate model performance. The experimental results show that our model not only achieved 96.94% accuracy in the classification task and a 77.76% Dice similarity coefficient in the segmentation task but also outperformed the other models on our test set. Meanwhile, active learning produced positive results with a small-scale initial training set; even with 30% of the initial training set, performance was comparable to that of most models trained on the full set. Consequently, the proposed TransMT-Net has demonstrated strong performance on GI tract endoscopic images and, through active learning, can alleviate the shortage of labeled images.
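The active-learning loop can be sketched with uncertainty sampling; the abstract does not specify the acquisition function, so predictive entropy is shown here as one common choice (all numbers are made up):

```python
import numpy as np

def select_for_labeling(probs, budget):
    """Pick the `budget` most uncertain unlabeled samples by predictive
    entropy; these would be sent to an annotator for the next round."""
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    return np.argsort(entropy)[::-1][:budget]

probs = np.array([[0.98, 0.01, 0.01],   # confident -> skip
                  [0.34, 0.33, 0.33],   # most uncertain -> label first
                  [0.70, 0.20, 0.10]])  # moderately uncertain
print(select_for_labeling(probs, budget=1))  # [1]
```

Each round, the newly labeled samples are added to the training set and the model is retrained, which is how a small initial set can approach full-dataset performance.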
Affiliation(s)
- Suigu Tang, Xiaoyuan Yu, Chak Fong Cheang, Yanyan Liang, Penghui Zhao: Faculty of Innovation Engineering-School of Computer Science and Engineering, Macau University of Science and Technology, Macao Special Administrative Region of China
- Hon Ho Yu, I Cheong Choi: Kiang Wu Hospital, Macao Special Administrative Region of China
7
Chadebecq F, Lovat LB, Stoyanov D. Artificial intelligence and automation in endoscopy and surgery. Nat Rev Gastroenterol Hepatol 2023; 20:171-182. [PMID: 36352158] [DOI: 10.1038/s41575-022-00701-y]
Abstract
Modern endoscopy relies on digital technology, from high-resolution imaging sensors and displays to electronics connecting configurable illumination and actuation systems for robotic articulation. In addition to enabling more effective diagnostic and therapeutic interventions, the digitization of the procedural toolset enables video data capture of the internal human anatomy at unprecedented levels. Interventional video data encapsulate functional and structural information about a patient's anatomy as well as events, activity and action logs about the surgical process. This detailed but difficult-to-interpret record from endoscopic procedures can be linked to preoperative and postoperative records or patient imaging information. Rapid advances in artificial intelligence, especially in supervised deep learning, can utilize data from endoscopic procedures to develop systems for assisting procedures leading to computer-assisted interventions that can enable better navigation during procedures, automation of image interpretation and robotically assisted tool manipulation. In this Perspective, we summarize state-of-the-art artificial intelligence for computer-assisted interventions in gastroenterology and surgery.
Affiliation(s)
- François Chadebecq, Laurence B Lovat, Danail Stoyanov: Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
8
Afriyie Y, Weyori BA, Opoku AA. A scaling up approach: a research agenda for medical imaging analysis with applications in deep learning. J Exp Theor Artif Intell 2023. [DOI: 10.1080/0952813x.2023.2165721]
Affiliation(s)
- Yaw Afriyie: Department of Computer Science and Informatics, University of Energy and Natural Resources, School of Sciences, Sunyani, Ghana; Department of Computer Science, Faculty of Information and Communication Technology, SD Dombo University of Business and Integrated Development Studies, Wa, Ghana
- Benjamin A. Weyori: Department of Computer Science and Informatics, University of Energy and Natural Resources, School of Sciences, Sunyani, Ghana
- Alex A. Opoku: Department of Mathematics & Statistics, University of Energy and Natural Resources, School of Sciences, Sunyani, Ghana
9
Cuevas-Rodriguez EO, Galvan-Tejada CE, Maeda-Gutiérrez V, Moreno-Chávez G, Galván-Tejada JI, Gamboa-Rosales H, Luna-García H, Moreno-Baez A, Celaya-Padilla JM. Comparative study of convolutional neural network architectures for gastrointestinal lesions classification. PeerJ 2023; 11:e14806. [PMID: 36945355] [PMCID: PMC10024900] [DOI: 10.7717/peerj.14806]
Abstract
The gastrointestinal (GI) tract can be affected by different diseases or lesions such as esophagitis, ulcers, hemorrhoids, and polyps, among others. Some of them, such as polyps, can be precursors of cancer. Endoscopy is the standard procedure for the detection of these lesions. The main drawback of this procedure is that the diagnosis depends on the expertise of the doctor, which means that some important findings may be missed. In recent years, this problem has been addressed by deep learning (DL) techniques. Endoscopic studies use digital images. The most widely used DL technique for image processing is the convolutional neural network (CNN) due to its high accuracy for modeling complex phenomena. Different CNNs are characterized by their architecture. In this article, four architectures are compared: AlexNet, DenseNet-201, Inception-v3, and ResNet-101. To determine which architecture best classifies GI tract lesions, a set of metrics was used: accuracy, precision, sensitivity, specificity, F1-score, and area under the curve (AUC). These architectures were trained and tested on the HyperKvasir dataset. From this dataset, a total of 6,792 images corresponding to 10 findings were used. A transfer learning approach and a data augmentation technique were applied. The best performing architecture was DenseNet-201, which achieved 97.11% accuracy, 96.3% sensitivity, 99.67% specificity, and 95% AUC.
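The comparison metrics listed in this abstract follow directly from the binary (per-class, one-vs-rest) confusion matrix; a minimal sketch with made-up labels:

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Accuracy, precision, sensitivity, specificity, and F1-score
    computed from the binary confusion matrix."""
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))
    tn = int(np.sum((y_true == 0) & (y_pred == 0)))
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)            # a.k.a. recall
    return {
        "accuracy": (tp + tn) / len(y_true),
        "precision": precision,
        "sensitivity": sensitivity,
        "specificity": tn / (tn + fp),
        "f1": 2 * precision * sensitivity / (precision + sensitivity),
    }

y_true = np.array([1, 1, 1, 0, 0, 0, 0, 0])
y_pred = np.array([1, 1, 0, 0, 0, 0, 0, 1])
m = binary_metrics(y_true, y_pred)
print(round(m["accuracy"], 3))  # 0.75
```

For a 10-finding problem like HyperKvasir, these would be computed one-vs-rest per class and then averaged.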
10
Narasimha Raju AS, Jayavel K, Rajalakshmi T. ColoRectalCADx: Expeditious Recognition of Colorectal Cancer with Integrated Convolutional Neural Networks and Visual Explanations Using Mixed Dataset Evidence. Comput Math Methods Med 2022; 2022:8723957. [PMID: 36404909] [PMCID: PMC9671728] [DOI: 10.1155/2022/8723957]
Abstract
Colorectal cancer typically affects the gastrointestinal tract within the human body. Colonoscopy is one of the most accurate methods of detecting cancer. Current computer-assisted diagnosis (CADx) systems identify cancer with a limited number of deep learning methods and do not employ mixed datasets. The proposed system, called ColoRectalCADx, is supported by deep learning (DL) models suitable for cancer research. The CADx system comprises five stages: convolutional neural networks (CNN), support vector machine (SVM), long short-term memory (LSTM), visual explanation such as gradient-weighted class activation mapping (Grad-CAM), and semantic segmentation phases. The key components of the CADx system are equipped with 9 individual and 12 integrated CNNs, for a total of 21 CNNs across the investigational experiments. In the subsequent phase, the CADx combines the CNNs' concatenated transfer learning functions with SVM classification. Additional classification is applied to ensure effective transfer of results from CNN to LSTM. The system mainly takes a combination of CVC Clinic DB, Kvasir2, and Hyper Kvasir as a mixed input dataset. After CNN and LSTM, in the advanced stage, malignancies are detected by using a better polyp recognition technique with Grad-CAM and semantic segmentation using U-Net. CADx results have been stored on Google Cloud for record retention. In these experiments, among all the CNNs, the individual CNN DenseNet-201 (87.1% training and 84.7% testing accuracies) and the integrated CNN ADaDR-22 (84.61% training and 82.17% testing accuracies) were the most efficient for cancer detection with the CNN+LSTM model. ColoRectalCADx accurately identifies cancer through the individual CNN DenseNet-201 and the integrated CNN ADaDR-22. In Grad-CAM's visual explanations, CNN DenseNet-201 displays precise visualization of polyps, and U-Net provides precise segmentation of malignant polyps.
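The Grad-CAM visual explanation used here reduces to weighting each feature map of the last convolutional layer by the spatial average of its gradient, summing, and clipping negatives; a minimal sketch with random stand-in tensors (shapes are illustrative):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM: weight each feature map by the global-average-pooled
    gradient of the target class score, sum over channels, and apply
    ReLU to keep only regions that support the class.
    activations, gradients: (channels, H, W) from the last conv layer."""
    weights = gradients.mean(axis=(1, 2))                             # (C,)
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0)
    return cam / cam.max() if cam.max() > 0 else cam                  # [0, 1]

rng = np.random.default_rng(2)
acts = rng.random((64, 7, 7))        # stand-in feature maps
grads = rng.normal(size=(64, 7, 7))  # stand-in gradients
heatmap = grad_cam(acts, grads)
print(heatmap.shape)  # (7, 7)
```

The low-resolution heatmap is then upsampled and overlaid on the endoscopic frame to localize the polyp.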
Affiliation(s)
- Akella S. Narasimha Raju, Kayalvizhi Jayavel: Department of Networking and Communications, School of Computing, SRM Institute of Science and Technology, Kattankulathur, 603203 Chennai, India
- T. Rajalakshmi: Department of Electronics and Communication Engineering, School of Electrical and Electronics Engineering, SRM Institute of Science and Technology, Kattankulathur, 603203 Chennai, India
11
Montalbo FJP. Fusing Compressed Deep ConvNets with a Self-Normalizing Residual Block and Alpha Dropout for a Cost-Efficient Classification and Diagnosis of Gastrointestinal Tract Diseases. MethodsX 2022; 9:101925. [DOI: 10.1016/j.mex.2022.101925]
12
Su Q, Wang F, Chen D, Chen G, Li C, Wei L. Deep convolutional neural networks with ensemble learning and transfer learning for automated detection of gastrointestinal diseases. Comput Biol Med 2022; 150:106054. [PMID: 36244302] [DOI: 10.1016/j.compbiomed.2022.106054]
Abstract
Gastrointestinal (GI) diseases are serious threats to human health, and their detection and treatment place a huge burden on medical institutions. Imaging-based methods are among the most important approaches for automated detection of gastrointestinal diseases. Although deep neural networks have shown impressive performance in a number of imaging tasks, their application to the detection of gastrointestinal diseases has not been sufficiently explored. In this study, we propose a novel and practical method to detect gastrointestinal disease from wireless capsule endoscopy (WCE) images with convolutional neural networks. The proposed method utilizes three backbone networks, modified and fine-tuned by transfer learning, as the feature extractors, and an integrated classifier using ensemble learning is trained for the detection of gastrointestinal diseases. The proposed method outperforms existing computational methods on the benchmark dataset. The case study results show that the proposed method captures discriminative information from wireless capsule endoscopy images. This work shows the potential of deep learning-based computer vision models for effective GI disease screening.
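The integrated classifier combines the three fine-tuned backbones through ensemble learning; soft voting (averaging class probabilities) is one simple instance of such an ensemble and is sketched below with made-up probability outputs:

```python
import numpy as np

def soft_vote(prob_list, weights=None):
    """Ensemble by (optionally weighted) averaging of the class
    probabilities from several backbone classifiers, then argmax."""
    stacked = np.stack(prob_list)              # (models, samples, classes)
    avg = np.average(stacked, axis=0, weights=weights)
    return avg.argmax(axis=1)

p1 = np.array([[0.6, 0.4], [0.2, 0.8]])   # backbone 1 outputs
p2 = np.array([[0.7, 0.3], [0.6, 0.4]])   # backbone 2 outputs
p3 = np.array([[0.4, 0.6], [0.1, 0.9]])   # backbone 3 outputs
print(soft_vote([p1, p2, p3]))  # [0 1]
```

The paper's integrated classifier is trained rather than a fixed average, but the averaging form shows why an ensemble can outrank any single backbone: individual errors (here, backbone 3 on sample 1) are outvoted.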
Affiliation(s)
- Qiaosen Su, Fengsheng Wang: School of Software, Shandong University, Jinan, China; Joint SDU-NTU Centre for Artificial Intelligence Research (C-FAIR), Shandong University, Jinan, China
- Chao Li: Beidahuang Industry Group General Hospital, Harbin, China
- Leyi Wei: School of Software, Shandong University, Jinan, China; Joint SDU-NTU Centre for Artificial Intelligence Research (C-FAIR), Shandong University, Jinan, China
13
Dexterous Identification of Carcinoma through ColoRectalCADx with Dichotomous Fusion CNN and UNet Semantic Segmentation. Comput Intell Neurosci 2022; 2022:4325412. [PMID: 36262620] [PMCID: PMC9576362] [DOI: 10.1155/2022/4325412]
Abstract
Human colorectal disorders in the digestive tract are recognized by reference colonoscopy. The current system recognizes cancer through a three-stage system that utilizes two sets of colonoscopy data, but identifying polyps by visualization has not been addressed. The proposed system, ColoRectalCADx, is a five-stage system that uses three publicly accessible datasets as input data for cancer detection: CVC Clinic DB, Kvasir2, and Hyper Kvasir. After the image preprocessing stages, system experiments were performed with seven prominent end-to-end convolutional neural networks (CNNs) and nine fusion CNN models to extract the spatial features. The end-to-end CNN and fusion features were then passed to Discrete Wavelet Transform (DWT) feature extraction and Support Vector Machine (SVM) classification, which were used to retrieve time- and spatial-frequency features. Experimentally, results were obtained for five stages. For each of the three datasets, from stage 1 to stage 3, the end-to-end CNN DenseNet-201 obtained the best testing accuracy (98%, 87%, 84%), ((98%, 97%), (87%, 87%), (84%, 84%)), ((99.03%, 99%), (88.45%, 88%), (83.61%, 84%)). For each of the three datasets, from stage 2, the CNN fusion model DaRD-22 obtained the best test accuracy ((93%, 97%), (82%, 84%), (69%, 57%)), and for stage 4, the ADaRDEV2-22 fusion model achieved the best test accuracy ((95.73%, 94%), (81.20%, 81%), (72.56%, 58%)). For the input image segmentation datasets CVC Clinic-Seg, KvasirSeg, and Hyper Kvasir, malignant polyps were identified with the U-Net CNN model, with loss scores of 0.7842 (CVC Clinic DB), 0.6977 (Kvasir2), and 0.6910 (Hyper Kvasir).
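The DWT feature-extraction stage can be illustrated with one level of the 2-D Haar transform, whose sub-band statistics would then feed the SVM; the Haar basis and single decomposition level are illustrative choices, not necessarily the paper's:

```python
import numpy as np

def haar_dwt2(img):
    """One level of the 2-D Haar discrete wavelet transform.
    Returns the approximation (LL) and detail (LH, HL, HH) sub-bands."""
    # Pairwise averages/differences along columns (row filtering)...
    a = (img[:, 0::2] + img[:, 1::2]) / 2.0   # row low-pass
    d = (img[:, 0::2] - img[:, 1::2]) / 2.0   # row high-pass
    # ...then along rows (column filtering).
    ll = (a[0::2, :] + a[1::2, :]) / 2.0
    lh = (a[0::2, :] - a[1::2, :]) / 2.0
    hl = (d[0::2, :] + d[1::2, :]) / 2.0
    hh = (d[0::2, :] - d[1::2, :]) / 2.0
    return ll, lh, hl, hh

img = np.arange(64, dtype=float).reshape(8, 8)   # toy 8x8 "feature map"
ll, lh, hl, hh = haar_dwt2(img)
print(ll.shape)  # (4, 4)
```

Concatenated statistics (e.g., mean and energy) of the four sub-bands give a compact time/spatial-frequency descriptor for an SVM.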
|
14
|
A Novel Multi-Feature Fusion Method for Classification of Gastrointestinal Diseases Using Endoscopy Images. Diagnostics (Basel) 2022; 12:diagnostics12102316. [PMID: 36292006 PMCID: PMC9600128 DOI: 10.3390/diagnostics12102316] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2022] [Revised: 09/02/2022] [Accepted: 09/06/2022] [Indexed: 11/17/2022] Open
Abstract
The first step in the diagnosis of gastric abnormalities is the detection of various abnormalities in the human gastrointestinal tract. Manual examination of endoscopy images relies on a medical practitioner’s expertise to identify inflammatory regions on the inner surface of the gastrointestinal tract. The length of the alimentary canal and the large volume of images obtained from endoscopic procedures make traditional detection methods time-consuming and laborious. Recently, deep learning architectures have achieved better results in the classification of endoscopy images. However, visual similarities between different portions of the gastrointestinal tract pose a challenge for effective disease detection. This work proposes a novel system for the classification of endoscopy images by focusing on feature mining through convolutional neural networks (CNN). The model presented is built by combining a state-of-the-art architecture (i.e., EfficientNet B0) with a custom-built CNN architecture named Effimix. The proposed Effimix model employs a combination of squeeze-and-excitation layers and self-normalising activation layers for precise classification of gastrointestinal diseases. Experimental observations on the HyperKvasir dataset confirm the effectiveness of the proposed architecture for the classification of endoscopy images. The proposed model yields an accuracy of 97.99%, with an F1 score, precision, and recall of 97%, 97%, and 98%, respectively, which is significantly higher than existing works.
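The squeeze-and-excitation layers mentioned above recalibrate channel responses by learning one multiplicative weight per channel. A minimal numpy sketch follows; the weight shapes and reduction ratio are illustrative assumptions, not the Effimix parameters:

```python
import numpy as np

def squeeze_excite(feature_maps, w1, w2):
    """Squeeze-and-excitation over feature maps of shape (C, H, W):
    global-average-pool to (C,), two FC layers (ReLU then sigmoid),
    then rescale each channel by its learned weight."""
    z = feature_maps.mean(axis=(1, 2))            # squeeze -> (C,)
    s = np.maximum(w1 @ z, 0.0)                   # excitation FC1 + ReLU -> (C//r,)
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))           # excitation FC2 + sigmoid -> (C,)
    return feature_maps * s[:, None, None]        # channel-wise rescaling

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))                # 8 channels, reduction ratio r = 4
w1 = rng.standard_normal((2, 8))                  # illustrative random weights
w2 = rng.standard_normal((8, 2))
y = squeeze_excite(x, w1, w2)                     # same shape as x
```

Each output channel is the corresponding input channel scaled by a value in (0, 1), which is how the block suppresses uninformative channels.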
|
15
|
Security Analysis of Social Network Topic Mining Using Big Data and Optimized Deep Convolutional Neural Network. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:8045968. [PMID: 36188706 PMCID: PMC9525195 DOI: 10.1155/2022/8045968] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/11/2022] [Accepted: 09/06/2022] [Indexed: 11/17/2022]
Abstract
This research aims to conduct topic mining and data analysis of social network security using social network big data. At present, the main problem is that users’ behavior on social networks may reveal their private data. The main contribution lies in the establishment of a network security topic detection model combining a Convolutional Neural Network (CNN) and social network big data technology. A Deep Convolutional Neural Network (DCNN) is utilized to complete the analysis and search of social network security issues. The Long Short-Term Memory (LSTM) algorithm is used to extract Weibo topic information. Experimental results show that the recognition accuracy of the constructed model can reach 96.17% after 120 iterations, which is at least 5.4% higher than that of other models. Additionally, the accuracy, recall, and F1 value of the intrusion detection model are 88.57%, 75.22%, and 72.05%, respectively, at least 3.1% higher than those of other algorithms. In addition, the training time and testing time of the improved DCNN network security detection model stabilize at 65.86 s and 27.90 s, respectively. The prediction time of the improved DCNN network security detection model is significantly shorter than that of the models proposed by other scholars. The experimental conclusion is that the improved DCNN has the characteristic of lower delay under deep learning. The model shows good performance for secure network data transmission.
|
16
|
A Robust Deep Model for Classification of Peptic Ulcer and Other Digestive Tract Disorders Using Endoscopic Images. Biomedicines 2022; 10:biomedicines10092195. [PMID: 36140296 PMCID: PMC9496137 DOI: 10.3390/biomedicines10092195] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2022] [Revised: 08/23/2022] [Accepted: 08/24/2022] [Indexed: 11/17/2022] Open
Abstract
Accurate patient disease classification and detection through deep-learning (DL) models are increasingly contributing to the area of biomedical imaging. The most frequent gastrointestinal (GI) tract ailments are peptic ulcers and stomach cancer. Conventional endoscopy is a painful and hectic procedure for the patient, while Wireless Capsule Endoscopy (WCE) is a useful technology for diagnosing GI problems with painless gut imaging. However, it remains a challenge to investigate the thousands of images captured during the WCE procedure accurately and efficiently, because existing deep models do not achieve significant accuracy in WCE image analysis. So, to prevent emergency conditions among patients, we need an efficient and accurate DL model for real-time analysis. In this study, we propose a reliable and efficient approach for classifying GI tract abnormalities using WCE images by applying a deep Convolutional Neural Network (CNN). For this purpose, we propose a custom CNN architecture named GI Disease-Detection Network (GIDD-Net) that is designed from scratch with relatively few parameters to detect GI tract disorders more accurately and efficiently at a low computational cost. Moreover, our model successfully distinguishes GI disorders by visualizing class activation patterns in the stomach and bowel as a heat map. Because the Kvasir-Capsule image dataset has a significant class imbalance problem, we exploited the synthetic oversampling technique Borderline-SMOTE (BL-SMOTE) to evenly distribute the images among the classes. The proposed model was evaluated against various metrics and achieved the following values for the evaluation metrics: 98.9%, 99.8%, 98.9%, 98.9%, 98.8%, and 0.0474 for accuracy, AUC, F1-score, precision, recall, and loss, respectively. From the simulation results, it is noted that the proposed model outperforms other state-of-the-art models in all the evaluation metrics.
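BL-SMOTE differs from plain SMOTE in that it synthesizes new minority samples only around "danger" points near the class boundary. The sketch below is a minimal numpy rendition under the assumption of Euclidean neighbours; the study itself presumably used a library implementation such as imbalanced-learn:

```python
import numpy as np

def borderline_smote(X_min, X_maj, k=5, n_new=None, seed=0):
    """Minimal Borderline-SMOTE sketch: oversample only 'danger' minority
    points, i.e. those whose k nearest neighbours are mostly (but not all)
    majority-class samples."""
    rng = np.random.default_rng(seed)
    X_all = np.vstack([X_min, X_maj])
    is_maj = np.array([False] * len(X_min) + [True] * len(X_maj))
    danger = []
    for i, x in enumerate(X_min):
        d = np.linalg.norm(X_all - x, axis=1)
        d[i] = np.inf                               # exclude the point itself
        m = is_maj[np.argsort(d)[:k]].sum()         # majority count among k-NN
        if k / 2 <= m < k:                          # borderline ("danger") point
            danger.append(i)
    if not danger:
        return np.empty((0, X_min.shape[1]))
    n_new = n_new or len(X_maj) - len(X_min)        # balance classes by default
    synth = []
    for _ in range(n_new):
        i = rng.choice(danger)
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        d[i] = np.inf
        j = rng.choice(np.argsort(d)[:k])           # a minority neighbour
        synth.append(X_min[i] + rng.random() * (X_min[j] - X_min[i]))
    return np.array(synth)

# Toy example: 20 majority points on a line, 6 minority points just above it.
X_maj = np.column_stack([np.arange(20.0), np.zeros(20)])
X_min = np.array([[0.1, 0.3], [0.2, 0.3], [7.1, 0.3],
                  [7.2, 0.3], [14.1, 0.3], [14.2, 0.3]])
synthetic = borderline_smote(X_min, X_maj)          # 14 new minority samples
```

New samples are interpolations between a danger point and one of its minority neighbours, so they stay inside the minority region.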
|
17
|
Higuchi N, Hiraga H, Sasaki Y, Hiraga N, Igarashi S, Hasui K, Ogasawara K, Maeda T, Murai Y, Tatsuta T, Kikuchi H, Chinda D, Mikami T, Matsuzaka M, Sakuraba H, Fukuda S. Automated evaluation of colon capsule endoscopic severity of ulcerative colitis using ResNet50. PLoS One 2022; 17:e0269728. [PMID: 35687553 PMCID: PMC9187078 DOI: 10.1371/journal.pone.0269728] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2022] [Accepted: 05/26/2022] [Indexed: 12/19/2022] Open
Abstract
Capsule endoscopy has been widely used as a non-invasive diagnostic tool for small or large intestinal lesions. In recent years, automated lesion detection systems using machine learning have been devised. This study aimed to develop an automated system for evaluating the capsule endoscopic severity of ulcerative colitis along the entire length of the colon using ResNet50. Capsule endoscopy videos from patients with ulcerative colitis were collected prospectively. Each single examination video file was partitioned into four segments: the cecum and ascending colon, transverse colon, descending and sigmoid colon, and rectum. Fifty still pictures (576 × 576 pixels) were extracted from each partitioned video. Patches (128 × 128 pixels) were trimmed from each still picture at 32-pixel strides. A total of 739,021 patch images were manually classified into six categories: 0) Mayo endoscopic subscore (MES) 0, 1) MES 1, 2) MES 2, 3) MES 3, 4) inadequate quality for evaluation, and 5) ileal mucosa. ResNet50, a deep learning framework, was trained using 483,644 datasets and validated using 255,377 independent datasets. In total, 31 capsule endoscopy videos from 22 patients were collected. The accuracy rates on the training and validation datasets were 0.992 and 0.973, respectively. An automated evaluation system for the capsule endoscopic severity of ulcerative colitis was developed. This could be a useful tool for assessing topographic disease activity, thus decreasing the burden of image interpretation on endoscopists.
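The patch-trimming step described above is a standard sliding-window crop; a minimal numpy sketch with the paper's stated settings (576 × 576 stills, 128 × 128 patches, 32-pixel strides) follows. The helper name is ours, and this does not reproduce the authors' exact pipeline:

```python
import numpy as np

def extract_patches(img, patch=128, stride=32):
    """Slide a patch x patch window over a 2D image at the given stride,
    keeping only fully contained windows."""
    h, w = img.shape
    patches = []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            patches.append(img[y:y + patch, x:x + patch])
    return np.stack(patches)

still = np.zeros((576, 576))
p = extract_patches(still)
# ((576 - 128) / 32 + 1)^2 = 15^2 = 225 patches per still at these settings
```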
Affiliation(s)
- Naoki Higuchi
- Department of Gastroenterology and Hematology, Hirosaki University Graduate School of Medicine, Hirosaki, Japan
| | - Hiroto Hiraga
- Department of Gastroenterology and Hematology, Hirosaki University Graduate School of Medicine, Hirosaki, Japan
| | - Yoshihiro Sasaki
- Department of Medical Informatics, Hirosaki University Hospital, Hirosaki, Japan
| | - Noriko Hiraga
- Department of Gastroenterology and Hematology, Hirosaki University Graduate School of Medicine, Hirosaki, Japan
| | - Shohei Igarashi
- Department of Gastroenterology and Hematology, Hirosaki University Graduate School of Medicine, Hirosaki, Japan
| | - Keisuke Hasui
- Department of Gastroenterology and Hematology, Hirosaki University Graduate School of Medicine, Hirosaki, Japan
| | - Kohei Ogasawara
- Department of Gastroenterology and Hematology, Hirosaki University Graduate School of Medicine, Hirosaki, Japan
| | - Takato Maeda
- Department of Gastroenterology and Hematology, Hirosaki University Graduate School of Medicine, Hirosaki, Japan
| | - Yasuhisa Murai
- Department of Gastroenterology and Hematology, Hirosaki University Graduate School of Medicine, Hirosaki, Japan
| | - Tetsuya Tatsuta
- Department of Gastroenterology and Hematology, Hirosaki University Graduate School of Medicine, Hirosaki, Japan
| | - Hidezumi Kikuchi
- Department of Gastroenterology and Hematology, Hirosaki University Graduate School of Medicine, Hirosaki, Japan
| | - Daisuke Chinda
- Department of Gastroenterology and Hematology, Hirosaki University Graduate School of Medicine, Hirosaki, Japan
| | - Tatsuya Mikami
- Department of Gastroenterology and Hematology, Hirosaki University Graduate School of Medicine, Hirosaki, Japan
| | - Masashi Matsuzaka
- Department of Medical Informatics, Hirosaki University Hospital, Hirosaki, Japan
| | - Hirotake Sakuraba
- Department of Gastroenterology and Hematology, Hirosaki University Graduate School of Medicine, Hirosaki, Japan
| | - Shinsaku Fukuda
- Department of Gastroenterology and Hematology, Hirosaki University Graduate School of Medicine, Hirosaki, Japan
| |
|
18
|
Fujita H, Wakiya T, Ishido K, Kimura N, Nagase H, Kanda T, Matsuzaka M, Sasaki Y, Hakamada K. Differential diagnoses of gallbladder tumors using CT-based deep learning. Ann Gastroenterol Surg 2022; 6:823-832. [PMID: 36338581 PMCID: PMC9628252 DOI: 10.1002/ags3.12589] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/13/2022] [Accepted: 05/29/2022] [Indexed: 11/08/2022] Open
Abstract
Background The differential diagnosis between gallbladder cancer (GBC) and xanthogranulomatous cholecystitis (XGC) remains quite challenging, and can possibly lead to improper surgery. This study aimed to distinguish between XGC and GBC by combining computed tomography (CT) images and deep learning (DL) to maximize the therapeutic success of surgery. Methods We collected a dataset, including preoperative CT images, from 28 cases of GBC and 21 XGC patients undergoing surgery at our facility. It was subdivided into training and validation (n = 40), and test (n = 9) datasets. We built a CT patch‐based discriminating model using a residual convolutional neural network and employed 5‐fold cross‐validation. The discriminating performance of the model was analyzed in the test dataset. Results Of the 40 patients in the training dataset, GBC and XGC were observed in 21 (52.5%), and 19 (47.5%) patients, respectively. A total of 61 126 patches were extracted from the 40 patients. In the validation dataset, the average sensitivity, specificity, and accuracy were 98.8%, 98.0%, and 98.5%, respectively. Furthermore, the area under the receiver operating characteristic curve (AUC) was 0.9985. In the test dataset, which included 11 738 patches, the discriminating accuracy for GBC patients after neoadjuvant chemotherapy (NAC) (n = 3) was insufficient (61.8%). However, the discriminating model demonstrated high accuracy (98.2%) and AUC (0.9893) for cases other than those receiving NAC. Conclusion Our CT‐based DL model exhibited high discriminating performance in patients with GBC and XGC. Our study proposes a novel concept for selecting the appropriate procedure and avoiding unnecessary invasive measures.
Affiliation(s)
- Hiroaki Fujita
- Department of Gastroenterological Surgery Hirosaki University Graduate School of Medicine Hirosaki Japan
| | - Taiichi Wakiya
- Department of Gastroenterological Surgery Hirosaki University Graduate School of Medicine Hirosaki Japan
| | - Keinosuke Ishido
- Department of Gastroenterological Surgery Hirosaki University Graduate School of Medicine Hirosaki Japan
| | - Norihisa Kimura
- Department of Gastroenterological Surgery Hirosaki University Graduate School of Medicine Hirosaki Japan
| | - Hayato Nagase
- Department of Gastroenterological Surgery Hirosaki University Graduate School of Medicine Hirosaki Japan
| | - Taishu Kanda
- Department of Gastroenterological Surgery Hirosaki University Graduate School of Medicine Hirosaki Japan
| | - Masashi Matsuzaka
- Department of Medical Informatics Hirosaki University Hospital Hirosaki Japan
| | - Yoshihiro Sasaki
- Department of Medical Informatics Hirosaki University Hospital Hirosaki Japan
| | - Kenichi Hakamada
- Department of Gastroenterological Surgery Hirosaki University Graduate School of Medicine Hirosaki Japan
| |
|
19
|
Renna F, Martins M, Neto A, Cunha A, Libânio D, Dinis-Ribeiro M, Coimbra M. Artificial Intelligence for Upper Gastrointestinal Endoscopy: A Roadmap from Technology Development to Clinical Practice. Diagnostics (Basel) 2022; 12:diagnostics12051278. [PMID: 35626433 PMCID: PMC9141387 DOI: 10.3390/diagnostics12051278] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2022] [Revised: 05/14/2022] [Accepted: 05/18/2022] [Indexed: 02/05/2023] Open
Abstract
Stomach cancer is the third deadliest type of cancer in the world (0.86 million deaths in 2017). By 2035, a 20% increase in both incidence and mortality is expected due to demographic effects if no interventions are made. Upper GI endoscopy (UGIE) plays a paramount role in early diagnosis and, therefore, improved survival rates. On the other hand, human and technical factors can contribute to misdiagnosis while performing UGIE. In this scenario, artificial intelligence (AI) has recently shown its potential in compensating for the pitfalls of UGIE, by leveraging deep learning architectures able to efficiently recognize endoscopic patterns from UGIE video data. This work presents a review of the current state-of-the-art algorithms in the application of AI to gastroscopy. It focuses specifically on the twofold task of assuring exam completeness (i.e., detecting the presence of blind spots) and assisting in the detection and characterization of clinical findings, both gastric precancerous conditions and neoplastic lesion changes. Early and promising results have already been obtained using well-known deep learning architectures for computer vision, but many algorithmic challenges remain in achieving the vision of AI-assisted UGIE. Future challenges in the roadmap for the effective integration of AI tools within UGIE clinical practice are discussed, namely the adoption of more robust deep learning architectures and methods able to embed domain knowledge into image/video classifiers, as well as the availability of large, annotated datasets.
Affiliation(s)
- Francesco Renna
- Instituto de Engenharia de Sistemas e Computadores, Tecnologia e Ciência, 3200-465 Porto, Portugal; (M.M.); (A.N.); (A.C.); (M.C.)
- Faculdade de Ciências, Universidade do Porto, 4169-007 Porto, Portugal
| | - Miguel Martins
- Instituto de Engenharia de Sistemas e Computadores, Tecnologia e Ciência, 3200-465 Porto, Portugal; (M.M.); (A.N.); (A.C.); (M.C.)
- Faculdade de Ciências, Universidade do Porto, 4169-007 Porto, Portugal
| | - Alexandre Neto
- Instituto de Engenharia de Sistemas e Computadores, Tecnologia e Ciência, 3200-465 Porto, Portugal; (M.M.); (A.N.); (A.C.); (M.C.)
- Escola de Ciências e Tecnologia, Universidade de Trás-os-Montes e Alto Douro, Quinta de Prados, 5001-801 Vila Real, Portugal
| | - António Cunha
- Instituto de Engenharia de Sistemas e Computadores, Tecnologia e Ciência, 3200-465 Porto, Portugal; (M.M.); (A.N.); (A.C.); (M.C.)
- Escola de Ciências e Tecnologia, Universidade de Trás-os-Montes e Alto Douro, Quinta de Prados, 5001-801 Vila Real, Portugal
| | - Diogo Libânio
- Departamento de Ciências da Informação e da Decisão em Saúde/Centro de Investigação em Tecnologias e Serviços de Saúde (CIDES/CINTESIS), Faculdade de Medicina, Universidade do Porto, 4200-319 Porto, Portugal; (D.L.); (M.D.-R.)
| | - Mário Dinis-Ribeiro
- Departamento de Ciências da Informação e da Decisão em Saúde/Centro de Investigação em Tecnologias e Serviços de Saúde (CIDES/CINTESIS), Faculdade de Medicina, Universidade do Porto, 4200-319 Porto, Portugal; (D.L.); (M.D.-R.)
| | - Miguel Coimbra
- Instituto de Engenharia de Sistemas e Computadores, Tecnologia e Ciência, 3200-465 Porto, Portugal; (M.M.); (A.N.); (A.C.); (M.C.)
- Faculdade de Ciências, Universidade do Porto, 4169-007 Porto, Portugal
| |
|
20
|
Wakiya T, Ishido K, Kimura N, Nagase H, Kanda T, Ichiyama S, Soma K, Matsuzaka M, Sasaki Y, Kubota S, Fujita H, Sawano T, Umehara Y, Wakasa Y, Toyoki Y, Hakamada K. CT-based deep learning enables early postoperative recurrence prediction for intrahepatic cholangiocarcinoma. Sci Rep 2022; 12:8428. [PMID: 35590089 PMCID: PMC9120508 DOI: 10.1038/s41598-022-12604-8] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2022] [Accepted: 05/05/2022] [Indexed: 01/06/2023] Open
Abstract
Preoperatively accurate evaluation of risk for early postoperative recurrence contributes to maximizing the therapeutic success for intrahepatic cholangiocarcinoma (iCCA) patients. This study aimed to investigate the potential of deep learning (DL) algorithms for predicting postoperative early recurrence through the use of preoperative images. We collected the dataset, including preoperative plain computed tomography (CT) images, from 41 patients undergoing curative surgery for iCCA at multiple institutions. We built a CT patch-based predictive model using a residual convolutional neural network and used fivefold cross-validation. The prediction accuracy of the model was analyzed. We defined early recurrence as recurrence within a year after surgical resection. Of the 41 patients, early recurrence was observed in 20 (48.8%). A total of 71,081 patches were extracted from the entire segmented tumor area of each patient. The average accuracy of the ResNet model for predicting early recurrence was 98.2% for the training dataset. In the validation dataset, the average sensitivity, specificity, and accuracy were 97.8%, 94.0%, and 96.5%, respectively. Furthermore, the area under the receiver operating characteristic curve was 0.994. Our CT-based DL model exhibited high predictive performance in projecting postoperative early recurrence, proposing a novel insight into iCCA management.
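Patch-based models like the one above are usually cross-validated at the patient level, so that patches from one patient never appear in both training and validation folds; the abstract does not state this explicitly, so the helper below (names and fold assignment are ours) is a sketch of that assumed setup:

```python
import numpy as np

def patient_level_folds(patient_ids, n_folds=5, seed=0):
    """Assign whole patients (not individual patches) to folds, so patches
    from one patient never leak across the train/validation split."""
    rng = np.random.default_rng(seed)
    patients = np.unique(patient_ids)
    rng.shuffle(patients)
    fold_of_patient = {p: i % n_folds for i, p in enumerate(patients)}
    return np.array([fold_of_patient[p] for p in patient_ids])

# 41 patients with a varying number of patches each, as in the iCCA study
rng = np.random.default_rng(1)
ids = np.repeat(np.arange(41), rng.integers(100, 200, size=41))
folds = patient_level_folds(ids)
val_patients = set(ids[folds == 0])       # patients held out in fold 0
train_patients = set(ids[folds != 0])     # patients used for training in fold 0
```

Splitting at the patch level instead would inflate validation accuracy, since near-identical patches from the same tumor would land on both sides of the split.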
Affiliation(s)
- Taiichi Wakiya
- Department of Gastroenterological Surgery, Hirosaki University Graduate School of Medicine, 5 Zaifu-cho, Hirosaki City, Aomori, 036-8562, Japan.
| | - Keinosuke Ishido
- Department of Gastroenterological Surgery, Hirosaki University Graduate School of Medicine, 5 Zaifu-cho, Hirosaki City, Aomori, 036-8562, Japan
| | - Norihisa Kimura
- Department of Gastroenterological Surgery, Hirosaki University Graduate School of Medicine, 5 Zaifu-cho, Hirosaki City, Aomori, 036-8562, Japan
| | - Hayato Nagase
- Department of Gastroenterological Surgery, Hirosaki University Graduate School of Medicine, 5 Zaifu-cho, Hirosaki City, Aomori, 036-8562, Japan
| | - Taishu Kanda
- Department of Gastroenterological Surgery, Hirosaki University Graduate School of Medicine, 5 Zaifu-cho, Hirosaki City, Aomori, 036-8562, Japan
| | - Sotaro Ichiyama
- Hirosaki University School of Medicine, Hirosaki City, Aomori, 036-8562, Japan
| | - Kenji Soma
- Hirosaki University School of Medicine, Hirosaki City, Aomori, 036-8562, Japan
| | - Masashi Matsuzaka
- Department of Medical Informatics, Hirosaki University Hospital, Hirosaki City, Aomori, 036-8562, Japan
| | - Yoshihiro Sasaki
- Department of Medical Informatics, Hirosaki University Hospital, Hirosaki City, Aomori, 036-8562, Japan
| | - Shunsuke Kubota
- Department of Gastroenterological Surgery, Hirosaki University Graduate School of Medicine, 5 Zaifu-cho, Hirosaki City, Aomori, 036-8562, Japan
| | - Hiroaki Fujita
- Department of Gastroenterological Surgery, Hirosaki University Graduate School of Medicine, 5 Zaifu-cho, Hirosaki City, Aomori, 036-8562, Japan
| | - Takeyuki Sawano
- Department of Surgery, Aomori Prefectural Central Hospital, Aomori City, Aomori, 030-8553, Japan
| | - Yutaka Umehara
- Department of Surgery, Aomori Prefectural Central Hospital, Aomori City, Aomori, 030-8553, Japan
| | - Yusuke Wakasa
- Department of Surgery, Aomori City Hospital, Aomori City, Aomori, 0300821, Japan
| | - Yoshikazu Toyoki
- Department of Surgery, Aomori City Hospital, Aomori City, Aomori, 0300821, Japan
| | - Kenichi Hakamada
- Department of Gastroenterological Surgery, Hirosaki University Graduate School of Medicine, 5 Zaifu-cho, Hirosaki City, Aomori, 036-8562, Japan
| |
|
21
|
Application of Unsupervised Transfer Technique Based on Deep Learning Model in Physical Training. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:8679221. [PMID: 35463226 PMCID: PMC9023208 DOI: 10.1155/2022/8679221] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/16/2021] [Revised: 01/25/2022] [Accepted: 02/02/2022] [Indexed: 11/17/2022]
Abstract
This research studies the standardization and scientific evaluation of physical training actions. A stacked denoising autoencoder (SDAE) combined with a BiLSTM deep network model (SDAL-DNM, a training-action model) and an unsupervised transfer model are used to study the action-recognition problem in physical training in depth. Initially, the physical training action discrimination model adopted here combines a stacked noise-reduction self-encoder with a bidirectional deep network model. This model collects data for five actions in physical training and further analyzes the importance of action standardization for physical training. The SDAL-DNM implemented here fully integrates the advantages of SDAE and BiLSTM. Finally, the unsupervised transfer model adopted here is based on SDAL-DNM deep learning (DL). The movement data of the physical training population are collected, and the unsupervised transfer model is then trained. According to the movement characteristics of physical training, the data difference between trainers is calculated so that the model can continuously adapt to the actions of each trainer and effectively distinguish the training actions. The research shows that, before and after unsupervised learning, the average decline of the model used is 1.69%, while the average decline of the extreme learning machine (ELM) is 5.5%. The conclusion is that the unsupervised transfer model can improve the discrimination accuracy of physical training actions and provide theoretical support for effectively correcting mistakes in physical training actions.
|
22
|
Tang S, Yu X, Cheang CF, Hu Z, Fang T, Choi IC, Yu HH. Diagnosis of Esophageal Lesions by Multi-Classification and Segmentation Using an Improved Multi-Task Deep Learning Model. SENSORS (BASEL, SWITZERLAND) 2022; 22:s22041492. [PMID: 35214396 PMCID: PMC8876234 DOI: 10.3390/s22041492] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/16/2021] [Revised: 01/26/2022] [Accepted: 02/08/2022] [Indexed: 05/03/2023]
Abstract
It is challenging for endoscopists to accurately detect esophageal lesions during gastrointestinal endoscopic screening due to visual similarities among different lesions in terms of shape, size, and texture across patients. Endoscopists also handle a heavy daily caseload of esophageal lesions, hence the need for a computer-aided diagnostic tool that classifies and segments lesions in endoscopic images to reduce their burden. Therefore, we propose a multi-task classification and segmentation (MTCS) model, comprising the Esophageal Lesions Classification Network (ELCNet) and the Esophageal Lesions Segmentation Network (ELSNet). The ELCNet was used to classify types of esophageal lesions, and the ELSNet was used to identify lesion regions. We created a dataset by collecting 805 esophageal images from 255 patients and 198 images from 64 patients to train and evaluate the MTCS model. Compared with other methods, the proposed model not only achieved high accuracy (93.43%) in classification but also achieved a Dice similarity coefficient of 77.84% in segmentation. In conclusion, the MTCS model can boost the performance of endoscopists in the detection of esophageal lesions, as it can accurately multi-classify and segment lesions, and it is a potential assistant for endoscopists to reduce the risk of oversight.
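The Dice similarity coefficient reported for segmentation measures overlap between the predicted and ground-truth masks; a minimal numpy sketch (our helper name, standard definition):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)

a = np.zeros((4, 4), int); a[:2, :] = 1        # 8 foreground pixels
b = np.zeros((4, 4), int); b[1:3, :] = 1       # 8 foreground pixels, 4 overlapping
# dice = 2 * 4 / (8 + 8) = 0.5
```

A coefficient of 1.0 means perfect overlap; the paper's 77.84% indicates substantial but imperfect agreement with the annotated lesion regions.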
Affiliation(s)
- Suigu Tang
- Faculty of Information Technology, Macau University of Science and Technology, Macau 999078, China; (S.T.); (X.Y.); (Z.H.); (T.F.)
| | - Xiaoyuan Yu
- Faculty of Information Technology, Macau University of Science and Technology, Macau 999078, China; (S.T.); (X.Y.); (Z.H.); (T.F.)
| | - Chak-Fong Cheang
- Faculty of Information Technology, Macau University of Science and Technology, Macau 999078, China; (S.T.); (X.Y.); (Z.H.); (T.F.)
| | - Zeming Hu
- Faculty of Information Technology, Macau University of Science and Technology, Macau 999078, China; (S.T.); (X.Y.); (Z.H.); (T.F.)
| | - Tong Fang
- Faculty of Information Technology, Macau University of Science and Technology, Macau 999078, China; (S.T.); (X.Y.); (Z.H.); (T.F.)
| | - I-Cheong Choi
- Kiang Wu Hospital, Macau 999078, China; (I.-C.C.); (H.-H.Y.)
| | - Hon-Ho Yu
- Kiang Wu Hospital, Macau 999078, China; (I.-C.C.); (H.-H.Y.)
| |
|
23
|
Jin Z, Gan T, Wang P, Fu Z, Zhang C, Yan Q, Zheng X, Liang X, Ye X. Deep learning for gastroscopic images: computer-aided techniques for clinicians. Biomed Eng Online 2022; 21:12. [PMID: 35148764 PMCID: PMC8832738 DOI: 10.1186/s12938-022-00979-8] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2021] [Accepted: 01/21/2022] [Indexed: 12/13/2022] Open
Abstract
Gastric disease is a major health problem worldwide. Gastroscopy is the main method and the gold standard used to screen and diagnose many gastric diseases. However, several factors, such as the experience and fatigue of endoscopists, limit its performance. With recent advancements in deep learning, an increasing number of studies have used this technology to provide on-site assistance during real-time gastroscopy. This review summarizes the latest publications on deep learning applications in overcoming disease-related and nondisease-related gastroscopy challenges. The former aims to help endoscopists find lesions and characterize them when they appear in the field of view of the gastroscope. The purpose of the latter is to avoid missing lesions due to poor-quality frames, incomplete inspection coverage of gastroscopy, etc., thus improving the quality of gastroscopy. This study aims to provide technical guidance and a comprehensive perspective for physicians to understand deep learning technology in gastroscopy. Some key issues to be handled before the clinical application of deep learning technology and the future direction of disease-related and nondisease-related applications of deep learning to gastroscopy are discussed herein.
Affiliation(s)
- Ziyi Jin
- Biosensor National Special Laboratory, Key Laboratory of Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, 310027, People's Republic of China
| | - Tianyuan Gan
- Biosensor National Special Laboratory, Key Laboratory of Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, 310027, People's Republic of China
| | - Peng Wang
- Biosensor National Special Laboratory, Key Laboratory of Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, 310027, People's Republic of China
| | - Zuoming Fu
- Biosensor National Special Laboratory, Key Laboratory of Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, 310027, People's Republic of China
| | - Chongan Zhang
- Biosensor National Special Laboratory, Key Laboratory of Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, 310027, People's Republic of China
| | - Qinglai Yan
- Hangzhou Center for Medical Device Quality Supervision and Testing, CFDA, Hangzhou, 310000, People's Republic of China
| | - Xueyong Zheng
- Department of General Surgery, Sir Run-Run Shaw Hospital, School of Medicine, Zhejiang University, Hangzhou, 310016, People's Republic of China
| | - Xiao Liang
- Department of General Surgery, Sir Run-Run Shaw Hospital, School of Medicine, Zhejiang University, Hangzhou, 310016, People's Republic of China
| | - Xuesong Ye
- Biosensor National Special Laboratory, Key Laboratory of Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, 310027, People's Republic of China.
| |
|
24
|
Qin C, Hu W, Wang X, Ma X. Application of Artificial Intelligence in Diagnosis of Craniopharyngioma. Front Neurol 2022; 12:752119. [PMID: 35069406 PMCID: PMC8770750 DOI: 10.3389/fneur.2021.752119] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2021] [Accepted: 11/12/2021] [Indexed: 12/24/2022] Open
Abstract
Craniopharyngioma is a congenital brain tumor whose clinical characteristics include hypothalamic-pituitary dysfunction, increased intracranial pressure, and visual field defects, among other injuries. Its clinical diagnosis depends mainly on radiological examinations such as computed tomography and magnetic resonance imaging. However, manually assessing large numbers of radiological images is challenging, and the diagnostic result depends heavily on the doctor's experience. The development of artificial intelligence has brought about a great transformation in the clinical diagnosis of craniopharyngioma. This study reviewed the application of artificial intelligence technology to the clinical diagnosis of craniopharyngioma, covering differential classification, prediction of tissue invasion and gene mutation, and prognosis prediction. Based on this review, technical routes for intelligent diagnosis using traditional machine learning models and deep learning models are proposed. Finally, regarding the limitations and prospects of artificial intelligence in craniopharyngioma diagnosis, the study discusses directions requiring attention in future research, including few-shot learning, imbalanced data sets, semi-supervised models, and multi-omics fusion.
Affiliation(s)
- Caijie Qin
- Institute of Information Engineering, Sanming University, Sanming, China
- Wenxing Hu
- University of New South Wales, Sydney, NSW, Australia
- Xinsheng Wang
- School of Information Science and Engineering, Harbin Institute of Technology at Weihai, Weihai, China
- Xibo Ma
- CBSR & NLPR, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China

25
Liu F, Liu X, Yin C, Wang H. Nursing Value Analysis and Risk Assessment of Acute Gastrointestinal Bleeding Using Multiagent Reinforcement Learning Algorithm. Gastroenterol Res Pract 2022; 2022:7874751. [PMID: 35035476 PMCID: PMC8758331 DOI: 10.1155/2022/7874751] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Received: 11/03/2021] [Revised: 11/29/2021] [Accepted: 12/06/2021] [Indexed: 11/23/2022] Open
Abstract
Gastrointestinal bleeding (GIB) indicates an issue in the digestive system. Blood may appear in feces or vomit, but it is not always visible, even when it makes the stool look dark or tarry. The bleeding can range in severity from mild to severe and can be life-threatening. Nursing value analysis and risk assessment are essential for patients with GIB, but existing risk assessment techniques perform inconsistently; scoring systems are ineffective for evaluating risk in these patients, and machine learning (ML) has the potential to improve risk evaluation. We therefore present a machine learning-based nursing value analysis and risk assessment framework that models the risk of hospital-based intervention or mortality in individuals with GIB and compare it with other rating systems. First, the dataset is collected and preprocessed. Features are extracted using local binary patterns (LBP), and classification is performed with a fuzzy support vector machine (FSVM) classifier. For risk assessment and nursing value analysis, ML-based prediction using a multiagent reinforcement learning algorithm is employed, and a spider monkey optimization (SMO) algorithm is used to improve the performance of the proposed system. Performance metrics including classification accuracy, area under the receiver-operating characteristic curve (AUROC), sensitivity, specificity, and precision are analyzed and compared with traditional approaches. The proposed technique showed good to excellent prognostic efficacy in individuals with GIB and outperformed traditional models.
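The LBP-plus-classifier pipeline summarized above can be illustrated with a short sketch. This is not the authors' code: it is a minimal NumPy implementation of the basic 3×3 local binary pattern, producing the normalized 256-bin histogram that a downstream classifier such as an FSVM would consume; the toy 8×8 image is made up.

```python
import numpy as np

def lbp_image(img):
    """Basic 3x3 local binary pattern: compare each interior pixel with its
    8 neighbours and pack the comparison bits into one byte per pixel."""
    h, w = img.shape
    center = img[1:-1, 1:-1]
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= (neigh >= center).astype(np.uint8) << bit
    return out

def lbp_histogram(img):
    """Normalized 256-bin histogram of LBP codes: the feature vector."""
    codes = lbp_image(img)
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()

img = np.arange(64, dtype=np.uint8).reshape(8, 8)  # toy monotone "image"
feat = lbp_histogram(img)
print(feat.shape)  # (256,)
```

On this monotone test image every interior pixel yields the same code (only the neighbours below and to the right are larger), so the histogram concentrates in a single bin; real images would spread mass across many bins.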
Affiliation(s)
- Fang Liu
- Neurosurgery Department, The Affiliated Yantai Yuhuangding Hospital of Qingdao University, China
- Xiaoli Liu
- Department of Infection Management, Dongying People's Hospital, China
- Changyou Yin
- Neurosurgery Department, The Affiliated Yantai Yuhuangding Hospital of Qingdao University, China
- Hongrong Wang
- Emergency Department, The Affiliated Yantai Yuhuangding Hospital of Qingdao University, China

26
Cengil E, Çınar A. The effect of deep feature concatenation in the classification problem: An approach on COVID-19 disease detection. International Journal of Imaging Systems and Technology 2022; 32:26-40. [PMID: 34898851 PMCID: PMC8653237 DOI: 10.1002/ima.22659] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Received: 04/08/2021] [Revised: 08/04/2021] [Accepted: 09/16/2021] [Indexed: 06/01/2023]
Abstract
In image classification, the most important step is obtaining useful features. Convolutional neural networks learn feature extraction automatically during training, and classification is carried out on the resulting features; obtaining good features is therefore critical to high classification success. This article focuses on providing effective features to enhance classification performance, taking feature concatenation as its basis. First, features are extracted by transfer learning from the AlexNet, Xception, NASNetLarge, and EfficientNet-B0 architectures, which are known to perform well on classification problems. Concatenating these features creates a new feature set, which is then fed to various classification algorithms. The proposed pipeline is applied to three datasets for COVID-19 disease detection: the "COVID-19 Image Dataset," the "COVID-19 Pneumonia Normal Chest X-ray (PA) Dataset," and the "COVID-19 Radiography Database." All datasets contain three classes (normal, COVID, and pneumonia). The best classification accuracies on the three datasets are 98.8%, 95.9%, and 99.6%, respectively; sensitivity, precision, specificity, and F1-score values are reported as well. The contribution of the paper is as follows: COVID-19 resembles other lung infections, which makes diagnosis difficult, and the virus's rapid spread makes detecting cases as early as possible essential. There has been growing interest in computer-aided deep learning models to meet these needs, and the proposed method is beneficial because it provides high accuracy.
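The feature-concatenation step itself is simple; the sketch below shows it with NumPy, using frozen random projections as stand-ins for the pretrained backbones (AlexNet, Xception, etc.), since the point is only the fusion of two embeddings into one feature set. All shapes and names here are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(images, projection):
    """Stand-in 'backbone': flatten each image and apply a frozen linear map,
    playing the role of a pretrained CNN's penultimate-layer embedding."""
    flat = images.reshape(len(images), -1)
    return flat @ projection

images = rng.normal(size=(4, 32, 32))        # 4 dummy grayscale images
proj_a = rng.normal(size=(32 * 32, 128))     # backbone A -> 128-d features
proj_b = rng.normal(size=(32 * 32, 256))     # backbone B -> 256-d features

feats_a = extract_features(images, proj_a)   # (4, 128)
feats_b = extract_features(images, proj_b)   # (4, 256)

# The concatenation step: one enriched feature set per image, which is then
# handed to a conventional classifier (SVM, k-NN, ...).
fused = np.concatenate([feats_a, feats_b], axis=1)
print(fused.shape)  # (4, 384)
```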
Affiliation(s)
- Emine Cengil
- Department of Computer Engineering, Faculty of Engineering, Firat University, Elazig, Turkey
- Ahmet Çınar
- Department of Computer Engineering, Faculty of Engineering, Firat University, Elazig, Turkey

27
Yu X, Tang S, Cheang CF, Yu HH, Choi IC. Multi-Task Model for Esophageal Lesion Analysis Using Endoscopic Images: Classification with Image Retrieval and Segmentation with Attention. Sensors 2021; 22:283. [PMID: 35009825 PMCID: PMC8749873 DOI: 10.3390/s22010283] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Received: 11/29/2021] [Revised: 12/24/2021] [Accepted: 12/27/2021] [Indexed: 12/12/2022]
Abstract
The automatic analysis of endoscopic images to assist endoscopists in accurately identifying the types and locations of esophageal lesions remains a challenge. In this paper, we propose a novel multi-task deep learning model for automatic diagnosis that does not simply replace the endoscopist in decision making: endoscopists are expected to correct false predictions of the diagnosis system when additional supporting information is provided. To help endoscopists improve diagnostic accuracy in identifying lesion types, an image retrieval module is added to the classification task to provide an additional confidence level for the predicted types of esophageal lesions. In addition, a mutual attention module is added to the segmentation task to improve its performance in determining lesion locations. The proposed model is evaluated and compared with other deep learning models on a dataset of 1003 endoscopic images, comprising 290 esophageal cancer, 473 esophagitis, and 240 normal images. The experimental results show promising performance, with a high accuracy of 96.76% for classification and a Dice coefficient of 82.47% for segmentation. The proposed multi-task deep learning model can thus be an effective tool to help endoscopists judge esophageal lesions.
Affiliation(s)
- Xiaoyuan Yu
- Faculty of Information Technology, Macau University of Science and Technology, Taipa, Macau
- Suigu Tang
- Faculty of Information Technology, Macau University of Science and Technology, Taipa, Macau
- Chak Fong Cheang
- Faculty of Information Technology, Macau University of Science and Technology, Taipa, Macau
- Correspondence: (C.F.C.); (H.H.Y.)
- Hon Ho Yu
- Kiang Wu Hospital, Santo António, Macau
- Correspondence: (C.F.C.); (H.H.Y.)

28
Gastrointestinal Tract Disease Classification from Wireless Endoscopy Images Using Pretrained Deep Learning Model. Computational and Mathematical Methods in Medicine 2021; 2021:5940433. [PMID: 34545292 PMCID: PMC8449743 DOI: 10.1155/2021/5940433] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Received: 05/10/2021] [Revised: 07/03/2021] [Accepted: 08/16/2021] [Indexed: 12/28/2022]
Abstract
Wireless capsule endoscopy is a noninvasive wireless imaging technology that has become increasingly popular in recent years. One of its major drawbacks is that it generates a large number of images that must be analyzed by medical personnel, which is time-consuming. Various research groups have proposed image processing and machine learning techniques to classify gastrointestinal tract diseases. In this research, traditional image processing algorithms and a data augmentation technique are combined with adjusted pretrained deep convolutional neural networks to classify diseases of the gastrointestinal tract from wireless endoscopy images. We use the pretrained convolutional neural network (CNN) models VGG16, ResNet-18, and GoogLeNet, with their fully connected and output layers adjusted for the task. The proposed models are validated on a dataset of 6702 images across 8 classes. The VGG16 model achieved the best results, with 96.33% accuracy, 96.37% recall, 96.5% precision, and 96.5% F1-measure. Compared to other state-of-the-art models, the VGG16 model also has the highest Matthews correlation coefficient (0.95) and Cohen's kappa score (0.96).
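The "adjusted fully connected and output layers" recipe — freeze a pretrained backbone, attach a new output layer sized for the 8 classes, and train only that layer — can be sketched framework-free. Everything below (the random-projection "backbone", the batch size, learning rate, and step count) is a hypothetical stand-in for the paper's VGG16/ResNet-18/GoogLeNet fine-tuning, not a reproduction of it.

```python
import numpy as np

rng = np.random.default_rng(1)
n_classes = 8                                 # the 8 GI disease classes

# Frozen stand-in for a pretrained backbone's convolutional stack.
W_frozen = rng.normal(size=(28 * 28, 64))
def backbone(x):
    feats = np.maximum(x.reshape(len(x), -1) @ W_frozen, 0.0)  # ReLU
    return feats / np.linalg.norm(feats, axis=1, keepdims=True)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

x = rng.normal(size=(32, 28, 28))             # dummy image batch
y = rng.integers(0, n_classes, size=32)       # dummy labels

feats = backbone(x)                           # frozen features: computed once
W_head = np.zeros((64, n_classes))            # new, trainable output layer

for _ in range(300):                          # train only the new head
    grad_logits = softmax(feats @ W_head)
    grad_logits[np.arange(32), y] -= 1.0      # d(cross-entropy)/d(logits)
    W_head -= 0.5 * (feats.T @ grad_logits) / 32

probs = softmax(feats @ W_head)               # class probabilities per image
```

Because the backbone is frozen, its features are computed once and only the small head matrix is updated, which is what makes transfer learning cheap on modest datasets like the 6702-image one above.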
29
Han J, Wang D, Li Z, Dey N, Crespo RG, Shi F. Plantar pressure image classification employing residual-network model-based conditional generative adversarial networks: a comparison of normal, planus, and talipes equinovarus feet. Soft Comput 2021. [DOI: 10.1007/s00500-021-06073-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 10/20/2022]
30
Yang Y, Li YX, Yao RQ, Du XH, Ren C. Artificial intelligence in small intestinal diseases: Application and prospects. World J Gastroenterol 2021; 27:3734-3747. [PMID: 34321840 PMCID: PMC8291013 DOI: 10.3748/wjg.v27.i25.3734] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Received: 01/25/2021] [Revised: 04/09/2021] [Accepted: 05/08/2021] [Indexed: 02/06/2023] Open
Abstract
The small intestine lies in the middle of the gastrointestinal tract, which makes small intestinal diseases more difficult to diagnose than other gastrointestinal diseases. With its efficient learning capacity and computational power, however, artificial intelligence is now extensively applied to small intestinal diseases, playing an important role in auxiliary diagnosis and prognosis prediction based on capsule endoscopy and other examination methods; this improves the accuracy of diagnosis and prediction and reduces the workload of doctors. In this review, a comprehensive search was performed for articles published up to October 2020 in PubMed and other databases. The application status of artificial intelligence in small intestinal diseases is systematically introduced, and the challenges and prospects in this field are analyzed.
Affiliation(s)
- Yu Yang
- Department of General Surgery, Chinese People’s Liberation Army General Hospital, Beijing 100853, China
- Yu-Xuan Li
- Department of General Surgery, Chinese People’s Liberation Army General Hospital, Beijing 100853, China
- Ren-Qi Yao
- Trauma Research Center, The Fourth Medical Center and Medical Innovation Research Division of the Chinese People’s Liberation Army General Hospital, Beijing 100048, China
- Department of Burn Surgery, Changhai Hospital, Naval Medical University, Shanghai 200433, China
- Xiao-Hui Du
- Department of General Surgery, Chinese People’s Liberation Army General Hospital, Beijing 100853, China
- Chao Ren
- Trauma Research Center, The Fourth Medical Center and Medical Innovation Research Division of the Chinese People’s Liberation Army General Hospital, Beijing 100048, China

31
Classification of COVID-19 chest X-Ray and CT images using a type of dynamic CNN modification method. Comput Biol Med 2021; 134:104425. [PMID: 33971427 PMCID: PMC8081579 DOI: 10.1016/j.compbiomed.2021.104425] [Citation(s) in RCA: 36] [Impact Index Per Article: 12.0] [Received: 11/24/2020] [Revised: 04/17/2021] [Accepted: 04/17/2021] [Indexed: 12/16/2022]
Abstract
Understanding and classifying Chest X-Ray (CXR) and computerised tomography (CT) images are of great significance for COVID-19 diagnosis. The existing research on the classification for COVID-19 cases faces the challenges of data imbalance, insufficient generalisability, the lack of comparative study, etc. To address these problems, this paper proposes a type of modified MobileNet to classify COVID-19 CXR images and a modified ResNet architecture for CT image classification. In particular, a modification method of convolutional neural networks (CNN) is designed to solve the gradient vanishing problem and improve the classification performance through dynamically combining features in different layers of a CNN. The modified MobileNet is applied to the classification of COVID-19, Tuberculosis, viral pneumonia (with the exception of COVID-19), bacterial pneumonia and normal controls using CXR images. Also, the proposed modified ResNet is used for the classification of COVID-19, non-COVID-19 infections and normal controls using CT images. The results show that the proposed methods achieve 99.6% test accuracy on the five-category CXR image dataset and 99.3% test accuracy on the CT image dataset. Six advanced CNN architectures and two specific COVID-19 detection models, i.e., COVID-Net and COVIDNet-CT are used in comparative studies. Two benchmark datasets and a CXR image dataset which combines eight different CXR image sources are employed to evaluate the performance of the above models. The results show that the proposed methods outperform the comparative models in classification accuracy, sensitivity, and precision, which demonstrate their potential in computer-aided diagnosis for healthcare applications.
32
3D-semantic segmentation and classification of stomach infections using uncertainty aware deep neural networks. Complex Intell Syst 2021. [DOI: 10.1007/s40747-021-00328-7] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Indexed: 12/24/2022]
Abstract
Wireless capsule endoscopy (WCE) travels through the human body and captures video of the small bowel, and diagnosing gastrointestinal infections requires the physician to analyze every frame of that video, a tedious task. This tiresome assignment has fuelled researchers' efforts to develop automated techniques for detecting gastrointestinal infections. Segmenting stomach infections is challenging because lesion regions have low contrast and irregular shape and size. To handle this task, this work proposes a new deep semantic segmentation model for 3D segmentation of different types of stomach infections, employing DeepLabv3 with a ResNet-50 backbone. The model is trained with ground-truth masks and performs accurate pixel-wise classification in the testing phase. Because different types of stomach lesions resemble one another, accurate classification is also difficult; this is addressed by extracting deep features from the global input images using a pretrained ResNet-50 model. Furthermore, the latest advances in uncertainty estimation and model interpretability are applied to the classification of the different types of stomach infections: the classification results estimate the uncertainty attached to the vital features in the input and show how uncertainty and interpretability might be modeled in ResNet-50. The proposed model achieved prediction scores of up to 90%, authenticating the method's performance.
33
Attallah O, Sharkas M. GASTRO-CADx: a three stages framework for diagnosing gastrointestinal diseases. PeerJ Comput Sci 2021; 7:e423. [PMID: 33817058 PMCID: PMC7959662 DOI: 10.7717/peerj-cs.423] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Received: 11/23/2020] [Accepted: 02/11/2021] [Indexed: 05/04/2023]
Abstract
Gastrointestinal (GI) diseases are common illnesses that affect the GI tract. Diagnosing them is expensive, complicated, and challenging. A computer-aided diagnosis (CADx) system based on deep learning (DL) techniques could considerably lower examination costs and increase the speed and quality of diagnosis. This article therefore proposes a CADx system called Gastro-CADx to classify several GI diseases using DL techniques, in three progressive stages. Initially, four different CNNs are used as feature extractors to extract spatial features; most related DL-based work extracts spatial features only. In the second stage, the features extracted in the first stage are passed through the discrete wavelet transform (DWT) and the discrete cosine transform (DCT) to extract time-frequency and spatial-frequency features, and a feature reduction procedure is performed. Finally, in the third stage, several combinations of features are fused by concatenation to inspect the effect of feature combination on the CADx output and to select the best fused feature set. Two datasets, referred to as Dataset I and Dataset II, are used to evaluate the performance of Gastro-CADx, which achieved accuracies of 97.3% and 99.7% on them, respectively. Comparison with recent related work showed that the proposed approach classifies GI diseases with higher accuracy, so it can help reduce medical complications, death rates, and treatment costs, while helping gastroenterologists produce more accurate diagnoses in less inspection time.
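The three-stage idea — spatial features, a frequency transform with reduction, then fusion by concatenation — can be sketched for the DCT branch alone. The sizes here (512-d features, 64 retained coefficients) and the random stand-in "CNN features" are illustrative assumptions; the real system applies this to features from four CNNs and also uses a DWT branch.

```python
import numpy as np

def dct2_matrix(n):
    """Unnormalized DCT-II basis matrix: row k is cos(pi*(2j+1)*k / (2n))."""
    j = np.arange(n)
    k = j[:, None]
    return np.cos(np.pi * (2 * j + 1) * k / (2 * n))

rng = np.random.default_rng(0)
spatial = rng.normal(size=(5, 512))   # stage 1: spatial features, 5 samples

D = dct2_matrix(512)
freq = spatial @ D.T                  # stage 2: DCT of each feature vector

reduced = freq[:, :64]                # feature reduction: low-order coeffs
fused = np.concatenate([spatial, reduced], axis=1)   # stage 3: fusion
print(fused.shape)  # (5, 576)
```

Keeping only the low-order DCT coefficients is a standard energy-compaction argument: most of the signal's variance concentrates there, so the reduced vector preserves information while shrinking the fused feature set handed to the classifier.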
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communication Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria, Egypt
- Maha Sharkas
- Department of Electronics and Communication Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria, Egypt

34
N.S. A, D. S, S. RK. Naive Bayesian fusion based deep learning networks for multisegmented classification of fishes in aquaculture industries. Ecol Inform 2021. [DOI: 10.1016/j.ecoinf.2021.101248] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Indexed: 11/17/2022]
35
Sriporn K, Tsai CF, Tsai CE, Wang P. Analyzing Malaria Disease Using Effective Deep Learning Approach. Diagnostics (Basel) 2020; 10:744. [PMID: 32987888 PMCID: PMC7601431 DOI: 10.3390/diagnostics10100744] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Received: 08/13/2020] [Revised: 09/23/2020] [Accepted: 09/23/2020] [Indexed: 11/16/2022] Open
Abstract
Medical tools used to bolster decision-making by specialists who treat malaria include image processing equipment and computer-aided diagnostic systems, which can identify and detect malaria from images and monitor patients' symptoms, although atypical cases may need more time for assessment. This research used 7000 images to verify and analyze the Xception, Inception-V3, ResNet-50, NasNetMobile, VGG-16, and AlexNet models: prevalent convolutional neural network models for precise image classification, trained with a rotation-based method to improve performance on the training and validation datasets. Evaluation of these models for classifying malaria from thin blood smear images found that Xception, using a state-of-the-art activation function (Mish) and optimizer (Nadam), was the most effective, achieving a combined score of 99.28% across recall, accuracy, precision, and the F1 measure. A further 10% of images outside the training and testing datasets were then evaluated with this model. Notable directions for improving computer-aided diagnosis toward an optimal malaria detection approach were found, supported by a 98.86% accuracy level.
Affiliation(s)
- Krit Sriporn
- Department of Tropical Agriculture and International Cooperation, National Pingtung University of Science and Technology, Neipu, Pingtung 91201, Taiwan;
- Department of Information Technology, Suratthani Rajabhat University, Suratthani 84100, Thailand
- Cheng-Fa Tsai
- Department of Management Information Systems, National Pingtung University of Science and Technology, Pingtung 91201, Taiwan
- Correspondence: ; Tel.: +886-08-770-3202 (ext. 7906)
- Chia-En Tsai
- Department of Biochemistry and Molecular Biology, National Cheng Kung University, Tainan 70101, Taiwan
- Paohsi Wang
- Department of Food and Beverage Management, Cheng Shiu University, Kaohsiung 83347, Taiwan