1. Gowda VB, Gopalakrishna MT, Megha J, Mohankumar S. Foreground segmentation network using transposed convolutional neural networks and up sampling for multiscale feature encoding. Neural Netw 2024;170:167-175. PMID: 37984043. DOI: 10.1016/j.neunet.2023.11.015.
Abstract
Foreground segmentation algorithms aim to precisely separate moving objects from the background in various environments. However, interference from darkness, dynamic background information, and camera jitter still makes it challenging to build a reliable detection network. To address these issues, a triplet CNN and a Transposed Convolutional Neural Network (TCNN) are created by attaching a Features Pooling Module (FPM). The TCNN reduces the number of multi-scale inputs to the network by fusing features into the FPM-based Foreground Segmentation Network (FgSegNet), which extracts multi-scale features from images and builds a strong feature pooling. Additionally, an up-sampling network is added to the proposed technique to up-sample the abstract image representation so that its spatial dimensions match those of the input image. Large context and long-range dependencies among pixels are acquired by the TCNN and the segmentation mask, at multiple scales using the triplet CNN, to enhance the foreground segmentation of FgSegNet. The results clearly show that FgSegNet surpasses other state-of-the-art algorithms on the CDnet2014 dataset, with an average F-measure of 0.9804, precision of 0.9801, PWC of 0.0461, and recall of 0.9896. Moreover, FgSegNet with up-sampling achieves an F-measure of 0.9804, higher than that of FgSegNet without up-sampling.
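The transposed-convolution up-sampling this abstract describes can be illustrated with a minimal single-channel sketch. This is a generic, illustrative implementation, not the authors' FgSegNet/TCNN code; the feature-map size and kernel values are hypothetical.

```python
import numpy as np

def conv_transpose2d(x, kernel, stride=2):
    """Naive single-channel 2-D transposed convolution (no padding).

    Each input pixel scatters a kernel-sized, weighted patch into the
    output; overlapping patches are summed. The output side length is
    (n - 1) * stride + k, so stride 2 roughly doubles the resolution.
    """
    n, k = x.shape[0], kernel.shape[0]
    out = np.zeros(((n - 1) * stride + k, (n - 1) * stride + k))
    for i in range(n):
        for j in range(n):
            out[i * stride:i * stride + k,
                j * stride:j * stride + k] += x[i, j] * kernel
    return out

feat = np.random.rand(8, 8)        # an abstract 8x8 feature map
kern = np.full((2, 2), 0.25)       # weights would be learned in practice
up = conv_transpose2d(feat, kern)  # 16x16: spatial dims match a 2x-larger input
```

Stacking such layers is how a decoder grows an abstract representation back to the input image's spatial dimensions.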
Affiliation(s)
- Vishruth B Gowda
- Department of Computer Science and Engineering, SJB Institute of Technology, Bengaluru, Karnataka 560060, India; Visvesvaraya Technological University, Belagavi, Karnataka 590018, India.
- M T Gopalakrishna
- Visvesvaraya Technological University, Belagavi, Karnataka 590018, India; Department of Artificial Intelligence and Machine Learning, SJB Institute of Technology, Bengaluru, Karnataka 560060, India
- J Megha
- Department of Artificial Intelligence and Machine Learning, Ramaiah Institute of Technology, Bangalore 560054, India
- Shilpa Mohankumar
- Department of Information Science and Engineering, Bangalore Institute of Technology, Bengaluru, Karnataka 560060, India
2. Li S, Shi S, Fan Z, He X, Zhang N. Deep information-guided feature refinement network for colorectal gland segmentation. Int J Comput Assist Radiol Surg 2023;18:2319-2328. PMID: 36934367. DOI: 10.1007/s11548-023-02857-7.
Abstract
PURPOSE Reliable quantification of colorectal histopathological images rests on the precise segmentation of glands, which is challenging because glandular morphology varies widely across histological grades: malignant glands and non-gland tissues can be too similar to tell apart, and tightly connected glands are easily mis-segmented as a single gland. METHODS A deep information-guided feature refinement network is proposed to improve gland segmentation. Specifically, the backbone deepens the network structure to obtain effective features while maximizing the retained information, and a Multi-Scale Fusion module is proposed to increase the receptive field. In addition, to segment dense glands individually, a Multi-Scale Edge-Refined module is designed to strengthen the boundaries of glands. RESULTS Comparative experiments against eight recently proposed deep learning methods demonstrate that the proposed network has better overall performance and is more competitive on Test B. The F1 scores on Test A and Test B are 0.917 and 0.876, respectively; the object-level Dice scores are 0.921 and 0.884; and the object-level Hausdorff distances are 43.428 and 87.132. CONCLUSION The proposed colorectal gland segmentation network can effectively extract features with high representational ability and enhance edge features while retaining maximal detail, dramatically improving segmentation performance on malignant glands and yielding better results for multi-scale and closely packed glands.
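The object-level Dice reported here can be sketched for labeled instance masks. This is a simplified, one-directional version of the symmetric object-level metric used in gland-segmentation benchmarks, with hypothetical toy masks:

```python
import numpy as np

def dice(a, b):
    """Pixel-level Dice between two boolean masks."""
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def object_dice(gt, pred):
    """Simplified object-level Dice: each ground-truth gland is matched to
    the predicted label it overlaps most, and per-object Dice scores are
    averaged weighted by object area (one half of the symmetric metric)."""
    score, total = 0.0, 0
    for g in np.unique(gt):
        if g == 0:                       # label 0 = background
            continue
        g_mask = gt == g
        labels, counts = np.unique(pred[g_mask], return_counts=True)
        best = labels[np.argmax(counts)]
        p_mask = (pred == best) if best != 0 else np.zeros_like(g_mask)
        score += g_mask.sum() * dice(g_mask, p_mask)
        total += g_mask.sum()
    return score / total

gt = np.zeros((6, 6), int); gt[:3, :3] = 1; gt[3:, 3:] = 2   # two toy glands
pred = gt.copy(); pred[0, 0] = 0                             # one missed pixel
```

Unlike plain pixel Dice, this penalizes merging two touching glands into one prediction, which is exactly the failure mode the abstract highlights.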
Affiliation(s)
- Sheng Li
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310014, Zhejiang, China
- Shuling Shi
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310014, Zhejiang, China
- Zhenbang Fan
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310014, Zhejiang, China
- Xiongxiong He
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310014, Zhejiang, China
- Ni Zhang
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310014, Zhejiang, China.
3. Sun M, Wang J, Gong Q, Huang W. Enhancing gland segmentation in colon histology images using an instance-aware diffusion model. Comput Biol Med 2023;166:107527. PMID: 37778210. DOI: 10.1016/j.compbiomed.2023.107527.
Abstract
In pathological image analysis, determining gland morphology in colon histology images is essential for grading colon cancer. However, manual segmentation of glands is extremely challenging, and automatic methods for segmenting gland instances are needed. Recently, owing to its powerful noise-to-image denoising pipeline, the diffusion model has become a hot spot in computer vision research and has been explored for image segmentation. In this paper, we propose an instance segmentation method based on the diffusion model that performs automatic gland instance segmentation. First, we model the instance segmentation process for colon histology images as a denoising process based on a diffusion model. Second, to recover details lost during denoising, we use Instance-Aware Filters and a multi-scale Mask Branch to construct a global mask instead of predicting only local masks. Third, to improve the distinction between object and background, we apply Conditional Encoding to enhance the intermediate features with the original image encoding. To objectively validate the proposed method, we compared several state-of-the-art deep learning models on the 2015 MICCAI Gland Segmentation challenge (GlaS) dataset (165 images), the Colorectal Adenocarcinoma Glands (CRAG) dataset (213 images), and the RINGS dataset (1500 images). Our proposed method obtains significantly improved results on CRAG (Object F1 0.853 ± 0.054, Object Dice 0.906 ± 0.043), GlaS Test A (Object F1 0.941 ± 0.039, Object Dice 0.939 ± 0.060), GlaS Test B (Object F1 0.893 ± 0.073, Object Dice 0.889 ± 0.069), and RINGS (Precision 0.893 ± 0.096, Dice 0.904 ± 0.091). These experiments show that our method significantly improves segmentation accuracy and demonstrate its efficacy.
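The noise-to-image pipeline this method builds on can be sketched with a generic DDPM-style forward (noising) step, sampled in closed form. This is the standard diffusion formula, not the paper's instance-segmentation network; the schedule values and mask are illustrative.

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    """Standard DDPM forward process, sampled in closed form:
    x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps,  eps ~ N(0, I),
    where abar_t is the cumulative product of (1 - beta) up to step t.
    A segmentation model then learns to reverse this corruption."""
    abar = np.cumprod(1.0 - betas)[t]
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(abar) * x0 + np.sqrt(1.0 - abar) * eps

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)              # a common linear schedule
mask = (rng.random((16, 16)) > 0.5).astype(float)  # stand-in gland mask
x_noisy = forward_diffuse(mask, 500, betas, rng)   # heavily corrupted mask
```

At the final step almost all signal is gone, which is why the reverse (denoising) chain can start from pure noise and still produce a mask.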
Affiliation(s)
- Mengxue Sun
- School of Information Science and Engineering, Shandong Normal University, Jinan, 250358, China
- Jiale Wang
- School of Information Science and Engineering, Shandong Normal University, Jinan, 250358, China
- Qingtao Gong
- Ulsan Ship and Ocean College, Ludong University, Yantai, 264025, China
- Wenhui Huang
- School of Information Science and Engineering, Shandong Normal University, Jinan, 250358, China.
4. Das R, Bose S, Chowdhury RS, Maulik U. Dense Dilated Multi-Scale Supervised Attention-Guided Network for histopathology image segmentation. Comput Biol Med 2023;163:107182. PMID: 37379615. DOI: 10.1016/j.compbiomed.2023.107182.
Abstract
Over the last couple of decades, the introduction and proliferation of whole-slide scanners has led to increasing interest in digital pathology research. Although manual analysis of histopathological images is still the gold standard, the process is often tedious and time-consuming, and it suffers from intra- and interobserver variability. Separating structures or grading morphological changes can be difficult due to the architectural variability of these images. Deep learning techniques have shown great potential in histopathology image segmentation, drastically reducing the time needed for downstream analysis tasks and providing accurate diagnoses. However, few algorithms have reached clinical implementation. In this paper, we propose a new deep learning model, the Dense Dilated Multi-Scale Supervised Attention-Guided (D2MSA) Network, for histopathology image segmentation that makes use of deep supervision coupled with a hierarchical system of novel attention mechanisms. The proposed model surpasses state-of-the-art performance while using similar computational resources. Its performance has been evaluated on gland segmentation and nuclei instance segmentation, both clinically relevant tasks for assessing the state and progress of malignancy, using histopathology image datasets for three different types of cancer. We have also performed extensive ablation tests and hyperparameter tuning to ensure the validity and reproducibility of the model's performance. The proposed model is available at www.github.com/shirshabose/D2MSA-Net.
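Attention-guided skip connections of the general kind described here can be sketched as an additive attention gate in the style of Attention U-Net. The single-channel shapes and scalar weights below are hypothetical stand-ins for learned 1x1 convolutions, not the D2MSA parameters.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(skip, gate, w_s=0.7, w_g=0.3, w_a=2.0):
    """Additive attention gate: the encoder (skip) and decoder (gating)
    features are linearly combined, squashed, and mapped to per-pixel
    weights in (0, 1) that suppress irrelevant regions of the skip path."""
    att = sigmoid(w_a * np.tanh(w_s * skip + w_g * gate))
    return skip * att

rng = np.random.default_rng(1)
skip = rng.random((8, 8))   # high-resolution encoder feature (1 channel)
gate = rng.random((8, 8))   # coarser decoder signal, already upsampled
gated = attention_gate(skip, gate)
```

The gate can only attenuate, never amplify, the skip feature, which is what lets the decoder focus concatenation on the regions it deems relevant.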
Affiliation(s)
- Rangan Das
- Department of Computer Science Engineering, Jadavpur University, Kolkata, 700032, West Bengal, India.
- Shirsha Bose
- Department of Informatics, Technical University of Munich, Munich, Bavaria 85748, Germany.
- Ritesh Sur Chowdhury
- Department of Electronics and Telecommunication Engineering, Jadavpur University, Kolkata, 700032, West Bengal, India.
- Ujjwal Maulik
- Department of Computer Science Engineering, Jadavpur University, Kolkata, 700032, West Bengal, India.
5. Deng R, Liu Q, Cui C, Yao T, Long J, Asad Z, Womick RM, Zhu Z, Fogo AB, Zhao S, Yang H, Huo Y. Omni-Seg: A Scale-Aware Dynamic Network for Renal Pathological Image Segmentation. IEEE Trans Biomed Eng 2023;70:2636-2644. PMID: 37030838. PMCID: PMC10517077. DOI: 10.1109/tbme.2023.3260739.
Abstract
Comprehensive semantic segmentation of renal pathological images is challenging due to the heterogeneous scales of the objects. For example, on a whole slide image (WSI), the cross-sectional areas of glomeruli can be 64 times larger than those of the peritubular capillaries, making it impractical to segment both objects on the same patch at the same scale. To handle this scaling issue, prior studies have typically trained multiple segmentation networks to match the optimal pixel resolution of heterogeneous tissue types. This multi-network solution is resource-intensive and fails to model the spatial relationships between tissue types. In this article, we propose the Omni-Seg network, a scale-aware dynamic neural network that achieves multi-object (six tissue types) and multi-scale (5× to 40×) pathological image segmentation with a single neural network. The contribution of this article is three-fold: (1) a novel scale-aware controller is proposed to generalize the dynamic neural network from single-scale to multi-scale; (2) semi-supervised consistency regularization of pseudo-labels is introduced to model the inter-scale correlation of unannotated tissue types in a single end-to-end learning paradigm; and (3) superior scale-aware generalization is evidenced by directly applying a model trained on human kidney images to mouse kidney images, without retraining. By learning from 150,000 human pathological image patches from six tissue types at three different resolutions, our approach achieved superior segmentation performance according to human visual assessment and evaluation of image-omics (i.e., spatial transcriptomics).
6. Meng Z, Wang G, Su F, Liu Y, Wang Y, Yang J, Luo J, Cao F, Zhen P, Huang B, Yin Y, Zhao Z, Guo L. A Deep Learning-Based System Trained for Gastrointestinal Stromal Tumor Screening Can Identify Multiple Types of Soft Tissue Tumors. Am J Pathol 2023;193:899-912. PMID: 37068638. DOI: 10.1016/j.ajpath.2023.03.012.
Abstract
The accuracy and timeliness of the pathologic diagnosis of soft tissue tumors (STTs) critically affect treatment decisions and patient prognosis. It is therefore crucial to make a preliminary judgement on whether a tumor is benign or malignant from hematoxylin and eosin-stained images alone. A deep learning-based system, Soft Tissue Tumor Box (STT-BOX), is presented herein that uses only hematoxylin and eosin images to identify malignant STTs among benign STTs with histopathologic similarity. STT-BOX takes gastrointestinal stromal tumor as a baseline for malignant STT evaluation and distinguished gastrointestinal stromal tumor from leiomyoma and schwannoma with 100% area under the curve in patients from three hospitals, exceeding the accuracy of interpretation by experienced pathologists. In particular, the system performed well on six common types of malignant STTs from The Cancer Genome Atlas data set, accurately highlighting the malignant mass lesion, and it distinguished ovarian malignant sex-cord stromal tumors without any fine-tuning. This study included mesenchymal tumors originating from the digestive system, bone and soft tissues, and the reproductive system, where the high accuracy of migration verification may reveal the morphologic similarity of the nine types of malignant tumors. Further evaluation in a pan-STT setting is a promising prospect, potentially obviating the overuse of immunohistochemistry and molecular tests and providing a practical basis for timely clinical treatment selection.
Affiliation(s)
- Zhu Meng
- Beijing University of Posts and Telecommunications and Department of Pathology, Peking University Third Hospital, Beijing Key Laboratory of Tumor Systems Biology, School of Basic Medical Sciences, Peking University Health Science Center, Beijing, China
- Guangxi Wang
- Beijing University of Posts and Telecommunications and Department of Pathology, Peking University Third Hospital, Beijing Key Laboratory of Tumor Systems Biology, School of Basic Medical Sciences, Peking University Health Science Center, Beijing, China
- Fei Su
- Beijing University of Posts and Telecommunications and Department of Pathology, Peking University Third Hospital, Beijing Key Laboratory of Tumor Systems Biology, School of Basic Medical Sciences, Peking University Health Science Center, Beijing, China; Beijing Key Laboratory of Network System and Network Culture, Beijing, China
- Yan Liu
- Beijing University of Posts and Telecommunications and Department of Pathology, Peking University Third Hospital, Beijing Key Laboratory of Tumor Systems Biology, School of Basic Medical Sciences, Peking University Health Science Center, Beijing, China
- Yuxiang Wang
- Beijing University of Posts and Telecommunications and Department of Pathology, Peking University Third Hospital, Beijing Key Laboratory of Tumor Systems Biology, School of Basic Medical Sciences, Peking University Health Science Center, Beijing, China
- Jing Yang
- Beijing University of Posts and Telecommunications and Department of Pathology, Peking University Third Hospital, Beijing Key Laboratory of Tumor Systems Biology, School of Basic Medical Sciences, Peking University Health Science Center, Beijing, China
- Jianyuan Luo
- Department of Medical Genetics, School of Basic Medical Sciences, Peking University Health Science Center, Beijing, China
- Fang Cao
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Department of Pathology, Peking University Cancer Hospital and Institute, Beijing, China
- Panpan Zhen
- Department of Pathology, Beijing Luhe Hospital, Capital Medical University, Beijing, China
- Binhua Huang
- Department of Pathology, Dongguan Houjie Hospital, Dongguan, China
- Yuxin Yin
- Beijing University of Posts and Telecommunications and Department of Pathology, Peking University Third Hospital, Beijing Key Laboratory of Tumor Systems Biology, School of Basic Medical Sciences, Peking University Health Science Center, Beijing, China
- Zhicheng Zhao
- Beijing University of Posts and Telecommunications and Department of Pathology, Peking University Third Hospital, Beijing Key Laboratory of Tumor Systems Biology, School of Basic Medical Sciences, Peking University Health Science Center, Beijing, China; Beijing Key Laboratory of Network System and Network Culture, Beijing, China.
- Limei Guo
- Beijing University of Posts and Telecommunications and Department of Pathology, Peking University Third Hospital, Beijing Key Laboratory of Tumor Systems Biology, School of Basic Medical Sciences, Peking University Health Science Center, Beijing, China.
7. Dabass M, Dabass J. An Atrous Convolved Hybrid Seg-Net Model with residual and attention mechanism for gland detection and segmentation in histopathological images. Comput Biol Med 2023;155:106690. PMID: 36827788. DOI: 10.1016/j.compbiomed.2023.106690.
Abstract
PURPOSE A clinically compatible computerized segmentation model is presented here that aims to supply clinically informative gland details by capturing every small and intricate variation in medical images, integrating second opinions, and reducing human error. APPROACH The model's enhanced learning capability extracts denser multi-scale gland-specific features, recovers the semantic gap during concatenation, and effectively handles resolution-degradation and vanishing-gradient problems. It has three proposed modules, namely an Atrous Convolved Residual Learning Module in the encoder and decoder, a Residual Attention Module in the skip-connection paths, and an Atrous Convolved Transitional Module as the transitional and output layer. In addition, pre-processing techniques such as patch sampling, stain normalization, and augmentation are employed to develop its generalization capability. To verify its robustness and invigorate network invariance against digital variability, extensive experiments are carried out on three public datasets, i.e., GlaS (Gland Segmentation Challenge), CRAG (Colorectal Adenocarcinoma Gland), and LC-25000 (Lung Colon-25000), and a private HosC (Hospital Colon) dataset. RESULTS The presented model accomplished competitive gland detection outcomes with F1-scores (GlaS (Test A (0.957), Test B (0.926)), CRAG (0.935), LC-25000 (0.922), HosC (0.963)); and gland segmentation results with Object-Dice Index (GlaS (Test A (0.961), Test B (0.933)), CRAG (0.961), LC-25000 (0.940), HosC (0.929)) and Object-Hausdorff Distance (GlaS (Test A (21.77), Test B (69.74)), CRAG (87.63), LC-25000 (95.85), HosC (83.29)). In addition, validation scores (GlaS (Test A (0.945), Test B (0.937)), CRAG (0.934), LC-25000 (0.911), HosC (0.928)) supplied by proficient pathologists are integrated for the final segmentation results to corroborate their applicability and appropriateness for assistance in clinical-level applications.
CONCLUSION The proposed system will assist pathologists in devising precise diagnoses by offering a referential perspective during morphology assessment of colon histopathology images.
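The context gain from atrous (dilated) convolution, the core ingredient of the modules named above, can be quantified with the usual receptive-field arithmetic. The layer stack below is an illustrative example, not the paper's exact configuration.

```python
def dilated_span(k, d):
    """Effective span of a k x k kernel with dilation rate d: the taps
    cover k + (k - 1) * (d - 1) pixels using no extra weights."""
    return k + (k - 1) * (d - 1)

def receptive_field(layers):
    """Receptive field of stacked stride-1 convolutions: start at 1 pixel
    and grow by (span - 1) for each layer's effective span."""
    rf = 1
    for k, d in layers:
        rf += dilated_span(k, d) - 1
    return rf

# Three 3x3 layers with dilation rates 1, 2, 4 reach a 15-pixel receptive
# field, versus 7 pixels for the same stack without dilation.
atrous_rf = receptive_field([(3, 1), (3, 2), (3, 4)])
plain_rf = receptive_field([(3, 1), (3, 1), (3, 1)])
```

This is why atrous stacks capture gland-scale context at the same parameter cost as plain convolutions.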
Affiliation(s)
- Manju Dabass
- EECE Department, The NorthCap University, Gurugram, India.
- Jyoti Dabass
- DBT Centre of Excellence Biopharmaceutical Technology, IIT, Delhi, India
8. A digital pathology workflow for the segmentation and classification of gastric glands: Study of gastric atrophy and intestinal metaplasia cases. PLoS One 2022;17:e0275232. PMID: 36584163. PMCID: PMC9803139. DOI: 10.1371/journal.pone.0275232.
Abstract
Gastric cancer is one of the most frequent causes of cancer-related deaths worldwide. Gastric atrophy (GA) and gastric intestinal metaplasia (IM) of the mucosa of the stomach have been found to increase the risk of gastric cancer and are considered precancerous lesions. Therefore, the early detection of GA and IM may have a valuable role in histopathological risk assessment. However, GA and IM are difficult to confirm endoscopically and, following the Sydney protocol, their diagnosis depends on the analysis of glandular morphology and on the identification of at least one well-defined goblet cell in a set of hematoxylin and eosin (H&E)-stained biopsy samples. To this end, the precise segmentation and classification of glands from histological images plays an important role in the diagnostic confirmation of GA and IM. In this paper, we propose a digital pathology end-to-end workflow for gastric gland segmentation and classification for the analysis of gastric tissues. The proposed GAGL-VTNet initially extracts both global and local features, combining multi-scale feature maps for the segmentation of glands, and subsequently adopts a vision transformer that exploits the visual dependences of the segmented glands for their classification. For the analysis of gastric tissues, segmentation of the mucosa is performed through an unsupervised model combining energy minimization and a U-Net model. Features of the segmented glands and mucosa are then extracted and analyzed. To evaluate the efficiency of the proposed methodology we created the GAGL dataset, consisting of 85 WSIs collected from 20 patients. The results demonstrate significant differences in the extracted features among normal, GA, and IM cases. The proposed approach for gland and mucosa segmentation achieves object Dice scores of 0.908 and 0.967, respectively, while for gland classification it achieves an F1 score of 0.94, showing great potential for the automated quantification and analysis of gastric biopsies.
9. Nasir ES, Parvaiz A, Fraz MM. Nuclei and glands instance segmentation in histology images: a narrative review. Artif Intell Rev 2022. DOI: 10.1007/s10462-022-10372-5.
10. Tharwat M, Sakr NA, El-Sappagh S, Soliman H, Kwak KS, Elmogy M. Colon Cancer Diagnosis Based on Machine Learning and Deep Learning: Modalities and Analysis Techniques. Sensors (Basel) 2022;22:9250. PMID: 36501951. PMCID: PMC9739266. DOI: 10.3390/s22239250.
Abstract
The treatment and diagnosis of colon cancer are considered social and economic challenges due to its high mortality rates. Every year, almost half a million people around the world contract cancers, including colon cancer. Determining the grade of colon cancer mainly depends on analyzing gland structure in the tissue region, which has led to various screening tests that can be used to investigate polyp images and colorectal cancer. This article presents a comprehensive survey on the diagnosis of colon cancer. It covers many aspects of colon cancer, such as its symptoms and grades, the available imaging modalities (particularly the histopathology images used for analysis), and common diagnosis systems. Furthermore, the most widely used datasets and performance evaluation metrics are discussed. We provide a comprehensive review of the current studies on colon cancer, classified into deep-learning (DL) and machine-learning (ML) techniques, and we identify their main strengths and limitations. These techniques provide extensive support for identifying the early stages of cancer, which leads to early treatment and a lower mortality rate than when treatment begins after symptoms develop. In addition, these methods can help prevent colorectal cancer from progressing through the removal of pre-malignant polyps, which can be achieved using screening tests that make the disease easier to diagnose. Finally, the existing challenges and future research directions that open the way for future work in this field are presented.
Affiliation(s)
- Mai Tharwat
- Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- Nehal A. Sakr
- Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- Shaker El-Sappagh
- Information Systems Department, Faculty of Computers and Artificial Intelligence, Benha University, Benha 13512, Egypt
- Faculty of Computer Science and Engineering, Galala University, Suez 435611, Egypt
- Hassan Soliman
- Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- Kyung-Sup Kwak
- Department of Information and Communication Engineering, Inha University, Incheon 22212, Republic of Korea
- Mohammed Elmogy
- Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
11. Dabass M, Vashisth S, Vig R. MTU: A multi-tasking U-net with hybrid convolutional learning and attention modules for cancer classification and gland segmentation in colon histopathological images. Comput Biol Med 2022;150:106095. PMID: 36179516. DOI: 10.1016/j.compbiomed.2022.106095.
Abstract
A clinically comparable multi-tasking computerized deep U-Net-based model is demonstrated in this paper. It intends to offer clinical gland morphometric information and cancer-grade classification as referential opinions for pathologists in order to reduce human error. It embraces an enhanced feature learning capability that aids in the extraction of potent multi-scale features, efficacious semantic-gap recovery during feature concatenation, and successful interception of resolution-degradation and vanishing-gradient problems while performing moderate computation. It is built by integrating three novel structural components into the traditional U-Net architecture, namely Hybrid Convolutional Learning Units in the encoder and decoder, Attention Learning Units in the skip connections, and a Multi-Scalar Dilated Transitional Unit as the transitional layer. These units combine multi-level convolutional learning through conventional, atrous, residual, depth-wise, and point-wise convolutions, further incorporating target-specific attention learning and an enlarged effective receptive field. Pre-processing techniques of patch sampling, augmentation (color and morphological), and stain normalization are also employed to improve its generalizability. To build network invariance towards digital variability, exhaustive experiments are conducted using three public datasets (Colorectal Adenocarcinoma Gland (CRAG), Gland Segmentation (GlaS) challenge, and Lung Colon-25000 (LC-25K)), and the model's robustness is then verified using an in-house private Hospital Colon (HosC) dataset. For cancer classification, the proposed model achieved Accuracy (CRAG (95%), GlaS (97.5%), LC-25K (99.97%), HosC (99.45%)), Precision (CRAG (0.9678), GlaS (0.9768), LC-25K (1), HosC (1)), F1-score (CRAG (0.968), GlaS (0.977), LC-25K (0.9997), HosC (0.9965)), and Recall (CRAG (0.9677), GlaS (0.9767), LC-25K (0.9994), HosC (0.9931)).
For gland detection and segmentation, the proposed model achieved competitive results: F1-score (CRAG (0.924), GlaS (Test A (0.949), Test B (0.918)), LC-25K (0.916), HosC (0.959)); Object-Dice Index (CRAG (0.959), GlaS (Test A (0.956), Test B (0.909)), LC-25K (0.929), HosC (0.922)); and Object-Hausdorff Distance (CRAG (90.47), GlaS (Test A (23.17), Test B (71.53)), LC-25K (96.28), HosC (85.45)). In addition, activation mappings testing the interpretability of the classification decision-making process are reported using Local Interpretable Model-Agnostic Explanations, Occlusion Sensitivity, and Gradient-Weighted Class Activation Mappings. This provides further evidence of the model's ability to learn patterns considered relevant by pathologists without any prerequisite annotations. These activation-mapping visualizations were evaluated by proficient pathologists, who gave them a class-path validation score of (CRAG (9.31), GlaS (9.25), LC-25K (9.05), HosC (9.85)). Furthermore, a seg-path validation score of (GlaS (Test A (9.40), Test B (9.25)), CRAG (9.27), LC-25K (9.01), HosC (9.19)) given by multiple pathologists is included for the final segmented outcomes to substantiate their clinical relevance and suitability for facilitation at the clinical level. The proposed model will aid pathologists in formulating an accurate diagnosis by providing a referential opinion during the morphology assessment of histopathology images, reducing unintentional human error in cancer diagnosis and consequently enhancing patient survival.
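The F1 scores quoted throughout this abstract are the harmonic mean of precision and recall; a quick sketch, using the reported CRAG classification precision and recall as a consistency check (the count-based example is hypothetical):

```python
def f1(precision, recall):
    """F1 score: harmonic mean of precision and recall."""
    return 2.0 * precision * recall / (precision + recall)

def f1_from_counts(tp, fp, fn):
    """F1 from raw true-positive, false-positive, and false-negative counts."""
    return f1(tp / (tp + fp), tp / (tp + fn))

# The reported CRAG precision (0.9678) and recall (0.9677) reproduce the
# reported F1 of 0.968 to rounding.
crag_f1 = f1(0.9678, 0.9677)
```

Because the harmonic mean is dominated by the smaller operand, F1 punishes a model that trades recall away for precision (or vice versa).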
Affiliation(s)
- Manju Dabass
- EECE Department, The NorthCap University, Gurugram, 122017, India.
- Sharda Vashisth
- EECE Department, The NorthCap University, Gurugram, 122017, India
- Rekha Vig
- EECE Department, The NorthCap University, Gurugram, 122017, India
12. Deep Neural Network Models for Colon Cancer Screening. Cancers (Basel) 2022;14:3707. PMID: 35954370. PMCID: PMC9367621. DOI: 10.3390/cancers14153707.
Abstract
Simple Summary Deep learning models have been shown to achieve high performance in diagnosing colon cancer compared to conventional image processing and hand-crafted machine learning methods. Hence, several studies have focused on developing hybrid learning, end-to-end, and transfer learning techniques to reduce manual interaction and the labelling of regions of interest. However, these weak learning techniques do not always provide a clear diagnosis. Therefore, it is necessary to develop explainable learning methods that can highlight the decisive factors and form the basis of clinical decisions; to date, however, little research has employed such transparent approaches. This study discusses the aforementioned models for colon cancer diagnosis. Abstract Early detection of colorectal cancer can significantly facilitate clinicians’ decision-making and reduce their workload. This can be achieved using automatic systems with endoscopic and histological images. Recently, the success of deep learning has motivated the development of image- and video-based polyp identification and segmentation. Currently, most diagnostic colonoscopy rooms utilize artificial intelligence methods that are considered to perform well in predicting invasive cancer. Convolutional neural network-based architectures, together with image patches and preprocessing, are widely used. Furthermore, transfer learning and end-to-end learning techniques have been adopted for detection and localization tasks, which improve accuracy and reduce user dependence when datasets are limited. However, explainable deep networks that provide transparency, interpretability, reliability, and fairness in clinical diagnostics are preferred. In this review, we summarize the latest advances in such models, with or without transparency, for the prediction of colorectal cancer, and address the knowledge gap in upcoming technology.
|
13
|
A Novel Method Based on GAN Using a Segmentation Module for Oligodendroglioma Pathological Image Generation. SENSORS 2022; 22:s22103960. [PMID: 35632368 PMCID: PMC9144585 DOI: 10.3390/s22103960] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/24/2022] [Revised: 04/22/2022] [Accepted: 05/20/2022] [Indexed: 02/05/2023]
Abstract
Digital pathology analysis using deep learning has been the subject of several studies. As with other medical data, pathological data are not easily obtained. Because deep learning-based image analysis requires large amounts of data, augmentation techniques are used to increase the size of pathological datasets. This study proposes a novel method for synthesizing brain tumor pathology data using a generative model. For image synthesis, we use embedding features extracted from a segmentation module within a general generative model. We also introduce a simple solution for training a segmentation model when the mask labels of the training dataset are not supplied. In our experiments, the proposed method did not yield large gains on quantitative metrics, but it improved the confusion rate reported by more than 70 human subjects and the quality of the visual output.
|
14
|
Wen Y, Chen L, Deng Y, Zhang Z, Zhou C. Pixel-wise triplet learning for enhancing boundary discrimination in medical image segmentation. Knowl Based Syst 2022. [DOI: 10.1016/j.knosys.2022.108424] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
|
15
|
Qiu H, Ding S, Liu J, Wang L, Wang X. Applications of Artificial Intelligence in Screening, Diagnosis, Treatment, and Prognosis of Colorectal Cancer. Curr Oncol 2022; 29:1773-1795. [PMID: 35323346 PMCID: PMC8947571 DOI: 10.3390/curroncol29030146] [Citation(s) in RCA: 17] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2022] [Revised: 02/28/2022] [Accepted: 03/03/2022] [Indexed: 12/29/2022] Open
Abstract
Colorectal cancer (CRC) is one of the most common cancers worldwide. Accurate early detection and diagnosis, comprehensive assessment of treatment response, and precise prediction of prognosis are essential to improve patients’ survival rates. In recent years, owing to the explosion of clinical and omics data and groundbreaking research in machine learning, artificial intelligence (AI) has shown great application potential in the clinical field of CRC, providing new auxiliary approaches for clinicians to identify high-risk patients, select precise and personalized treatment plans, and predict prognoses. This review comprehensively analyzes and summarizes the research progress and clinical application value of AI technologies in CRC screening, diagnosis, treatment, and prognosis, demonstrating the current status of AI in the main clinical stages. The limitations, challenges, and future perspectives of clinical AI implementation are also discussed.
Affiliation(s)
- Hang Qiu
- Big Data Research Center, University of Electronic Science and Technology of China, Chengdu 611731, China;
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
- Correspondence: (H.Q.); (X.W.)
- Shuhan Ding
- School of Electrical and Computer Engineering, Cornell University, Ithaca, NY 14853, USA;
- Jianbo Liu
- West China School of Medicine, Sichuan University, Chengdu 610041, China;
- Department of Gastrointestinal Surgery, West China Hospital, Sichuan University, Chengdu 610041, China
- Liya Wang
- Big Data Research Center, University of Electronic Science and Technology of China, Chengdu 611731, China;
- Xiaodong Wang
- West China School of Medicine, Sichuan University, Chengdu 610041, China;
- Department of Gastrointestinal Surgery, West China Hospital, Sichuan University, Chengdu 610041, China
- Correspondence: (H.Q.); (X.W.)
|
16
|
Wang H, Xian M, Vakanski A. TA-Net: Topology-Aware Network for Gland Segmentation. IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION 2022; 2022:3241-3249. [PMID: 35509894 PMCID: PMC9063467 DOI: 10.1109/wacv51458.2022.00330] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Gland segmentation is a critical step to quantitatively assess the morphology of glands in histopathology image analysis. However, it is challenging to separate densely clustered glands accurately. Existing deep learning-based approaches attempted to use contour-based techniques to alleviate this issue but only achieved limited success. To address this challenge, we propose a novel topology-aware network (TA-Net) to accurately separate densely clustered and severely deformed glands. The proposed TA-Net has a multitask learning architecture and enhances the generalization of gland segmentation by learning shared representation from two tasks: instance segmentation and gland topology estimation. The proposed topology loss computes gland topology using gland skeletons and markers. It drives the network to generate segmentation results that comply with the true gland topology. We validate the proposed approach on the GlaS and CRAG datasets using three quantitative metrics, F1-score, object-level Dice coefficient, and object-level Hausdorff distance. Extensive experiments demonstrate that TA-Net achieves state-of-the-art performance on the two datasets. TA-Net outperforms other approaches in the presence of densely clustered glands.
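TA-Net's topology loss itself operates on gland skeletons and markers inside the network; as a rough intuition for why topology matters in densely clustered glands, counting connected components already distinguishes a prediction that merges two adjacent glands from one that keeps them separate. A toy proxy in pure Python (not the paper's loss; binary masks as nested lists):

```python
from collections import deque

def connected_components(mask):
    """Count 4-connected foreground components in a binary grid — a crude
    stand-in for gland markers: the component count captures whether
    clustered glands were correctly split."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not seen[i][j]:
                count += 1
                q = deque([(i, j)])
                seen[i][j] = True
                while q:  # BFS flood fill over the component
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
    return count

def topology_penalty(pred, gt):
    # Penalize merged or split glands: difference in object counts.
    return abs(connected_components(pred) - connected_components(gt))
```

A prediction that fuses two ground-truth glands into one blob incurs a nonzero penalty even when its pixel-wise overlap is high, which is exactly the failure mode contour-based methods struggle with.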
|
17
|
Ding H, Cen Q, Si X, Pan Z, Chen X. Automatic glottis segmentation for laryngeal endoscopic images based on U-Net. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103116] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
|
18
|
SAFRON: Stitching Across the Frontier Network for Generating Colorectal Cancer Histology Images. Med Image Anal 2021; 77:102337. [PMID: 35016078 DOI: 10.1016/j.media.2021.102337] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2021] [Revised: 10/13/2021] [Accepted: 12/14/2021] [Indexed: 12/12/2022]
Abstract
Automated synthesis of histology images has several potential applications including the development of data-efficient deep learning algorithms. In the field of computational pathology, where histology images are large in size and visual context is crucial, synthesis of large high-resolution images via generative modeling is an important but challenging task due to memory and computational constraints. To address this challenge, we propose a novel framework called SAFRON (Stitching Across the FROntier Network) to construct realistic, large high-resolution tissue images conditioned on input tissue component masks. The main novelty in the framework is integration of stitching in its loss function which enables generation of images of arbitrarily large sizes after training on relatively small image patches while preserving morphological features with minimal boundary artifacts. We have used the proposed framework for generating, to the best of our knowledge, the largest-sized synthetic histology images to date (up to 11K×8K pixels). Compared to existing approaches, our framework is efficient in terms of the memory required for training and computations needed for synthesizing large high-resolution images. The quality of generated images was assessed quantitatively using Frechet Inception Distance as well as by 7 trained pathologists, who assigned a realism score to a set of images generated by SAFRON. The average realism score across all pathologists for synthetic images was as high as that of real images. We also show that training with additional synthetic data generated by SAFRON can significantly boost prediction performance of gland segmentation and cancer detection algorithms in colorectal cancer histology images.
|
19
|
Zhang J, Zhang Y, Qiu H, Xie W, Yao Z, Yuan H, Jia Q, Wang T, Shi Y, Huang M, Zhuang J, Xu X. Pyramid-Net: Intra-layer Pyramid-Scale Feature Aggregation Network for Retinal Vessel Segmentation. Front Med (Lausanne) 2021; 8:761050. [PMID: 34950679 PMCID: PMC8688400 DOI: 10.3389/fmed.2021.761050] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2021] [Accepted: 11/05/2021] [Indexed: 11/18/2022] Open
Abstract
Retinal vessel segmentation plays an important role in the diagnosis of eye-related diseases and biomarker discovery. Existing works aggregate multi-scale features in an inter-layer manner (inter-layer feature aggregation). However, such an approach fuses features only at either a lower or a higher scale, which may limit segmentation performance, especially on thin vessels. This motivates us to fuse multi-scale features within each layer (intra-layer feature aggregation) to mitigate the problem. Therefore, in this paper, we propose Pyramid-Net for accurate retinal vessel segmentation, which features intra-layer pyramid-scale aggregation blocks (IPABs). At each layer, an IPAB generates two associated branches, one at a higher scale and one at a lower scale, which operate together with the main branch at the current scale in a pyramid-scale manner. Three further enhancements, pyramid input enhancement, deep pyramid supervision, and pyramid skip connections, are proposed to boost performance. We have evaluated Pyramid-Net on three public retinal fundus photography datasets (DRIVE, STARE, and CHASE-DB1). The experimental results show that Pyramid-Net effectively improves segmentation performance, especially on thin vessels, and outperforms current state-of-the-art methods on all three datasets. In addition, our method is more efficient than existing methods, with a large reduction in computational cost. We have released the source code at https://github.com/JerRuy/Pyramid-Net.
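The intra-layer idea can be illustrated on a 1-D feature vector: the same layer spawns a strided (lower-scale) branch and a nearest-neighbour-upsampled (higher-scale) branch, resamples both back to the input resolution, and fuses them with the main branch. A toy pure-Python sketch (illustrative only; Pyramid-Net's IPABs also pass each branch through convolutions, omitted here):

```python
def ipab_fuse(x):
    """Toy intra-layer pyramid-scale aggregation on a 1-D feature list:
    build a half-resolution branch (stride-2) and a double-resolution
    branch (nearest-neighbour repeat) from the same input, resample both
    back to the input scale, and average them with the main branch."""
    lower = x[::2]                                   # lower-scale branch
    higher = [v for v in x for _ in range(2)]        # higher-scale branch
    lower_up = [v for v in lower for _ in range(2)][:len(x)]   # back to input scale
    higher_down = higher[::2][:len(x)]                         # back to input scale
    return [(a + b + c) / 3.0 for a, b, c in zip(x, lower_up, higher_down)]
```

Because the lower-scale branch blurs detail while the higher-scale branch preserves it, the fused output mixes context and resolution within a single layer, which is the property the paper credits for better thin-vessel segmentation.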
Affiliation(s)
- Jiawei Zhang
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital, Guangdong Cardiovascular Institute, Guangdong Academy of Medical Sciences, Guangzhou, China
- Shanghai key Laboratory of Data Science, School of Computer Science, Fudan University, Shanghai, China
- Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN, United States
- Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Wenzhou, China
- Yanchun Zhang
- Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Wenzhou, China
- Cyberspace Institute of Advanced Technology, Guangzhou University, Guangzhou, China
- College of Engineering and Science, Victoria University, Melbourne, VIC, Australia
- Hailong Qiu
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital, Guangdong Cardiovascular Institute, Guangdong Academy of Medical Sciences, Guangzhou, China
- Wen Xie
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital, Guangdong Cardiovascular Institute, Guangdong Academy of Medical Sciences, Guangzhou, China
- Zeyang Yao
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital, Guangdong Cardiovascular Institute, Guangdong Academy of Medical Sciences, Guangzhou, China
- Haiyun Yuan
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital, Guangdong Cardiovascular Institute, Guangdong Academy of Medical Sciences, Guangzhou, China
- Qianjun Jia
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital, Guangdong Cardiovascular Institute, Guangdong Academy of Medical Sciences, Guangzhou, China
- Tianchen Wang
- Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN, United States
- Yiyu Shi
- Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN, United States
- Meiping Huang
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital, Guangdong Cardiovascular Institute, Guangdong Academy of Medical Sciences, Guangzhou, China
- Jian Zhuang
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital, Guangdong Cardiovascular Institute, Guangdong Academy of Medical Sciences, Guangzhou, China
- Xiaowei Xu
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital, Guangdong Cardiovascular Institute, Guangdong Academy of Medical Sciences, Guangzhou, China
|
20
|
Yang SD, Zhao YQ, Zhang F, Liao M, Yang Z, Wang YJ, Yu LL. An efficient two-step multi-organ registration on abdominal CT via deep-learning based segmentation. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.103027] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
21
|
Deep connected attention (DCA) ResNet for robust voice pathology detection and classification. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102973] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022]
|
22
|
Aatresh AA, Yatgiri RP, Chanchal AK, Kumar A, Ravi A, Das D, Bs R, Lal S, Kini J. Efficient deep learning architecture with dimension-wise pyramid pooling for nuclei segmentation of histopathology images. Comput Med Imaging Graph 2021; 93:101975. [PMID: 34461375 DOI: 10.1016/j.compmedimag.2021.101975] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/28/2020] [Revised: 08/05/2021] [Accepted: 08/19/2021] [Indexed: 11/30/2022]
Abstract
Image segmentation remains one of the most vital tasks in computer vision, and even more so in medical image processing. Segmentation quality is typically the main metric considered, while memory and computational efficiency are overlooked, limiting the practical use of power-hungry models. In this paper, we propose a novel framework (Kidney-SegNet) that combines the effectiveness of an attention-based encoder-decoder architecture and atrous spatial pyramid pooling with highly efficient dimension-wise convolutions. The segmentation results of the proposed Kidney-SegNet architecture outperform existing state-of-the-art deep learning methods on two publicly available kidney and TNBC breast H&E-stained histopathology image datasets. Further, our simulation experiments reveal that the computational and memory requirements of the proposed architecture are far lower than those of existing state-of-the-art deep learning methods for nuclei segmentation of H&E-stained histopathology images. The source code of our implementation will be available at https://github.com/Aaatresh/Kidney-SegNet.
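Atrous spatial pyramid pooling builds on dilated (atrous) convolution: the kernel taps are spaced `rate` samples apart, so the receptive field grows without adding parameters, and several rates run in parallel before their outputs are combined. A minimal 1-D sketch of a single dilated convolution (illustrative only; the paper uses 2-D dimension-wise variants):

```python
def atrous_conv1d(signal, kernel, rate):
    """1-D dilated (atrous) convolution: kernel taps are spaced `rate`
    samples apart, enlarging the receptive field at no parameter cost.
    Valid padding: output shrinks by the dilated kernel span."""
    span = (len(kernel) - 1) * rate  # distance covered by the dilated kernel
    out = []
    for i in range(len(signal) - span):
        out.append(sum(kernel[k] * signal[i + k * rate]
                       for k in range(len(kernel))))
    return out
```

With `rate=1` this reduces to an ordinary convolution; a pyramid-pooling head would run the same kernel at, say, rates 1, 2, and 4 and concatenate the results.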
Affiliation(s)
- Anirudh Ashok Aatresh
- Department of Electronics and Communication Engineering, National Institute of Technology Karnataka, Surathkal, India.
- Rohit Prashant Yatgiri
- Department of Electronics and Communication Engineering, National Institute of Technology Karnataka, Surathkal, India.
- Amit Kumar Chanchal
- Department of Electronics and Communication Engineering, National Institute of Technology Karnataka, Surathkal, India.
- Aman Kumar
- Department of Electronics and Communication Engineering, National Institute of Technology Karnataka, Surathkal, India.
- Akansh Ravi
- Department of Electronics and Communication Engineering, National Institute of Technology Karnataka, Surathkal, India.
- Devikalyan Das
- Department of Electronics and Communication Engineering, National Institute of Technology Karnataka, Surathkal, India.
- Raghavendra Bs
- Department of Electronics and Communication Engineering, National Institute of Technology Karnataka, Surathkal, India.
- Shyam Lal
- Department of Electronics and Communication Engineering, National Institute of Technology Karnataka, Surathkal, India.
- Jyoti Kini
- Department of Pathology, Kasturba Medical College, Mangalore, Manipal Academy of Higher Education, Manipal, India.
|
23
|
Salvi M, Acharya UR, Molinari F, Meiburger KM. The impact of pre- and post-image processing techniques on deep learning frameworks: A comprehensive review for digital pathology image analysis. Comput Biol Med 2021; 128:104129. [DOI: 10.1016/j.compbiomed.2020.104129] [Citation(s) in RCA: 82] [Impact Index Per Article: 27.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2020] [Accepted: 11/13/2020] [Indexed: 12/12/2022]
|
24
|
Dabass M, Vashisth S, Vig R. Attention-Guided deep atrous-residual U-Net architecture for automated gland segmentation in colon histopathology images. INFORMATICS IN MEDICINE UNLOCKED 2021. [DOI: 10.1016/j.imu.2021.100784] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/31/2022] Open
|
25
|
Liu J, Chao F, Lin CM, Zhou C, Shang C. DK-CNNs: Dynamic kernel convolutional neural networks. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2020.09.005] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
|
26
|
Guo Z, Zheng H, Xu X, Ju J, Zheng Z, You C, Gu Y. Quality grading of jujubes using composite convolutional neural networks in combination with RGB color space segmentation and deep convolutional generative adversarial networks. J FOOD PROCESS ENG 2020. [DOI: 10.1111/jfpe.13620] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/03/2023]
Affiliation(s)
- Zhongyuan Guo
- School of Electronic Information, Wuhan University, Wuhan, China
- Hong Zheng
- School of Electronic Information, Wuhan University, Wuhan, China
- Xiaohang Xu
- School of Electronic Information, Wuhan University, Wuhan, China
- Jianping Ju
- School of Electronic Information, Wuhan University, Wuhan, China
- Zhaohui Zheng
- School of Electronic Information, Wuhan University, Wuhan, China
- School of Mathematics and Physics, Wuhan Institute of Technology, Wuhan, China
- Changhui You
- School of Electronic Information, Wuhan University, Wuhan, China
- Yu Gu
- School of Electronic Information, Wuhan University, Wuhan, China
|
27
|
Objective Diagnosis for Histopathological Images Based on Machine Learning Techniques: Classical Approaches and New Trends. MATHEMATICS 2020. [DOI: 10.3390/math8111863] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/02/2023]
Abstract
Histopathology refers to the examination by a pathologist of biopsy samples. Histopathology images are captured by a microscope to locate, examine, and classify many diseases, such as different cancer types. They provide a detailed view of different types of diseases and their tissue status. These images are an essential resource with which to define biological compositions or analyze cell and tissue structures. This imaging modality is very important for diagnostic applications. The analysis of histopathology images is a prolific and relevant research area supporting disease diagnosis. In this paper, the challenges of histopathology image analysis are evaluated. An extensive review of conventional and deep learning techniques which have been applied in histological image analyses is presented. This review summarizes many current datasets and highlights important challenges and constraints with recent deep learning techniques, alongside possible future research avenues. Despite the progress made in this research area so far, it is still a significant area of open research because of the variety of imaging techniques and disease-specific characteristics.
|
28
|
Thakur N, Yoon H, Chong Y. Current Trends of Artificial Intelligence for Colorectal Cancer Pathology Image Analysis: A Systematic Review. Cancers (Basel) 2020; 12:E1884. [PMID: 32668721 PMCID: PMC7408874 DOI: 10.3390/cancers12071884] [Citation(s) in RCA: 41] [Impact Index Per Article: 10.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2020] [Revised: 07/06/2020] [Accepted: 07/09/2020] [Indexed: 02/06/2023] Open
Abstract
Colorectal cancer (CRC) is one of the most common cancers requiring early pathologic diagnosis using colonoscopy biopsy samples. Recently, artificial intelligence (AI) has made significant progress and shown promising results in the field of medicine despite several limitations. We performed a systematic review of AI use in CRC pathology image analysis to visualize the state-of-the-art. Studies published between January 2000 and January 2020 were searched in major online databases including MEDLINE (PubMed, Cochrane Library, and EMBASE). Query terms included "colorectal neoplasm," "histology," and "artificial intelligence." Of 9000 identified studies, only 30 studies consisting of 40 models were selected for review. The algorithm features of the models were gland segmentation (n = 25, 62%), tumor classification (n = 8, 20%), tumor microenvironment characterization (n = 4, 10%), and prognosis prediction (n = 3, 8%). Only 20 gland segmentation models met the criteria for quantitative analysis, and the model proposed by Ding et al. (2019) performed the best. Studies with other features were in the elementary stage, although most showed impressive results. Overall, the state-of-the-art is promising for CRC pathological analysis. However, datasets in most studies had relatively limited scale and quality for clinical application of this technique. Future studies with larger datasets and high-quality annotations are required for routine practice-level validation.
Affiliation(s)
- Nishant Thakur
- Department of Hospital Pathology, Yeouido St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, 10, 63-ro, Yeongdeungpo-gu, Seoul 07345, Korea;
- Hongjun Yoon
- AI Lab, Deepnoid, #1305 E&C Venture Dream Tower 2, 55, Digital-ro 33-Gil, Guro-gu, Seoul 06216, Korea;
- Yosep Chong
- Department of Hospital Pathology, Yeouido St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, 10, 63-ro, Yeongdeungpo-gu, Seoul 07345, Korea;
|
29
|
Shuai L, Yuanning L, Xiaodong Z, Guang H, Zukang W, Xinlong L, Chaoqun W, Jingwei C. Heterogeneous Iris One-to-One Certification with Universal Sensors Based on Quality Fuzzy Inference and Multi-Feature Fusion Lightweight Neural Network. SENSORS (BASEL, SWITZERLAND) 2020; 20:E1785. [PMID: 32210211 PMCID: PMC7146378 DOI: 10.3390/s20061785] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/12/2020] [Revised: 03/18/2020] [Accepted: 03/21/2020] [Indexed: 11/17/2022]
Abstract
Due to the unsteady morphology of heterogeneous irises generated by a variety of devices and environments, traditional statistical-learning or cognitive-learning methods built for a single iris source are not effective. Traditional iris recognition divides the whole process into several statistically guided steps, which cannot address the correlations between the various stages. The size and situational-classification constraints of existing iris datasets make it difficult to meet the requirements of learning under a single deep learning framework. Therefore, targeting a one-to-one iris certification scenario, this paper proposes a heterogeneous iris one-to-one certification method for universal sensors based on quality fuzzy inference and a multi-feature entropy fusion lightweight neural network. The method is divided into an evaluation module and a certification module. The evaluation module can be used by different devices to design a quality fuzzy inference system and an iris-quality knowledge-concept construction mechanism, transform human logical cognition concepts into digital concepts, and select appropriate concepts to judge iris quality according to different quality requirements and obtain a recognizable iris. The certification module is a lightweight neural network based on statistical learning and a multi-source feature fusion mechanism. The information entropy of the iris feature label is used to set entropy-based feature category labels and to design the certification module's functions, from which the certification result is obtained. As the requirements for the number and quality of irises change, the category labels in the certification module's functions are dynamically adjusted through a feedback learning mechanism. This paper uses iris data collected from three different sensors in the JLU (Jilin University) iris library. The experimental results show that, for lightweight multi-state irises, the abovementioned problems are ameliorated to a certain extent by this method.
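The "information entropy of the iris feature label" used above to derive category labels is the standard Shannon entropy of the label distribution: uniform labels carry no information, while an even split carries the most. A minimal sketch (function and variable names are illustrative, not the authors' code):

```python
from math import log2
from collections import Counter

def label_entropy(labels):
    """Shannon entropy (in bits) of a discrete label sequence:
    H = -sum(p_i * log2(p_i)) over the empirical label frequencies."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())
```

An entropy-based category label could then be assigned by thresholding this value, with the thresholds adjusted as the feedback mechanism changes the iris-quality requirements.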
Affiliation(s)
- Liu Shuai
- College of Computer Science and Technology, Jilin University, Changchun 130012, China; (L.S.); (L.Y.); (W.Z.)
- Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130012, China; (L.X.); (W.C.); (C.J.)
- Liu Yuanning
- College of Computer Science and Technology, Jilin University, Changchun 130012, China; (L.S.); (L.Y.); (W.Z.)
- Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130012, China; (L.X.); (W.C.); (C.J.)
- Zhu Xiaodong
- College of Computer Science and Technology, Jilin University, Changchun 130012, China; (L.S.); (L.Y.); (W.Z.)
- Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130012, China; (L.X.); (W.C.); (C.J.)
- Huo Guang
- College of Computer Science, Northeast Electric Power University, Jilin 132012, China;
- Wu Zukang
- College of Computer Science and Technology, Jilin University, Changchun 130012, China; (L.S.); (L.Y.); (W.Z.)
- Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130012, China; (L.X.); (W.C.); (C.J.)
- Li Xinlong
- Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130012, China; (L.X.); (W.C.); (C.J.)
- College of Software, Jilin University, Changchun 130012, China
- Wang Chaoqun
- Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130012, China; (L.X.); (W.C.); (C.J.)
- College of Software, Jilin University, Changchun 130012, China
- Cui Jingwei
- Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130012, China; (L.X.); (W.C.); (C.J.)
- College of Software, Jilin University, Changchun 130012, China
|