1
Hosseini MS, Bejnordi BE, Trinh VQH, Chan L, Hasan D, Li X, Yang S, Kim T, Zhang H, Wu T, Chinniah K, Maghsoudlou S, Zhang R, Zhu J, Khaki S, Buin A, Chaji F, Salehi A, Nguyen BN, Samaras D, Plataniotis KN. Computational pathology: A survey review and the way forward. J Pathol Inform 2024; 15:100357. [PMID: 38420608] [PMCID: PMC10900832] [DOI: 10.1016/j.jpi.2023.100357]
Abstract
Computational Pathology (CPath) is an interdisciplinary science that augments developments of computational approaches to analyze and model medical histopathology images. The main objective of CPath is to develop the infrastructure and workflows of digital diagnostics as an assistive CAD system for clinical pathology, facilitating transformational changes in the diagnosis and treatment of cancer that are mainly addressed by CPath tools. With ever-growing developments in deep learning and computer vision algorithms, and the ease of data flow from digital pathology, CPath is currently witnessing a paradigm shift. Despite the sheer volume of engineering and scientific work being introduced for cancer image analysis, there is still a considerable gap in adopting and integrating these algorithms into clinical practice. This raises a significant question regarding the direction and trends undertaken in CPath. In this article we provide a comprehensive review of more than 800 papers to address the challenges faced, from problem design all the way to application and implementation viewpoints. We have catalogued each paper into a model card by examining the key works and challenges faced, to lay out the current landscape in CPath. We hope this helps the community to locate relevant works and facilitates understanding of the field's future directions. In a nutshell, we view CPath developments as a cycle of stages that must be cohesively linked together to address the challenges associated with such a multidisciplinary science. We overview this cycle from the perspectives of data-centric, model-centric, and application-centric problems. We finally sketch the remaining challenges and provide directions for future technical developments and clinical integration of CPath. For updated information on this survey and access to the original model-card repository, please refer to GitHub. An updated version of this draft can also be found on arXiv.
Affiliation(s)
- Mahdi S Hosseini
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Vincent Quoc-Huy Trinh
- Institute for Research in Immunology and Cancer of the University of Montreal, Montreal, QC H3T 1J4, Canada
- Lyndon Chan
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Danial Hasan
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Xingwen Li
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Stephen Yang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Taehyo Kim
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Haochen Zhang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Theodore Wu
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Kajanan Chinniah
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Sina Maghsoudlou
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Ryan Zhang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Jiadai Zhu
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Samir Khaki
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Andrei Buin
- Huron Digital Pathology, St. Jacobs, ON N0B 2N0, Canada
- Fatemeh Chaji
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Ala Salehi
- Department of Electrical and Computer Engineering, University of New Brunswick, Fredericton, NB E3B 5A3, Canada
- Bich Ngoc Nguyen
- University of Montreal Hospital Center, Montreal, QC H2X 0C2, Canada
- Dimitris Samaras
- Department of Computer Science, Stony Brook University, Stony Brook, NY 11794, United States
- Konstantinos N Plataniotis
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
2
Zhang S, Yuan Z, Zhou X, Wang H, Chen B, Wang Y. VENet: Variational energy network for gland segmentation of pathological images and early gastric cancer diagnosis of whole slide images. Comput Methods Programs Biomed 2024; 250:108178. [PMID: 38652995] [DOI: 10.1016/j.cmpb.2024.108178]
Abstract
BACKGROUND AND OBJECTIVE Gland segmentation of pathological images is an essential but challenging step for adenocarcinoma diagnosis. Although deep learning methods have recently made tremendous progress in gland segmentation, they have not given satisfactory boundary and region segmentation results for adjacent glands. These glands usually differ greatly in glandular appearance, and the statistical distributions of the training and test sets in deep learning are inconsistent. These problems prevent networks from generalizing well to the test dataset, complicating gland segmentation and early cancer diagnosis. METHODS To address these problems, we propose a Variational Energy Network named VENet with a traditional variational energy Lv loss for gland segmentation of pathological images and early gastric cancer detection in whole slide images (WSIs). It effectively integrates a variational mathematical model with the data adaptability of deep learning methods to balance boundary and region segmentation. Furthermore, it can effectively segment and classify glands in large-size WSIs with reliable nucleus width and nucleus-to-cytoplasm ratio features. RESULTS VENet was evaluated on the 2015 MICCAI Gland Segmentation challenge (GlaS) dataset, the Colorectal Adenocarcinoma Glands (CRAG) dataset, and a self-collected Nanfang Hospital dataset. Compared with state-of-the-art methods, our method achieved excellent performance on GlaS Test A (object Dice 0.9562, object F1 0.9271, object Hausdorff distance 73.13), GlaS Test B (object Dice 0.9495, object F1 0.9560, object Hausdorff distance 59.63), and CRAG (object Dice 0.9508, object F1 0.9294, object Hausdorff distance 28.01). For the Nanfang Hospital dataset, our method achieved a kappa of 0.78, an accuracy of 0.90, a sensitivity of 0.98, and a specificity of 0.80 on the classification task over 69 test WSIs.
CONCLUSIONS The experimental results show that the proposed model accurately predicts boundaries and outperforms state-of-the-art methods. It can be applied to the early diagnosis of gastric cancer by detecting regions of high-grade gastric intraepithelial neoplasia in WSIs, which can assist pathologists in analyzing large WSIs and making accurate diagnostic decisions.
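The object-level Dice, F1 and Hausdorff figures quoted above are gland-instance extensions of standard overlap and boundary metrics. As a minimal generic illustration (not the paper's evaluation code), pixel-level Dice and a brute-force symmetric Hausdorff distance can be sketched with numpy:

```python
import numpy as np

def dice(pred, gt):
    """Pixel-level Dice: 2|A ∩ B| / (|A| + |B|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return 2.0 * inter / total if total else 1.0

def hausdorff(a, b):
    """Symmetric Hausdorff distance between point sets a (N, 2) and b (M, 2)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise distances
    return max(d.min(axis=1).max(), d.min(axis=0).max())

pred = np.zeros((4, 4), int); pred[1:3, 1:3] = 1   # 4-pixel prediction
gt = np.zeros((4, 4), int);   gt[1:4, 1:4] = 1     # 9-pixel ground truth
print(round(dice(pred, gt), 4))                    # 2*4/(4+9) ≈ 0.6154
```

Object-level variants apply these per matched gland instance and aggregate over instances, as in the GlaS challenge protocol.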
Affiliation(s)
- Shuchang Zhang
- Department of Mathematics, National University of Defense Technology, Changsha, China
- Ziyang Yuan
- Academy of Military Sciences of the People's Liberation Army, Beijing, China
- Xianchen Zhou
- Department of Mathematics, National University of Defense Technology, Changsha, China
- Hongxia Wang
- Department of Mathematics, National University of Defense Technology, Changsha, China
- Bo Chen
- Suzhou Research Center, Institute of Automation, Chinese Academy of Sciences, Suzhou, China
- Yadong Wang
- Department of Laboratory Pathology, Baiyun Branch, Nanfang Hospital, Southern Medical University, Guangzhou, China
3
Zhou C, Ye L, Peng H, Liu Z, Wang J, Ramírez-De-Arellano A. A Parallel Convolutional Network Based on Spiking Neural Systems. Int J Neural Syst 2024; 34:2450022. [PMID: 38487872] [DOI: 10.1142/s0129065724500229]
Abstract
Deep convolutional neural networks have shown advanced performance in accurately segmenting images. In this paper, an SNP-like convolutional neuron structure is introduced, abstracted from the nonlinear mechanism in nonlinear spiking neural P (NSNP) systems. Then, a U-shaped convolutional neural network named SNP-like parallel-convolutional network, or SPC-Net, is constructed for segmentation tasks. The dual-convolution concatenate (DCC) and dual-convolution addition (DCA) network blocks are designed for the encoder and decoder stages, respectively. The two blocks employ parallel convolutions with different kernel sizes to improve feature representation ability and make full use of spatial detail information, while different fusion strategies combine their features to achieve feature complementarity and augmentation. Furthermore, a dual-scale pooling (DSP) module in the bottleneck is designed to improve feature extraction capability: it extracts multi-scale contextual information and reduces information loss while extracting salient features. SPC-Net is applied to medical image segmentation tasks and compared with several recent segmentation methods on the GlaS and CRAG datasets. The proposed SPC-Net achieves a 90.77% Dice coefficient, an 83.76% IoU score, an 83.93% F1 score, an 86.33% object-level Dice coefficient, and a 135.60 object-level Hausdorff distance. The experimental results show that the proposed model can achieve good segmentation performance.
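The DCC and DCA blocks described above fuse parallel branches with different kernel sizes either by channel concatenation or by element-wise addition. A toy numpy sketch, with fixed averaging kernels standing in for the learned convolution weights, illustrates the two fusion strategies:

```python
import numpy as np

def conv2d_same(x, k):
    """Naive single-channel 2D convolution with zero 'same' padding (odd kernel)."""
    ph, pw = k.shape[0] // 2, k.shape[1] // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.empty_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = (xp[i:i + k.shape[0], j:j + k.shape[1]] * k).sum()
    return out

x = np.arange(16, dtype=float).reshape(4, 4)   # toy feature map
k3 = np.ones((3, 3)) / 9.0                     # 3x3 branch (stand-in for learned kernel)
k5 = np.ones((5, 5)) / 25.0                    # 5x5 branch
b3, b5 = conv2d_same(x, k3), conv2d_same(x, k5)
dcc = np.stack([b3, b5])   # concatenation-style fusion along a channel axis (DCC)
dca = b3 + b5              # addition-style fusion (DCA)
```

Concatenation preserves both branches for later layers to weigh, while addition fuses them at fixed cost; the actual network learns these kernels end to end.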
Affiliation(s)
- Chi Zhou
- School of Computer and Software Engineering, Xihua University, Chengdu 610039, P. R. China
- Lulin Ye
- School of Computer and Software Engineering, Xihua University, Chengdu 610039, P. R. China
- Hong Peng
- School of Computer and Software Engineering, Xihua University, Chengdu 610039, P. R. China
- Zhicai Liu
- School of Computer and Software Engineering, Xihua University, Chengdu 610039, P. R. China
- Jun Wang
- School of Electrical Engineering and Electronic Information, Xihua University, Chengdu 610039, P. R. China
- Antonio Ramírez-De-Arellano
- Research Group of Natural Computing, Department of Computer Science and Artificial Intelligence, University of Seville, Sevilla 41012, Spain
4
Sun M, Wang J, Gong Q, Huang W. Enhancing gland segmentation in colon histology images using an instance-aware diffusion model. Comput Biol Med 2023; 166:107527. [PMID: 37778210] [DOI: 10.1016/j.compbiomed.2023.107527]
Abstract
In pathological image analysis, assessment of gland morphology in histology images of the colon is essential to determine the grade of colon cancer. However, manual segmentation of glands is extremely challenging and there is a need to develop automatic methods for segmenting gland instances. Recently, owing to its powerful noise-to-image denoising pipeline, the diffusion model has become one of the hot spots in computer vision research and has been explored in the field of image segmentation. In this paper, we propose an instance segmentation method based on the diffusion model that can perform automatic gland instance segmentation. Firstly, we model the instance segmentation process for colon histology images as a denoising process based on a diffusion model. Secondly, to recover details lost during denoising, we use Instance Aware Filters and a multi-scale Mask Branch to construct a global mask instead of predicting only local masks. Thirdly, to improve the distinction between the object and the background, we apply Conditional Encoding to enhance the intermediate features with the original image encoding. To objectively validate the proposed method, we compared it with several state-of-the-art deep learning models on the 2015 MICCAI Gland Segmentation challenge (GlaS) dataset (165 images), the Colorectal Adenocarcinoma Glands (CRAG) dataset (213 images) and the RINGS dataset (1500 images). Our proposed method obtains significantly improved results on CRAG (Object F1 0.853 ± 0.054, Object Dice 0.906 ± 0.043), GlaS Test A (Object F1 0.941 ± 0.039, Object Dice 0.939 ± 0.060), GlaS Test B (Object F1 0.893 ± 0.073, Object Dice 0.889 ± 0.069), and the RINGS dataset (Precision 0.893 ± 0.096, Dice 0.904 ± 0.091). The experimental results show that our method significantly improves segmentation accuracy and demonstrate the efficacy of the approach.
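The denoising formulation above builds on the standard forward diffusion process, which progressively corrupts a clean segmentation mask toward Gaussian noise; the network is then trained to invert this corruption. A minimal numpy sketch of the forward step q(x_t | x_0), assuming the commonly used linear beta schedule (an assumption, not a detail taken from this paper):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 0.02, T)       # linear noise schedule (common default)
alpha_bar = np.cumprod(1.0 - betas)      # cumulative signal-retention factor

def q_sample(x0, t):
    """Forward diffusion q(x_t | x_0): blend the clean mask with Gaussian noise."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

mask = np.zeros((8, 8)); mask[2:6, 2:6] = 1.0   # toy binary gland mask
noisy = q_sample(mask, t=500)                   # heavily corrupted mid-schedule sample
```

Training pairs (noisy, t) with the known noise give the denoiser its supervision; at inference the model iterates from pure noise back to a mask.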
Affiliation(s)
- Mengxue Sun
- School of Information Science and Engineering, Shandong Normal University, Jinan, 250358, China
- Jiale Wang
- School of Information Science and Engineering, Shandong Normal University, Jinan, 250358, China
- Qingtao Gong
- Ulsan Ship and Ocean College, Ludong University, Yantai, 264025, China
- Wenhui Huang
- School of Information Science and Engineering, Shandong Normal University, Jinan, 250358, China
5
Das R, Bose S, Chowdhury RS, Maulik U. Dense Dilated Multi-Scale Supervised Attention-Guided Network for histopathology image segmentation. Comput Biol Med 2023; 163:107182. [PMID: 37379615] [DOI: 10.1016/j.compbiomed.2023.107182]
Abstract
Over the last couple of decades, the introduction and proliferation of whole-slide scanners have led to increasing interest in digital pathology research. Although manual analysis of histopathological images is still the gold standard, the process is often tedious and time-consuming. Furthermore, manual analysis also suffers from intra- and interobserver variability. Separating structures or grading morphological changes can be difficult due to the architectural variability of these images. Deep learning techniques have shown great potential in histopathology image segmentation, drastically reducing the time needed for downstream analysis tasks and helping to provide accurate diagnoses. However, few algorithms have seen clinical implementation. In this paper, we propose a new deep learning model, the Dense Dilated Multi-Scale Supervised Attention-Guided (D2MSA) Network, for histopathology image segmentation that makes use of deep supervision coupled with a hierarchical system of novel attention mechanisms. The proposed model surpasses state-of-the-art performance while using similar computational resources. The performance of the model has been evaluated on the tasks of gland segmentation and nuclei instance segmentation, both of which are clinically relevant for assessing the state and progression of malignancy. Here, we have used histopathology image datasets for three different types of cancer. We have also performed extensive ablation tests and hyperparameter tuning to ensure the validity and reproducibility of the model performance. The proposed model is available at www.github.com/shirshabose/D2MSA-Net.
Affiliation(s)
- Rangan Das
- Department of Computer Science Engineering, Jadavpur University, Kolkata, 700032, West Bengal, India
- Shirsha Bose
- Department of Informatics, Technical University of Munich, Munich, Bavaria 85748, Germany
- Ritesh Sur Chowdhury
- Department of Electronics and Telecommunication Engineering, Jadavpur University, Kolkata, 700032, West Bengal, India
- Ujjwal Maulik
- Department of Computer Science Engineering, Jadavpur University, Kolkata, 700032, West Bengal, India
6
Khazaee Fadafen M, Rezaee K. Ensemble-based multi-tissue classification approach of colorectal cancer histology images using a novel hybrid deep learning framework. Sci Rep 2023; 13:8823. [PMID: 37258631] [DOI: 10.1038/s41598-023-35431-x]
Abstract
Colorectal cancer (CRC) is the second leading cause of cancer death in the world, so digital pathology is essential for assessing prognosis. Due to the increasing resolution and quantity of whole slide images (WSIs), as well as the lack of annotated information, previous methodologies cannot be generalized as effective decision-making systems. Since deep learning (DL) methods can handle large-scale applications, they provide a viable alternative for histopathology image (HI) analysis. DL architectures alone, however, may not be sufficient to classify CRC tissues based on anatomical histopathology data. A dilated ResNet (dResNet) structure and an attention module are used to generate deep feature maps in order to classify multiple tissues in HIs, while neighborhood component analysis (NCA) overcomes the constraint of computational complexity. After feature selection, the data are fed into a deep support vector machine (SVM) based on an ensemble learning algorithm called DeepSVM. The CRC-5000 and NCT-CRC-HE-100K datasets were analyzed to validate and test the hybrid procedure. We demonstrate that the hybrid model achieves 98.75% and 99.76% accuracy on the respective CRC datasets. The results showed that, using only pathologists' labels, the approach could successfully classify unseen WSIs. Furthermore, the hybrid deep learning method outperforms state-of-the-art approaches in terms of computational efficiency and time. Using the proposed mechanism for tissue analysis, it will be possible to correctly predict CRC based on accurate pathology image classification.
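The pipeline above ends with an ensemble of base classifiers combined into one decision. Shown here purely as a generic sketch of one common ensemble combination rule (per-sample majority voting), not as the paper's DeepSVM algorithm:

```python
import numpy as np

def majority_vote(predictions):
    """Combine base-classifier labels (n_models, n_samples) by per-sample majority."""
    preds = np.asarray(predictions)
    n_classes = preds.max() + 1
    # bincount each sample's column of votes, then take the most frequent class
    counts = np.apply_along_axis(np.bincount, 0, preds, minlength=n_classes)
    return counts.argmax(axis=0)   # ties resolve to the lowest class index

votes = [[0, 1, 1],   # model 1's labels for three samples
         [0, 1, 0],   # model 2
         [1, 1, 0]]   # model 3
print(majority_vote(votes))   # [0 1 0]
```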
Affiliation(s)
- Masoud Khazaee Fadafen
- Department of Electrical Engineering, Technical and Vocational University (TVU), Tehran, Iran
- Khosro Rezaee
- Department of Biomedical Engineering, Meybod University, Meybod, Iran
7
Ikromjanov K, Bhattacharjee S, Sumon RI, Hwang YB, Rahman H, Lee MJ, Kim HC, Park E, Cho NH, Choi HK. Region Segmentation of Whole-Slide Images for Analyzing Histological Differentiation of Prostate Adenocarcinoma Using Ensemble EfficientNetB2 U-Net with Transfer Learning Mechanism. Cancers (Basel) 2023; 15:762. [PMID: 36765719] [PMCID: PMC9913745] [DOI: 10.3390/cancers15030762]
Abstract
Recent advances in computer-aided detection via deep learning (DL) now allow prostate cancer to be detected automatically and recognized with extremely high accuracy, much like other medical diagnoses and prognoses. However, researchers are still limited by the Gleason scoring system. The histopathological analysis involved in assigning the appropriate score is a rigorous, time-consuming manual process that is constrained by the quality of the material and the pathologist's level of expertise. In this research, we implemented a DL model using transfer learning on a set of histopathological images to segment cancerous and noncancerous areas in whole-slide images (WSIs). In this approach, the proposed Ensemble U-Net model was applied for the segmentation of stroma, cancerous, and benign areas. The WSI dataset of prostate cancer was collected from the Kaggle repository, which is publicly available online. A total of 1000 WSIs were used for region segmentation; from these, 8100 patch images were used for training and 900 for testing. The proposed model demonstrated an average dice coefficient (DC), intersection over union (IoU), and Hausdorff distance of 0.891, 0.811, and 15.9, respectively, on the test set with corresponding masks of patch images. Use of the proposed segmentation model improves the pathologist's ability to predict disease outcomes, thus enhancing treatment efficacy by isolating the cancerous regions in WSIs.
Affiliation(s)
- Kobiljon Ikromjanov
- Department of Digital Anti-Aging Healthcare, u-AHRC, Inje University, Gimhae 50834, Republic of Korea
- Subrata Bhattacharjee
- Department of Computer Engineering, u-AHRC, Inje University, Gimhae 50834, Republic of Korea
- Rashadul Islam Sumon
- Department of Digital Anti-Aging Healthcare, u-AHRC, Inje University, Gimhae 50834, Republic of Korea
- Yeong-Byn Hwang
- Department of Digital Anti-Aging Healthcare, u-AHRC, Inje University, Gimhae 50834, Republic of Korea
- Hafizur Rahman
- Department of Digital Anti-Aging Healthcare, u-AHRC, Inje University, Gimhae 50834, Republic of Korea
- Myung-Jae Lee
- JLK Artificial Intelligence R&D Center, Seoul 06141, Republic of Korea
- Hee-Cheol Kim
- Department of Digital Anti-Aging Healthcare, u-AHRC, Inje University, Gimhae 50834, Republic of Korea
- Eunhyang Park
- Department of Pathology, Yonsei University College of Medicine, Seoul 03722, Republic of Korea
- Nam-Hoon Cho
- Department of Pathology, Yonsei University College of Medicine, Seoul 03722, Republic of Korea
- Heung-Kook Choi
- Department of Computer Engineering, u-AHRC, Inje University, Gimhae 50834, Republic of Korea
- JLK Artificial Intelligence R&D Center, Seoul 06141, Republic of Korea
- Correspondence: Tel.: +82-10-6733-3437
8
Barmpoutis P, Waddingham W, Yuan J, Ross C, Kayhanian H, Stathaki T, Alexander DC, Jansen M. A digital pathology workflow for the segmentation and classification of gastric glands: Study of gastric atrophy and intestinal metaplasia cases. PLoS One 2022; 17:e0275232. [PMID: 36584163] [PMCID: PMC9803139] [DOI: 10.1371/journal.pone.0275232]
Abstract
Gastric cancer is one of the most frequent causes of cancer-related deaths worldwide. Gastric atrophy (GA) and gastric intestinal metaplasia (IM) of the mucosa of the stomach have been found to increase the risk of gastric cancer and are considered precancerous lesions. Therefore, the early detection of GA and IM may have a valuable role in histopathological risk assessment. However, GA and IM are difficult to confirm endoscopically and, following the Sydney protocol, their diagnosis depends on the analysis of glandular morphology and on the identification of at least one well-defined goblet cell in a set of hematoxylin and eosin (H&E)-stained biopsy samples. To this end, the precise segmentation and classification of glands from histological images play an important role in the diagnostic confirmation of GA and IM. In this paper, we propose an end-to-end digital pathology workflow for gastric gland segmentation and classification for the analysis of gastric tissues. The proposed GAGL-VTNet initially extracts both global and local features, combining multi-scale feature maps for the segmentation of glands, and subsequently adopts a vision transformer that exploits the visual dependencies of the segmented glands for their classification. For the analysis of gastric tissues, segmentation of the mucosa is performed through an unsupervised model combining energy minimization and a U-Net model. Features of the segmented glands and mucosa are then extracted and analyzed. To evaluate the efficiency of the proposed methodology we created the GAGL dataset, consisting of 85 WSIs collected from 20 patients. The results demonstrate significant differences in the extracted features between normal, GA and IM cases. The proposed approach achieves an object dice score of 0.908 for gland segmentation and 0.967 for mucosa segmentation, while for gland classification it achieves an F1 score of 0.94, showing great potential for the automated quantification and analysis of gastric biopsies.
Affiliation(s)
- Panagiotis Barmpoutis
- Department of Computer Science, Centre for Medical Image Computing, University College London, London, United Kingdom
- Department of Pathology, UCL Cancer Institute, University College London, London, United Kingdom
- William Waddingham
- Department of Pathology, UCL Cancer Institute, University College London, London, United Kingdom
- Jing Yuan
- Department of Electrical and Electronic Engineering, Imperial College London, London, United Kingdom
- Christopher Ross
- Department of Pathology, UCL Cancer Institute, University College London, London, United Kingdom
- Hamzeh Kayhanian
- Department of Pathology, UCL Cancer Institute, University College London, London, United Kingdom
- Tania Stathaki
- Department of Electrical and Electronic Engineering, Imperial College London, London, United Kingdom
- Daniel C. Alexander
- Department of Computer Science, Centre for Medical Image Computing, University College London, London, United Kingdom
- Marnix Jansen
- Department of Pathology, UCL Cancer Institute, University College London, London, United Kingdom
9
Nasir ES, Parvaiz A, Fraz MM. Nuclei and glands instance segmentation in histology images: a narrative review. Artif Intell Rev 2022. [DOI: 10.1007/s10462-022-10372-5]
10
Oner MU, Ng MY, Giron DM, Chen Xi CE, Yuan Xiang LA, Singh M, Yu W, Sung WK, Wong CF, Lee HK. An AI-assisted tool for efficient prostate cancer diagnosis in low-grade and low-volume cases. Patterns (N Y) 2022; 3:100642. [PMID: 36569545] [PMCID: PMC9768677] [DOI: 10.1016/j.patter.2022.100642]
Abstract
Pathologists diagnose prostate cancer by core needle biopsy. In low-grade and low-volume cases, they look for a few malignant glands out of hundreds within a core. They may miss a few malignant glands, resulting in repeat biopsies or missed therapeutic opportunities. This study developed a multi-resolution deep-learning pipeline to assist pathologists in detecting malignant glands in core needle biopsies of low-grade and low-volume cases. By analyzing a gland at multiple resolutions, our model exploited morphology and neighborhood information, which were crucial in prostate gland classification. We developed and tested our pipeline on the slides of a local cohort of 99 patients in Singapore. We also made the images publicly available, creating the first digital histopathology dataset of patients of Asian ancestry with prostatic carcinoma. Our multi-resolution classification model achieved an area under the receiver operating characteristic curve (AUROC) value of 0.992 (95% confidence interval [CI]: 0.985-0.997) in the external validation study, showing the generalizability of our multi-resolution approach.
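The AUROC reported above can be computed directly from classifier scores via the Mann-Whitney rank statistic: the fraction of (positive, negative) pairs the model ranks correctly. A compact generic numpy sketch (not the study's code):

```python
import numpy as np

def auroc(scores, labels):
    """AUROC via the Mann-Whitney formulation: fraction of correctly ranked
    (positive, negative) score pairs, counting ties as 0.5."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    diff = pos[:, None] - neg[None, :]           # all pairwise score differences
    return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / (pos.size * neg.size)

print(auroc([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0]))  # 1.0
```

The quadratic pairwise form is fine for small sets; production metrics libraries use a sort-based equivalent.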
Affiliation(s)
- Mustafa Umit Oner
- Bioinformatics Institute, Agency for Science, Technology and Research (A∗STAR), Singapore 138671, Singapore
- School of Computing, National University of Singapore, Singapore 117417, Singapore
- Department of Artificial Intelligence Engineering, Bahcesehir University, Istanbul 34353, Turkey
- Mei Ying Ng
- Bioinformatics Institute, Agency for Science, Technology and Research (A∗STAR), Singapore 138671, Singapore
- Danilo Medina Giron
- Department of Pathology, Tan Tock Seng Hospital, Singapore 308433, Singapore
- Cecilia Ee Chen Xi
- Bioinformatics Institute, Agency for Science, Technology and Research (A∗STAR), Singapore 138671, Singapore
- Louis Ang Yuan Xiang
- Bioinformatics Institute, Agency for Science, Technology and Research (A∗STAR), Singapore 138671, Singapore
- Malay Singh
- Bioinformatics Institute, Agency for Science, Technology and Research (A∗STAR), Singapore 138671, Singapore
- Weimiao Yu
- Bioinformatics Institute, Agency for Science, Technology and Research (A∗STAR), Singapore 138671, Singapore
- Institute of Molecular and Cell Biology, Agency for Science, Technology and Research (A∗STAR), Singapore 138673, Singapore
- Wing-Kin Sung
- School of Computing, National University of Singapore, Singapore 117417, Singapore
- Genome Institute of Singapore, Agency for Science, Technology and Research (A∗STAR), Singapore 138672, Singapore
- Chin Fong Wong
- Department of Pathology, Tan Tock Seng Hospital, Singapore 308433, Singapore
- Hwee Kuan Lee
- Bioinformatics Institute, Agency for Science, Technology and Research (A∗STAR), Singapore 138671, Singapore
- School of Computing, National University of Singapore, Singapore 117417, Singapore
- Singapore Eye Research Institute (SERI), Singapore 169856, Singapore
- Image and Pervasive Access Lab (IPAL), Singapore 138632, Singapore
- Rehabilitation Research Institute of Singapore, Singapore 308232, Singapore
- Singapore Institute for Clinical Sciences, Singapore 117609, Singapore
11
Gao E, Jiang H, Zhou Z, Yang C, Chen M, Zhu W, Shi F, Chen X, Zheng J, Bian Y, Xiang D. Automatic multi-tissue segmentation in pancreatic pathological images with selected multi-scale attention network. Comput Biol Med 2022; 151:106228. [PMID: 36306579] [DOI: 10.1016/j.compbiomed.2022.106228]
Abstract
The morphology of tissues in pathological images is used routinely by pathologists to assess the degree of malignancy of pancreatic ductal adenocarcinoma (PDAC). Automatic and accurate segmentation of tumor cells and their surrounding tissues is often a crucial step in obtaining reliable morphological statistics. Nonetheless, it remains a challenge due to the great variation in appearance and morphology. In this paper, a selected multi-scale attention network (SMANet) is proposed to segment tumor cells, blood vessels, nerves, islets and ducts in pancreatic pathological images. The selected multi-scale attention module is proposed to enhance effective information, supplement useful information and suppress redundant information at different scales from the encoder and decoder. It includes a selection unit (SU) module and a multi-scale attention (MA) module. The selection unit module can effectively filter features. The multi-scale attention module enhances effective information through spatial attention and channel attention, and combines different-level features to supplement useful information. This helps the network learn information from different receptive fields to improve the segmentation of tumor cells, blood vessels and nerves. An original-feature fusion unit is also proposed to supplement the original image information and reduce the under-segmentation of small tissues such as islets and ducts. The proposed method outperforms state-of-the-art deep learning algorithms on our PDAC pathological images and achieves competitive results on the GlaS challenge dataset. The mDice and mIoU reach 0.769 and 0.665 on our PDAC dataset.
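Spatial attention of the kind used in the MA module typically pools per-location channel statistics and turns them into a multiplicative gate over the feature map. A toy numpy sketch of this gating pattern, with a fixed sigmoid gate standing in for the small learned convolution (an illustration of the general mechanism, not SMANet's exact module):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention(feat):
    """Reweight each spatial location of a (C, H, W) feature map using pooled
    channel statistics; real modules learn a conv over these pooled maps."""
    avg = feat.mean(axis=0, keepdims=True)   # (1, H, W) average pool over channels
    mx = feat.max(axis=0, keepdims=True)     # (1, H, W) max pool over channels
    gate = sigmoid(avg + mx)                 # toy gate with values in (0, 1)
    return feat * gate                       # broadcast the gate over channels

feat = np.random.default_rng(1).standard_normal((8, 5, 5))
out = spatial_attention(feat)
```

Channel attention is the transpose of this idea: pool over spatial positions and gate each channel instead.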
Affiliation(s)
- Enting Gao
- School of Electronic and Information Engineering, Suzhou University of Science and Technology, Suzhou, China
- Hui Jiang
- Department of Pathology, Changhai Hospital, The Navy Military Medical University, Shanghai, China
- Zhibang Zhou
- School of Electronic and Information Engineering, Soochow University, Jiangsu 215006, China
- Changxing Yang
- School of Electronic and Information Engineering, Soochow University, Jiangsu 215006, China
- Muyang Chen
- School of Electronic and Information Engineering, Soochow University, Jiangsu 215006, China
- Weifang Zhu
- School of Electronic and Information Engineering, Soochow University, Jiangsu 215006, China
- Fei Shi
- School of Electronic and Information Engineering, Soochow University, Jiangsu 215006, China
- Xinjian Chen
- School of Electronic and Information Engineering, Soochow University, Jiangsu 215006, China
- Jian Zheng
- Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Jiangsu 215163, China
- Yun Bian
- Department of Radiology, Changhai Hospital, The Navy Military Medical University, Shanghai, China
- Dehui Xiang
- School of Electronic and Information Engineering, Soochow University, Jiangsu 215006, China.
12
Ben Hamida A, Devanne M, Weber J, Truntzer C, Derangère V, Ghiringhelli F, Forestier G, Wemmert C. Weakly Supervised Learning using Attention gates for colon cancer histopathological image segmentation. Artif Intell Med 2022; 133:102407. [PMID: 36328667 DOI: 10.1016/j.artmed.2022.102407] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/16/2022] [Revised: 09/07/2022] [Accepted: 09/15/2022] [Indexed: 02/08/2023]
Abstract
Recently, Artificial Intelligence, namely Deep Learning methods, has revolutionized a wide range of domains and applications. Digital Pathology, meanwhile, plays a major role in the diagnosis and prognosis of tumors. However, the characteristics of Whole Slide Images, namely their gigapixel size, high resolution and the shortage of richly labeled samples, have hindered the efficiency of classical Machine Learning methods; traditional methods also generalize poorly to different tasks and data contents. Given the success of Deep Learning in large-scale applications, we resort to such models for histopathological image segmentation tasks. First, we review and compare the classical UNet and Att-UNet models for colon cancer WSI segmentation in a sparsely annotated data scenario. Then, we introduce novel enhanced variants of the Att-UNet in which different schemes are proposed for the skip connections and the positions of spatial attention gates in the network. In fact, spatial attention gates assist the training process and enable the model to avoid irrelevant feature learning. Alternating the presence of such modules, namely in our Alter-AttUNet model, adds robustness and ensures better image segmentation results. To cope with the lack of richly annotated data in our AiCOLO colon cancer dataset, we suggest a multi-step training strategy that also deals with the sparse WSI annotations and unbalanced class issues. All proposed methods outperform state-of-the-art approaches, but Alter-AttUNet offers the best compromise between accurate results and a lightweight network, achieving 95.88% accuracy on our sparse AiCOLO colon cancer dataset. Finally, to evaluate and validate our proposed architectures we resort to publicly available WSI data: the NCT-CRC-HE-100K, the CRC-5000 and the Warwick colon cancer histopathological datasets, reaching respective accuracies of 99.65%, 99.73% and 79.03%. A comparison with state-of-the-art approaches is established to view and compare the key solutions for histopathological image segmentation.
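The spatial attention gates the abstract builds on admit a compact sketch: a gating signal from the decoder weights the encoder skip features so irrelevant regions are suppressed. The additive-gate form and all parameter shapes below are assumptions in the spirit of Attention U-Net, not the authors' exact layers.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_gate(skip, gating, w_x, w_g, w_psi):
    """Additive spatial attention gate: project skip and gating
    features, add, apply ReLU, then squeeze to a (H, W) map in (0,1)
    that rescales the skip connection.
    skip, gating: (C, H, W); w_x, w_g: (C_int, C); w_psi: (C_int,)."""
    theta = np.einsum('ic,chw->ihw', w_x, skip)     # 1x1 "conv" on skip
    phi = np.einsum('ic,chw->ihw', w_g, gating)     # 1x1 "conv" on gate
    f = np.maximum(theta + phi, 0.0)                # ReLU
    attn = sigmoid(np.einsum('i,ihw->hw', w_psi, f))
    return skip * attn[None, :, :]

rng = np.random.default_rng(0)
skip = rng.normal(size=(4, 8, 8))
gating = rng.normal(size=(4, 8, 8))
out = attention_gate(skip, gating,
                     rng.normal(size=(2, 4)),
                     rng.normal(size=(2, 4)),
                     rng.normal(size=2))
print(out.shape)  # (4, 8, 8)
```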
Affiliation(s)
- M Devanne
- IRIMAS, University of Haute-Alsace, France
- J Weber
- IRIMAS, University of Haute-Alsace, France
- C Truntzer
- Platform of Transform in Biological Oncology, Dijon, France
- V Derangère
- Platform of Transform in Biological Oncology, Dijon, France
- F Ghiringhelli
- Platform of Transform in Biological Oncology, Dijon, France
- C Wemmert
- ICube, University of Strasbourg, France
13
Zhou S, Nie D, Adeli E, Wei Q, Ren X, Liu X, Zhu E, Yin J, Wang Q, Shen D. Semantic instance segmentation with discriminative deep supervision for medical images. Med Image Anal 2022; 82:102626. [DOI: 10.1016/j.media.2022.102626] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2021] [Revised: 08/21/2022] [Accepted: 09/10/2022] [Indexed: 10/31/2022]
14
A Novel Method Based on GAN Using a Segmentation Module for Oligodendroglioma Pathological Image Generation. Sensors (Basel) 2022; 22:3960. [PMID: 35632368 PMCID: PMC9144585 DOI: 10.3390/s22103960] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/24/2022] [Revised: 04/22/2022] [Accepted: 05/20/2022] [Indexed: 02/05/2023]
Abstract
Digital pathology analysis using deep learning has been the subject of several studies. As with other medical data, pathological data are not easily obtained. Because deep learning-based image analysis requires large amounts of data, augmentation techniques are used to increase the size of pathological datasets. This study proposes a novel method for synthesizing brain tumor pathology data using a generative model. For image synthesis, we used embedding features extracted from a segmentation module in a general generative model. We also introduce a simple solution for training a segmentation model in an environment in which masked labels for the training dataset are not supplied. In our experiments, the proposed method made only modest progress on quantitative metrics, but showed improved results in the confusion rate measured with more than 70 subjects and in the quality of the visual output.
15
Rastogi P, Khanna K, Singh V. Gland segmentation in colorectal cancer histopathological images using U-net inspired convolutional network. Neural Comput Appl 2022. [DOI: 10.1007/s00521-021-06687-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/01/2023]
16
Zhang Z, Tian C, Bai HX, Jiao Z, Tian X. Discriminative Error Prediction Network for Semi-supervised Colon Gland Segmentation. Med Image Anal 2022; 79:102458. [DOI: 10.1016/j.media.2022.102458] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2021] [Revised: 04/10/2022] [Accepted: 04/11/2022] [Indexed: 10/18/2022]
17
Deep Learning on Histopathological Images for Colorectal Cancer Diagnosis: A Systematic Review. Diagnostics (Basel) 2022; 12:837. [PMID: 35453885 PMCID: PMC9028395 DOI: 10.3390/diagnostics12040837] [Citation(s) in RCA: 17] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2022] [Revised: 03/22/2022] [Accepted: 03/25/2022] [Indexed: 02/04/2023] Open
Abstract
Colorectal cancer (CRC) is the second most common cancer in women and the third most common in men, with an increasing incidence. Pathology diagnosis complemented with prognostic and predictive biomarker information is the first step for personalized treatment. The increased diagnostic load in the pathology laboratory, combined with the reported intra- and inter-observer variability in the assessment of biomarkers, has prompted the quest for reliable machine-based methods to be incorporated into routine practice. Recently, Artificial Intelligence (AI) has made significant progress in the medical field, showing potential for clinical applications. Herein, we aim to systematically review the current research on AI in CRC image analysis. In histopathology, algorithms based on Deep Learning (DL) have the potential to assist in diagnosis, predict clinically relevant molecular phenotypes and microsatellite instability, identify histological features related to prognosis and correlated with metastasis, and assess the specific components of the tumor microenvironment.
18
A promising deep learning-assistive algorithm for histopathological screening of colorectal cancer. Sci Rep 2022; 12:2222. [PMID: 35140318 PMCID: PMC8828883 DOI: 10.1038/s41598-022-06264-x] [Citation(s) in RCA: 29] [Impact Index Per Article: 14.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2021] [Accepted: 01/24/2022] [Indexed: 02/06/2023] Open
Abstract
Colorectal cancer is one of the most common cancers worldwide, accounting for an estimated 1.8 million incident cases annually. With the increasing number of colonoscopies being performed, colorectal biopsies make up a large proportion of any histopathology laboratory's workload. We trained and validated a unique artificial intelligence (AI) deep learning model as an assistive tool to screen for colonic malignancies in colorectal specimens, in order to improve cancer detection and classification and enable busy pathologists to focus on higher-order decision-making tasks. The study cohort consists of Whole Slide Images (WSIs) obtained from 294 colorectal specimens. Qritive's composite algorithm comprises both a deep learning model based on a Faster Region-Based Convolutional Neural Network (Faster-RCNN) architecture for instance segmentation, with a ResNet-101 feature-extraction backbone providing glandular segmentation, and a classical machine learning classifier. The initial training used pathologists' annotations on a cohort of 66,191 image tiles extracted from 39 WSIs. A subsequent classical machine learning-based slide classifier sorted the WSIs into 'low risk' (benign, inflammation) and 'high risk' (dysplasia, malignancy) categories. We further trained the composite AI model on a larger cohort of 105 resection WSIs and then validated our findings on a cohort of 150 biopsy WSIs against the classifications of two independently blinded pathologists. We evaluated the area under the receiver-operator characteristic curve (AUC) and other performance metrics. The AI model achieved an AUC of 0.917 in the validation cohort, with excellent sensitivity (97.4%) in detecting the high-risk features of dysplasia and malignancy. We demonstrate a unique composite AI model incorporating both a glandular segmentation deep learning model and a classical machine learning classifier, with excellent sensitivity in picking up high-risk colorectal features. As such, AI can play a role as a potential screening tool, assisting busy pathologists by outlining dysplastic and malignant glands.
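The two-stage design above (tile-level deep detection followed by a slide-level risk classifier) can be illustrated with a toy aggregation rule; the `slide_risk` helper and both thresholds are hypothetical, not Qritive's classifier.

```python
import numpy as np

def slide_risk(tile_scores, score_thresh=0.5, high_risk_fraction=0.01):
    """Toy slide-level triage: flag a slide as 'high risk' when the
    fraction of tiles the detector scores above `score_thresh`
    exceeds a small tolerance. All thresholds are illustrative."""
    flagged = np.mean(np.asarray(tile_scores) > score_thresh)
    return "high risk" if flagged > high_risk_fraction else "low risk"

# one strongly suspicious tile is enough to escalate the slide
print(slide_risk([0.1, 0.2, 0.9, 0.05]))  # high risk
print(slide_risk([0.1, 0.2, 0.3]))        # low risk
```

In practice the slide classifier would consume richer tile features than a single score, but the triage logic has this shape.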
19
Xie Y, Zhang J, Liao Z, Verjans J, Shen C, Xia Y. Intra- and Inter-Pair Consistency for Semi-Supervised Gland Segmentation. IEEE Trans Image Process 2022; 31:894-905. [PMID: 34951847 DOI: 10.1109/tip.2021.3136716] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Accurate gland segmentation in histology tissue images is a critical but challenging task. Although deep models have demonstrated superior performance in medical image segmentation, they commonly require a large amount of annotated data, which are hard to obtain due to the extensive labor costs and expertise required. In this paper, we propose an intra- and inter-pair consistency-based semi-supervised (I2CS) model that can be trained on both labeled and unlabeled histology images for gland segmentation. Considering that each image contains glands and hence different images could potentially share consistent semantics in the feature space, we introduce a novel intra- and inter-pair consistency module to explore such consistency for learning with unlabeled data. It first characterizes the pixel-level relation between a pair of images in the feature space to create an attention map that highlights the regions with the same semantics but on different images. Then, it imposes a consistency constraint on the attention maps obtained from multiple image pairs, and thus filters low-confidence attention regions to generate refined attention maps that are then merged with the original features to improve their representation ability. In addition, we design an object-level loss to address the issues caused by touching glands. We evaluated our model against several recent gland segmentation methods and three typical semi-supervised methods on the GlaS and CRAG datasets. Our results not only demonstrate the effectiveness of the proposed intra- and inter-pair consistency module and the Obj-Dice loss, but also indicate that the proposed I2CS model achieves state-of-the-art gland segmentation performance on both benchmarks.
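The pixel-level relation between an image pair can be sketched as a cosine-affinity map: every location in one feature map is compared against every location in the other, and the best match gives a per-pixel attention score. This is a simplified stand-in for the paper's pairwise attention, with the max-reduction an assumption.

```python
import numpy as np

def pairwise_attention(feat_a, feat_b):
    """Cosine similarity between every spatial location of A and
    every location of B, max-reduced to a (H, W) map on A that
    highlights regions finding a semantically similar match in B."""
    C, H, W = feat_a.shape
    a = feat_a.reshape(C, -1)
    b = feat_b.reshape(C, -1)
    a = a / (np.linalg.norm(a, axis=0, keepdims=True) + 1e-8)
    b = b / (np.linalg.norm(b, axis=0, keepdims=True) + 1e-8)
    sim = a.T @ b                      # (H*W, H*W) cosine similarities
    return sim.max(axis=1).reshape(H, W)

fa = np.random.randn(8, 4, 4)
attn = pairwise_attention(fa, fa)      # self-pair: every pixel matches itself
print(attn.shape)  # (4, 4)
```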
20
Wang H, Xian M, Vakanski A. TA-Net: Topology-Aware Network for Gland Segmentation. IEEE Winter Conference on Applications of Computer Vision (WACV) 2022; 2022:3241-3249. [PMID: 35509894 PMCID: PMC9063467 DOI: 10.1109/wacv51458.2022.00330] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Gland segmentation is a critical step to quantitatively assess the morphology of glands in histopathology image analysis. However, it is challenging to separate densely clustered glands accurately. Existing deep learning-based approaches attempted to use contour-based techniques to alleviate this issue but only achieved limited success. To address this challenge, we propose a novel topology-aware network (TA-Net) to accurately separate densely clustered and severely deformed glands. The proposed TA-Net has a multitask learning architecture and enhances the generalization of gland segmentation by learning shared representation from two tasks: instance segmentation and gland topology estimation. The proposed topology loss computes gland topology using gland skeletons and markers. It drives the network to generate segmentation results that comply with the true gland topology. We validate the proposed approach on the GlaS and CRAG datasets using three quantitative metrics, F1-score, object-level Dice coefficient, and object-level Hausdorff distance. Extensive experiments demonstrate that TA-Net achieves state-of-the-art performance on the two datasets. TA-Net outperforms other approaches in the presence of densely clustered glands.
21
Li H, Li J, Kang Y, Wang C, Liu F, Hui W, Bo Q, Cui L, Feng J, Yang L. A Novel Encoding and Decoding Calibration Guiding Pathway for Pathological Image Analysis. IEEE/ACM Trans Comput Biol Bioinform 2022; 19:267-274. [PMID: 32915745 DOI: 10.1109/tcbb.2020.3023467] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Diagnostic pathology is the foundation and gold standard for identifying carcinomas, and the accurate quantification of pathological images can provide objective clues for pathologists to make a more convincing diagnosis. Recently, the encoder-decoder architectures (EDAs) of convolutional neural networks (CNNs) have been widely used in the analysis of pathological images. Despite the rapid innovation of EDAs, we have conducted extensive experiments on a variety of commonly used EDAs and found that they cannot handle the interference of complex backgrounds in pathological images, leaving the architectures unable to focus on the regions of interest (RoIs) and thus making the quantitative results unreliable. Therefore, we propose a pathway named GLobal Bank (GLB) to guide the encoder and the decoder to extract more features from RoIs rather than the complex background. Extensive experiments show that architectures remoulded with GLB achieve significant performance improvements, and the quantitative results are more accurate.
22
Elameer AS, Jaber MM, Abd SK. Radiography image analysis using cat swarm optimized deep belief networks. JOURNAL OF INTELLIGENT SYSTEMS 2021; 31:40-54. [DOI: 10.1515/jisys-2021-0172] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 09/02/2023] Open
Abstract
Radiography images are widely utilized in the health sector to assess a patient's health condition. Noise and irrelevant region information reduce overall disease detection accuracy and increase computational complexity. Therefore, in this study, a statistical Kolmogorov–Smirnov test is integrated with the wavelet transform to address the de-noising issues. A cat swarm-optimized deep belief network is then applied to extract features from the affected region. The optimized deep learning model reduces feature training cost and time and improves overall disease detection accuracy. The network learning process is enhanced with the AdaDelta learning process, which replaces the learning-rate parameter with a delta value; this minimizes the error rate while recognizing the disease. The efficiency of the system was evaluated using an image retrieval in medical applications dataset. This process helps to detect various diseases in breast, lung, and pediatric studies.
Collapse
Affiliation(s)
- Amer S. Elameer
- Biomedical Informatics College, University of Information Technology and Communications (UOITC), Baghdad, Iraq
- Mustafa Musa Jaber
- Department of Computer Science, Dijlah University College, Baghdad, 00964, Iraq
- Department of Computer Science, Al-Turath University College, Baghdad, Iraq
- Sura Khalil Abd
- Department of Computer Science, Dijlah University College, Baghdad, 00964, Iraq
23
Lyu T, Yang G, Zhao X, Shu H, Luo L, Chen D, Xiong J, Yang J, Li S, Coatrieux JL, Chen Y. Dissected aorta segmentation using convolutional neural networks. Comput Methods Programs Biomed 2021; 211:106417. [PMID: 34587564 DOI: 10.1016/j.cmpb.2021.106417] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/18/2021] [Accepted: 09/12/2021] [Indexed: 06/13/2023]
Abstract
BACKGROUND AND OBJECTIVE Aortic dissection is a severe cardiovascular pathology in which an injury of the intimal layer of the aorta allows blood to flow into the aortic wall, forcing the wall layers apart. Such a situation presents a high mortality rate and requires an in-depth understanding of the 3-D morphology of the dissected aorta to plan the right treatment. An accurate automatic segmentation algorithm is therefore needed. METHOD In this paper, we propose a deep-learning-based algorithm to segment the dissected aorta on computed tomography angiography (CTA) images. The algorithm consists of two steps. First, a 3-D convolutional neural network (CNN) divides the 3-D volume into two anatomical portions. Second, two 2-D CNNs based on the pyramid scene parsing network (PSPNet) segment each portion separately. An edge extraction branch was added to the 2-D model to achieve higher segmentation accuracy in the intimal flap area. RESULTS The experiments conducted and the comparisons made show that the proposed solution performs well, with an average Dice index over 92%. The combination of 3-D and 2-D models improves aorta segmentation accuracy compared to 3-D-only models and segmentation robustness compared to 2-D-only models. The edge extraction branch improves the Dice index near aorta boundaries from 73.41% to 81.39%. CONCLUSIONS The proposed algorithm performs satisfactorily in capturing the aorta structure while avoiding false positives on the intimal flaps.
Affiliation(s)
- Tianling Lyu
- Laboratory of Imaging Science and Technology, Southeast University, Nanjing, China
- Guanyu Yang
- Laboratory of Imaging Science and Technology, Southeast University, Nanjing, China
- Xingran Zhao
- Laboratory of Imaging Science and Technology, Southeast University, Nanjing, China
- Huazhong Shu
- Laboratory of Imaging Science and Technology, Southeast University, Nanjing, China
- Limin Luo
- Laboratory of Imaging Science and Technology, Southeast University, Nanjing, China
- Duanduan Chen
- Department of Biomedical Engineering, Beijing Institute of Technology, Beijing, China
- Jian Yang
- School of Optoelectronics, Beijing Institute of Technology, Beijing, China
- Shuo Li
- Digital Imaging Group of London, London, Canada
- Yang Chen
- Laboratory of Imaging Science and Technology, Southeast University, Nanjing, China; School of Cyber Science and Engineering, Southeast University, Nanjing, China; Key Laboratory of Computer Network and Information Integration (Southeast University), Ministry of Education, Nanjing, China.
24
Shan D, Zheng J, Klimowicz A, Panzenbeck M, Liu Z, Feng D. Deep learning for discovering pathological continuum of crypts and evaluating therapeutic effects: An implication for in vivo preclinical study. PLoS One 2021; 16:e0252429. [PMID: 34125849 PMCID: PMC8202954 DOI: 10.1371/journal.pone.0252429] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/06/2020] [Accepted: 05/16/2021] [Indexed: 11/21/2022] Open
Abstract
Applying deep learning to the field of preclinical in vivo studies is a new and exciting prospect with the potential to unlock decades' worth of underutilized data. As a proof of concept, we performed a feasibility study on a colitis model treated with Sulfasalazine, a drug used in therapeutic care of inflammatory bowel disease. We aimed to evaluate the colonic mucosa improvement associated with the recovery response of the crypts, a complex histologic structure reflecting tissue homeostasis and repair in response to inflammation. Our approach requires robust image segmentation of objects of interest from whole slide images, a composite low-dimensional representation of the typical or novel morphological variants of the segmented objects, and exploration of image features of significance towards biology and treatment efficacy. Both interpretable features (e.g., counts, area, distance and angle) and statistical texture features calculated using Gray Level Co-Occurrence Matrices (GLCMs) are shown to be significant in the analysis. Ultimately, this analytic framework of supervised image segmentation, unsupervised learning, and feature analysis can be generally applied to preclinical data. We hope our report will inspire more efforts to utilize deep learning in preclinical in vivo studies and ultimately make the field more innovative and efficient.
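The GLCM texture statistics mentioned here are standard and easy to reproduce. A minimal sketch, using a single pixel offset and a few Haralick-style features on a pre-quantized image:

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Gray-Level Co-occurrence Matrix for a single (dx, dy) offset,
    normalised to a joint probability table. `img` holds integer
    gray levels in [0, levels)."""
    mat = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            mat[img[y, x], img[y + dy, x + dx]] += 1
    return mat / mat.sum()

def texture_features(p):
    """Classic Haralick-style statistics from a normalised GLCM."""
    i, j = np.indices(p.shape)
    return {
        "contrast": np.sum(p * (i - j) ** 2),
        "energy": np.sum(p ** 2),
        "homogeneity": np.sum(p / (1.0 + np.abs(i - j))),
    }

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
feats = texture_features(glcm(img, levels=4))
print(round(feats["contrast"], 3))  # 0.583
```

Libraries such as scikit-image provide optimized equivalents; the loop form above just makes the co-occurrence counting explicit.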
Affiliation(s)
- Dechao Shan
- Global Computational Biology and Digital Sciences, Boehringer Ingelheim Pharmaceuticals, Ridgefield, Connecticut, United States of America
- Jie Zheng
- Immunology and Respiratory Disease Research, Boehringer Ingelheim Pharmaceuticals, Ridgefield, Connecticut, United States of America
- Alexander Klimowicz
- Immunology and Respiratory Disease Research, Boehringer Ingelheim Pharmaceuticals, Ridgefield, Connecticut, United States of America
- Mark Panzenbeck
- Immunology and Respiratory Disease Research, Boehringer Ingelheim Pharmaceuticals, Ridgefield, Connecticut, United States of America
- Zheng Liu
- Global Computational Biology and Digital Sciences, Boehringer Ingelheim Pharmaceuticals, Ridgefield, Connecticut, United States of America
- Di Feng
- Global Computational Biology and Digital Sciences, Boehringer Ingelheim Pharmaceuticals, Ridgefield, Connecticut, United States of America
25
Cao B, Zhang KC, Wei B, Chen L. Status quo and future prospects of artificial neural network from the perspective of gastroenterologists. World J Gastroenterol 2021; 27:2681-2709. [PMID: 34135549 PMCID: PMC8173384 DOI: 10.3748/wjg.v27.i21.2681] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/13/2021] [Revised: 03/29/2021] [Accepted: 04/22/2021] [Indexed: 02/06/2023] Open
Abstract
Artificial neural networks (ANNs) are one of the primary types of artificial intelligence and have been rapidly developed and used in many fields. In recent years, there has been a sharp increase in research concerning ANNs in gastrointestinal (GI) diseases. This state-of-the-art technique exhibits excellent performance in diagnosis, prognostic prediction, and treatment. Competitions between ANNs and GI experts suggest that efficiency and accuracy might be compatible by virtue of technical advancements. However, the shortcomings of ANNs are not negligible and may induce alterations in many aspects of medical practice. In this review, we introduce basic knowledge about ANNs and summarize the current achievements of ANNs in GI diseases from the perspective of gastroenterologists. Existing limitations and future directions are also proposed to optimize ANNs' clinical potential. In consideration of barriers to interdisciplinary knowledge, sophisticated concepts are discussed using plain words and metaphors to make this review more easily understood by medical practitioners and the general public.
Affiliation(s)
- Bo Cao
- Department of General Surgery & Institute of General Surgery, Chinese People’s Liberation Army General Hospital, Beijing 100853, China
- Ke-Cheng Zhang
- Department of General Surgery & Institute of General Surgery, Chinese People’s Liberation Army General Hospital, Beijing 100853, China
- Bo Wei
- Department of General Surgery & Institute of General Surgery, Chinese People’s Liberation Army General Hospital, Beijing 100853, China
- Lin Chen
- Department of General Surgery & Institute of General Surgery, Chinese People’s Liberation Army General Hospital, Beijing 100853, China
26
A deep cascade of ensemble of dual domain networks with gradient-based T1 assistance and perceptual refinement for fast MRI reconstruction. Comput Med Imaging Graph 2021; 91:101942. [PMID: 34087612 DOI: 10.1016/j.compmedimag.2021.101942] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2020] [Revised: 05/03/2021] [Accepted: 05/14/2021] [Indexed: 11/23/2022]
Abstract
Deep learning networks have shown promising results in fast magnetic resonance imaging (MRI) reconstruction. In our work, we develop deep networks to further improve the quantitative and the perceptual quality of reconstruction. To begin with, we propose ReconSynergyNet (RSN), a network that combines the complementary benefits of operating independently on both the image and the Fourier domain. For a single-coil acquisition, we introduce deep cascade RSN (DC-RSN), a cascade of RSN blocks interleaved with data fidelity (DF) units. Second, we improve the structure recovery of DC-RSN for T2-weighted imaging (T2WI) through the assistance of T1-weighted imaging (T1WI), a sequence with short acquisition time. T1 assistance is provided to DC-RSN through a gradient of log feature (GOLF) fusion. Furthermore, we propose a perceptual refinement network (PRN) to refine the reconstructions for better visual information fidelity (VIF), a metric highly correlated with radiologists' opinion of image quality. Lastly, for multi-coil acquisition, we propose variable splitting RSN (VS-RSN), a deep cascade of blocks, each block containing an RSN, a multi-coil DF unit, and a weighted average module. We extensively validate our models DC-RSN and VS-RSN for single-coil and multi-coil acquisitions and report state-of-the-art performance. We obtain an SSIM of 0.768, 0.923, and 0.878 for knee single-coil-4x, multi-coil-4x, and multi-coil-8x in fastMRI, respectively. We also conduct experiments to demonstrate the efficacy of GOLF-based T1 assistance and PRN.
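The data fidelity (DF) units interleaved between network blocks admit a compact sketch: re-impose the actually measured k-space samples on the current image estimate. The hard-replacement form below is an assumption for illustration (a DF unit may instead blend predicted and measured samples with a learned weight).

```python
import numpy as np

def data_fidelity(recon, measured_kspace, mask):
    """Hard data-fidelity step for cascaded MRI reconstruction:
    transform the current estimate to k-space, overwrite the
    sampled locations with the measured values, and transform back."""
    k = np.fft.fft2(recon)
    k = np.where(mask, measured_kspace, k)   # keep measured lines
    return np.fft.ifft2(k)

rng = np.random.default_rng(1)
truth = rng.normal(size=(8, 8))
full_k = np.fft.fft2(truth)
mask = rng.random((8, 8)) < 0.4              # undersampling pattern
measured = np.where(mask, full_k, 0)

zero_filled = np.fft.ifft2(measured).real    # naive reconstruction
corrected = data_fidelity(zero_filled, measured, mask)
# measured k-space entries are exactly restored after the DF step
print(np.allclose(np.fft.fft2(corrected)[mask], full_k[mask]))  # True
```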
27
Chen Z, Chen Z, Liu J, Zheng Q, Zhu Y, Zuo Y, Wang Z, Guan X, Wang Y, Li Y. Weakly Supervised Histopathology Image Segmentation With Sparse Point Annotations. IEEE J Biomed Health Inform 2021; 25:1673-1685. [PMID: 32931437 DOI: 10.1109/jbhi.2020.3024262] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Digital histopathology image segmentation can facilitate computer-assisted cancer diagnostics. Given the difficulty of obtaining manual annotations, weak supervision is more suitable for the task than full supervision is. However, most weakly supervised models are not ideal for handling severe intra-class heterogeneity and inter-class homogeneity in histopathology images. Therefore, we propose a novel end-to-end weakly supervised learning framework named WESUP. With only sparse point annotations, it performs accurate segmentation and exhibits good generalizability. The training phase comprises two major parts, hierarchical feature representation and deep dynamic label propagation. The former uses superpixels to capture local details and global context from the convolutional feature maps obtained via transfer learning. The latter recognizes the manifold structure of the hierarchical features and identifies potential targets with the sparse annotations. Moreover, these two parts are trained jointly to improve the performance of the whole framework. To further boost test performance, pixel-wise inference is adopted for finer prediction. As demonstrated by experimental results, WESUP is able to largely resolve the confusion between histological foreground and background. It outperforms several state-of-the-art weakly supervised methods on a variety of histopathology datasets with minimal annotation efforts. Trained by very sparse point annotations, WESUP can even beat an advanced fully supervised segmentation network.
28
Salvi M, Bosco M, Molinaro L, Gambella A, Papotti M, Acharya UR, Molinari F. A hybrid deep learning approach for gland segmentation in prostate histopathological images. Artif Intell Med 2021; 115:102076. [PMID: 34001325 DOI: 10.1016/j.artmed.2021.102076] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2020] [Revised: 04/08/2021] [Accepted: 04/10/2021] [Indexed: 11/29/2022]
Abstract
BACKGROUND In digital pathology, the morphology and architecture of prostate glands are routinely used by pathologists to evaluate the presence of cancer tissue. Manual annotations are operator-dependent, error-prone and time-consuming. Automated segmentation of prostate glands can also be very challenging due to large appearance variation and serious degeneration of these histological structures. METHOD A new image segmentation method, called RINGS (Rapid IdentificatioN of Glandular Structures), is presented to segment prostate glands in histopathological images. We designed a novel gland segmentation strategy using a multi-channel algorithm that exploits and fuses both traditional and deep learning techniques. Specifically, the proposed approach employs a hybrid segmentation strategy based on stroma detection to accurately detect and delineate the prostate gland contours. RESULTS Automated results are compared with manual annotations and seven state-of-the-art techniques designed for gland segmentation. Being based on stroma segmentation, no performance degradation is observed when segmenting healthy or pathological structures. Our method is able to delineate the prostate glands of an unseen histopathological image with a Dice score of 90.16% and outperforms all the compared state-of-the-art methods. CONCLUSIONS To the best of our knowledge, the RINGS algorithm is the first fully automated method capable of maintaining high sensitivity even in the presence of severe glandular degeneration. The proposed method will help to detect prostate glands accurately and assist pathologists in making accurate diagnosis and treatment decisions. The developed model can be used to support prostate cancer diagnosis in polyclinics and community care centres.
Affiliation(s)
- Massimo Salvi
- Politecnico di Torino, PoliTo(BIO)Med Lab, Biolab, Department of Electronics and Telecommunications, Corso Duca degli Abruzzi 24, Turin, 10129, Italy.
- Martino Bosco
- San Lazzaro Hospital, Department of Pathology, Via Petrino Belli 26, Alba, 12051, Italy
- Luca Molinaro
- A.O.U. Città della Salute e della Scienza Hospital, Division of Pathology, Corso Bramante 88, Turin, 10126, Italy
- Alessandro Gambella
- A.O.U. Città della Salute e della Scienza Hospital, Division of Pathology, Corso Bramante 88, Turin, 10126, Italy
- Mauro Papotti
- University of Turin, Division of Pathology, Department of Oncology, Via Santena 5, Turin, 10126, Italy
- U Rajendra Acharya
- Department of Electronics and Computer Engineering, Ngee Ann Polytechnic, Singapore; Department of Biomedical Engineering, School of Science and Technology, SUSS University, Clementi, 599491, Singapore; Department of Bioinformatics and Medical Engineering, Asia University, Taiwan
- Filippo Molinari
- Politecnico di Torino, PoliTo(BIO)Med Lab, Biolab, Department of Electronics and Telecommunications, Corso Duca degli Abruzzi 24, Turin, 10129, Italy
29
Wen Z, Feng R, Liu J, Li Y, Ying S. GCSBA-Net: Gabor-Based and Cascade Squeeze Bi-Attention Network for Gland Segmentation. IEEE J Biomed Health Inform 2021; 25:1185-1196. [PMID: 32780703 DOI: 10.1109/jbhi.2020.3015844] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Colorectal cancer is the second and third most common cancer in women and men, respectively. Pathological diagnosis is the "gold standard" for tumor diagnosis, and accurate segmentation of glands from tissue images is a crucial step in assisting pathologists. Typical methods for gland segmentation form a dense image representation, ignoring texture and multi-scale attention information. We therefore utilize a Gabor-based module to extract texture information at different scales and orientations in histopathology images. This paper also designs a Cascade Squeeze Bi-Attention (CSBA) module. Specifically, we add an Atrous Cascade Spatial Pyramid (ACSP), a Squeeze Position Attention (SPA) module and a Squeeze Channel Attention (SCA) module to model semantic correlation and maintain multi-level aggregation on the spatial pyramid with different dilations. In addition, to address imbalanced data distribution and boundary blur, we propose a hybrid loss function that better captures object boundaries. Experimental results show that the proposed method achieves state-of-the-art performance on both the GlaS challenge dataset and the CRAG colorectal adenocarcinoma dataset.
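The Gabor-based module described above extracts texture at several scales and orientations. A rough sketch of such a filter bank, built from the standard real-valued Gabor kernel (all parameter values here are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def gabor_kernel(size=15, sigma=3.0, theta=0.0, lambd=8.0, gamma=0.5):
    """Real Gabor kernel: Gaussian envelope modulated by a cosine carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / lambd)

def gabor_bank(wavelengths=(4.0, 8.0), orientations=4):
    """One kernel per (wavelength, orientation) pair; convolving an image
    with each kernel yields one texture channel per filter."""
    thetas = [np.pi * k / orientations for k in range(orientations)]
    return [gabor_kernel(theta=t, lambd=w) for w in wavelengths for t in thetas]

bank = gabor_bank()  # 2 wavelengths x 4 orientations = 8 kernels
```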
30
Ke R, Bugeau A, Papadakis N, Kirkland M, Schuetz P, Schonlieb CB. Multi-Task Deep Learning for Image Segmentation Using Recursive Approximation Tasks. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2021; 30:3555-3567. [PMID: 33667164 DOI: 10.1109/tip.2021.3062726] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Fully supervised deep neural networks for segmentation usually require a massive amount of pixel-level labels, which are expensive to create manually. In this work, we develop a multi-task learning method to relax this constraint. We regard the segmentation problem as a sequence of recursively defined approximation subproblems of increasing accuracy. The subproblems are handled by a framework that consists of 1) a segmentation task that learns from pixel-level ground-truth segmentation masks of a small fraction of the images, 2) a recursive approximation task that conducts partial object region learning and data-driven mask evolution starting from partial masks of each object instance, and 3) other problem-oriented auxiliary tasks that are trained with sparse annotations and promote the learning of dedicated features. Most training images are labeled only by (rough) partial masks, which do not contain exact object boundaries, rather than by full segmentation masks. During training, the approximation task learns the statistics of these partial masks, and the partial regions are recursively grown toward object boundaries, aided by information learned from the segmentation task in a fully data-driven fashion. The network is thus trained on an extremely small number of precisely segmented images and a large set of coarse labels, so annotations can be obtained cheaply. We demonstrate the efficiency of our approach in three applications with microscopy and ultrasound images.
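The recursive approximation task grows rough partial masks toward object boundaries under guidance from the segmentation task. A toy sketch of that growth step, here approximated by dilation constrained to the network's foreground prediction (the function and its parameters are illustrative, not the authors' implementation):

```python
import numpy as np
from scipy import ndimage

def grow_partial_mask(partial, seg_prob, steps=5, thresh=0.5):
    """Recursively dilate a rough partial mask, keeping only pixels the
    segmentation network scores as foreground, so the region expands
    toward the object boundary over the iterations."""
    mask = partial.astype(bool)
    foreground = seg_prob >= thresh
    for _ in range(steps):
        mask = np.logical_and(ndimage.binary_dilation(mask), foreground)
    return mask

# a single seed pixel grows to fill the 3x3 foreground block
prob = np.zeros((5, 5)); prob[1:4, 1:4] = 0.9
seed = np.zeros((5, 5), bool); seed[2, 2] = True
grown = grow_partial_mask(seed, prob)
print(grown.sum())  # 9
```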
31
Abstract
Histopathological images (HIs) are the gold standard for evaluating some types of tumors for cancer diagnosis. The analysis of such images is time- and resource-consuming and very challenging even for experienced pathologists, resulting in inter-observer and intra-observer disagreement. One way of accelerating such analysis is to use computer-aided diagnosis (CAD) systems. This paper presents a review of machine learning methods for histopathological image analysis, covering both shallow and deep learning methods. We also cover the most common tasks in HI analysis, such as segmentation and feature extraction. In addition, we present a list of publicly available and private datasets that have been used in HI research.
32
Esteva A, Chou K, Yeung S, Naik N, Madani A, Mottaghi A, Liu Y, Topol E, Dean J, Socher R. Deep learning-enabled medical computer vision. NPJ Digit Med 2021; 4:5. [PMID: 33420381 PMCID: PMC7794558 DOI: 10.1038/s41746-020-00376-2] [Citation(s) in RCA: 248] [Impact Index Per Article: 82.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2020] [Accepted: 12/01/2020] [Indexed: 02/07/2023] Open
Abstract
A decade of unprecedented progress in artificial intelligence (AI) has demonstrated the potential for many fields, including medicine, to benefit from the insights that AI techniques can extract from data. Here we survey recent progress in the development of modern computer vision techniques, powered by deep learning, for medical applications, focusing on medical imaging, medical video, and clinical deployment. We start by briefly summarizing a decade of progress in convolutional neural networks, including the vision tasks they enable, in the context of healthcare. Next, we discuss several example medical imaging applications that stand to benefit, including cardiology, pathology, dermatology, and ophthalmology, and propose new avenues for continued work. We then expand into general medical video, highlighting ways in which clinical workflows can integrate computer vision to enhance care. Finally, we discuss the challenges and hurdles required for real-world clinical deployment of these technologies.
Affiliation(s)
- Nikhil Naik
- Salesforce AI Research, San Francisco, CA, USA
- Ali Madani
- Salesforce AI Research, San Francisco, CA, USA
- Yun Liu
- Google Research, Mountain View, CA, USA
- Eric Topol
- Scripps Research Translational Institute, La Jolla, CA, USA
- Jeff Dean
- Google Research, Mountain View, CA, USA
33
Srinidhi CL, Ciga O, Martel AL. Deep neural network models for computational histopathology: A survey. Med Image Anal 2021; 67:101813. [PMID: 33049577 PMCID: PMC7725956 DOI: 10.1016/j.media.2020.101813] [Citation(s) in RCA: 209] [Impact Index Per Article: 69.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2019] [Revised: 05/12/2020] [Accepted: 08/09/2020] [Indexed: 12/14/2022]
Abstract
Histopathological images contain rich phenotypic information that can be used to monitor underlying mechanisms contributing to disease progression and patient survival outcomes. Recently, deep learning has become the mainstream methodological choice for analyzing and interpreting histology images. In this paper, we present a comprehensive review of state-of-the-art deep learning approaches that have been used in the context of histopathological image analysis. From the survey of over 130 papers, we review the field's progress based on the methodological aspect of different machine learning strategies such as supervised, weakly supervised, unsupervised, transfer learning and various other sub-variants of these methods. We also provide an overview of deep learning based survival models that are applicable for disease-specific prognosis tasks. Finally, we summarize several existing open datasets and highlight critical challenges and limitations with current deep learning approaches, along with possible avenues for future research.
Affiliation(s)
- Chetan L Srinidhi
- Physical Sciences, Sunnybrook Research Institute, Toronto, Canada; Department of Medical Biophysics, University of Toronto, Canada.
- Ozan Ciga
- Department of Medical Biophysics, University of Toronto, Canada
- Anne L Martel
- Physical Sciences, Sunnybrook Research Institute, Toronto, Canada; Department of Medical Biophysics, University of Toronto, Canada
34
Salvi M, Acharya UR, Molinari F, Meiburger KM. The impact of pre- and post-image processing techniques on deep learning frameworks: A comprehensive review for digital pathology image analysis. Comput Biol Med 2021; 128:104129. [DOI: 10.1016/j.compbiomed.2020.104129] [Citation(s) in RCA: 89] [Impact Index Per Article: 29.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2020] [Accepted: 11/13/2020] [Indexed: 12/12/2022]
35
Xie Y, Zhang J, Lu H, Shen C, Xia Y. SESV: Accurate Medical Image Segmentation by Predicting and Correcting Errors. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:286-296. [PMID: 32956049 DOI: 10.1109/tmi.2020.3025308] [Citation(s) in RCA: 26] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Medical image segmentation is an essential task in computer-aided diagnosis. Despite their prevalence and success, deep convolutional neural networks (DCNNs) still need to be improved to produce segmentation results that are sufficiently accurate and robust for clinical use. In this paper, we propose a novel and generic framework called Segmentation-Emendation-reSegmentation-Verification (SESV) that improves the accuracy of existing DCNNs in medical image segmentation, instead of designing a more accurate segmentation model. Our idea is to predict the segmentation errors produced by an existing model and then correct them. Since predicting segmentation errors is challenging, we design two ways to tolerate mistakes in the error prediction. First, rather than using a predicted segmentation error map to correct the segmentation mask directly, we treat the error map only as a prior that indicates the locations where segmentation errors are prone to occur, and concatenate the error map with the image and segmentation mask as the input of a re-segmentation network. Second, we introduce a verification network to determine whether to accept or reject the refined mask produced by the re-segmentation network on a region-by-region basis. The experimental results on the CRAG, ISIC, and IDRiD datasets suggest that our SESV framework improves the accuracy of DeepLabv3+ substantially and achieves advanced performance in the segmentation of gland cells, skin lesions, and retinal microaneurysms. Consistent conclusions can be drawn when using PSPNet, U-Net, or FPN as the segmentation network. Therefore, the SESV framework is capable of improving the accuracy of different DCNNs on different medical image segmentation tasks.
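The first SESV tolerance mechanism treats the predicted error map as a prior channel rather than a direct correction. A minimal sketch of how such a re-segmentation input could be assembled (shapes, names, and the channel layout are assumptions):

```python
import numpy as np

def resegmentation_input(image, mask, error_map):
    """Stack the image, the initial segmentation mask, and the predicted
    error map along the channel axis; the error map acts only as a prior
    marking where mistakes are likely, not as a direct correction."""
    assert image.shape[:2] == mask.shape == error_map.shape
    return np.concatenate([image, mask[..., None], error_map[..., None]], axis=-1)

img = np.random.rand(64, 64, 3)                 # RGB patch
mask = (np.random.rand(64, 64) > 0.5).astype(np.float32)
err = np.random.rand(64, 64)                    # predicted error probabilities
x = resegmentation_input(img, mask, err)
print(x.shape)  # (64, 64, 5)
```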
36
Xing F, Zhang X, Cornish TC. Artificial intelligence for pathology. Artif Intell Med 2021. [DOI: 10.1016/b978-0-12-821259-2.00011-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
37
Gunesli GN, Sokmensuer C, Gunduz-Demir C. AttentionBoost: Learning What to Attend for Gland Segmentation in Histopathological Images by Boosting Fully Convolutional Networks. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:4262-4273. [PMID: 32780699 DOI: 10.1109/tmi.2020.3015198] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Fully convolutional networks (FCNs) are widely used for instance segmentation. One important challenge is to sufficiently train these networks to generalize well on hard-to-learn pixels, whose correct prediction may greatly affect overall success. A typical group of such hard-to-learn pixels are boundaries between instances. Many studies have developed strategies to pay more attention to learning these boundary pixels. They include designing multi-task networks with an additional task of boundary prediction and increasing the weights of boundary pixels' predictions in the loss function. Such strategies require defining what to attend to beforehand and incorporating this predefined attention into the learning model. However, other groups of hard-to-learn pixels may exist, and manually defining and incorporating the appropriate attention for each group may not be feasible. To provide an adaptable way to learn different groups of hard-to-learn pixels, this article proposes AttentionBoost, a new multi-attention learning model based on adaptive boosting, for the task of gland instance segmentation in histopathological images. AttentionBoost designs a multi-stage network and introduces a new loss adjustment mechanism for an FCN to adaptively learn what to attend to at each stage directly from image data, without requiring any prior definition. This mechanism modulates the attention of each stage to correct the mistakes of previous stages, by adjusting the loss weight of each pixel prediction separately according to how accurate the previous stages are on that pixel. Working on histopathological images of colon tissues, our experiments demonstrate that the proposed AttentionBoost model improves the results of gland segmentation compared to its counterparts.
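The loss-adjustment mechanism above weights each pixel by how poorly the previous stages predicted it. A boosting-style weight map along those lines might look like this (a sketch; the paper's exact weighting scheme may differ):

```python
import numpy as np

def stage_pixel_weights(prev_prob, truth, floor=0.05):
    """Boosting-style loss weights for the next stage: pixels the previous
    stage predicted poorly (probability far from the label) get larger
    weights; `floor` keeps easy pixels from vanishing entirely."""
    err = np.abs(prev_prob - truth)     # per-pixel error in [0, 1]
    w = err + floor
    return w / w.mean()                 # normalise to mean weight 1

truth = np.ones((2, 2))
prev = np.array([[0.9, 0.8], [0.95, 0.1]])   # one badly predicted pixel
w = stage_pixel_weights(prev, truth)
```

The next stage's per-pixel loss would then be multiplied by `w`, so it focuses on pixels the earlier stage got wrong.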
38
Wei D, Lin Z, Franco-Barranco D, Wendt N, Liu X, Yin W, Huang X, Gupta A, Jang WD, Wang X, Arganda-Carreras I, Lichtman JW, Pfister H. MitoEM Dataset: Large-scale 3D Mitochondria Instance Segmentation from EM Images. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION : MICCAI ... INTERNATIONAL CONFERENCE ON MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION 2020; 12265:66-76. [PMID: 33283212 PMCID: PMC7713709 DOI: 10.1007/978-3-030-59722-1_7] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
Electron microscopy (EM) allows the identification of intracellular organelles such as mitochondria, providing insights for clinical and scientific studies. However, public mitochondria segmentation datasets only contain hundreds of instances with simple shapes. It is unclear if existing methods achieving human-level accuracy on these small datasets are robust in practice. To this end, we introduce the MitoEM dataset, a 3D mitochondria instance segmentation dataset with two (30 μm)³ volumes from human and rat cortices respectively, 3,600× larger than previous benchmarks. With around 40K instances, we find a great diversity of mitochondria in terms of shape and density. For evaluation, we tailor the implementation of the average precision (AP) metric for 3D data with a 45× speedup. On MitoEM, we find existing instance segmentation methods often fail to correctly segment mitochondria with complex shapes or close contacts with other instances. Thus, our MitoEM dataset poses new challenges to the field. We release our code and data: https://donglaiw.github.io/page/mitoEM/index.html.
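Evaluating instance segmentation at this scale requires matching predicted instances to ground-truth instances by IoU before AP can be computed. A simple greedy-matching sketch of that counting step (illustrative only, not the authors' optimized 45× implementation):

```python
import numpy as np

def iou(a, b):
    """Intersection over union of two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

def match_instances(preds, truths, thresh=0.5):
    """Greedy one-to-one matching: each prediction claims the unclaimed
    ground-truth instance with the highest IoU at or above `thresh`."""
    claimed, tp = set(), 0
    for p in preds:
        best_j, best = None, thresh
        for j, t in enumerate(truths):
            v = iou(p, t)
            if j not in claimed and v >= best:
                best_j, best = j, v
        if best_j is not None:
            claimed.add(best_j)
            tp += 1
    return tp, len(preds) - tp, len(truths) - tp  # TP, FP, FN
```

Precision and recall follow from the TP/FP/FN counts, and AP aggregates them over confidence thresholds.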
Affiliation(s)
- Ignacio Arganda-Carreras
- Donostia International Physics Center
- University of the Basque Country
- Ikerbasque, Basque Foundation for Science
39
Deng S, Zhang X, Yan W, Chang EIC, Fan Y, Lai M, Xu Y. Deep learning in digital pathology image analysis: a survey. Front Med 2020; 14:470-487. [PMID: 32728875 DOI: 10.1007/s11684-020-0782-9] [Citation(s) in RCA: 51] [Impact Index Per Article: 12.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2019] [Accepted: 03/05/2020] [Indexed: 12/21/2022]
Abstract
Deep learning (DL) has achieved state-of-the-art performance in many digital pathology analysis tasks. Traditional methods usually require hand-crafted domain-specific features, and DL methods can learn representations without manually designed features. In terms of feature extraction, DL approaches are less labor intensive compared with conventional machine learning methods. In this paper, we comprehensively summarize recent DL-based image analysis studies in histopathology, including different tasks (e.g., classification, semantic segmentation, detection, and instance segmentation) and various applications (e.g., stain normalization, cell/gland/region structure analysis). DL methods can provide consistent and accurate outcomes. DL is a promising tool to assist pathologists in clinical diagnosis.
Affiliation(s)
- Shujian Deng
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China
- Key Laboratory of Biomechanics and Mechanobiology of Ministry of Education and State Key Laboratory of Software Development Environment, Beihang University, Beijing, 100191, China
- Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, 100191, China
- Xin Zhang
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China
- Key Laboratory of Biomechanics and Mechanobiology of Ministry of Education and State Key Laboratory of Software Development Environment, Beihang University, Beijing, 100191, China
- Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, 100191, China
- Wen Yan
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China
- Key Laboratory of Biomechanics and Mechanobiology of Ministry of Education and State Key Laboratory of Software Development Environment, Beihang University, Beijing, 100191, China
- Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, 100191, China
- Yubo Fan
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China
- Key Laboratory of Biomechanics and Mechanobiology of Ministry of Education and State Key Laboratory of Software Development Environment, Beihang University, Beijing, 100191, China
- Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, 100191, China
- Maode Lai
- Department of Pathology, School of Medicine, Zhejiang University, Hangzhou, 310007, China
- Yan Xu
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China
- Key Laboratory of Biomechanics and Mechanobiology of Ministry of Education and State Key Laboratory of Software Development Environment, Beihang University, Beijing, 100191, China
- Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, 100191, China
- Microsoft Research Asia, Beijing, 100080, China
40
Thakur N, Yoon H, Chong Y. Current Trends of Artificial Intelligence for Colorectal Cancer Pathology Image Analysis: A Systematic Review. Cancers (Basel) 2020; 12:E1884. [PMID: 32668721 PMCID: PMC7408874 DOI: 10.3390/cancers12071884] [Citation(s) in RCA: 43] [Impact Index Per Article: 10.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2020] [Revised: 07/06/2020] [Accepted: 07/09/2020] [Indexed: 02/06/2023] Open
Abstract
Colorectal cancer (CRC) is one of the most common cancers, requiring early pathologic diagnosis using colonoscopy biopsy samples. Recently, artificial intelligence (AI) has made significant progress and shown promising results in the field of medicine despite several limitations. We performed a systematic review of AI use in CRC pathology image analysis to visualize the state of the art. Studies published between January 2000 and January 2020 were searched in major online databases including MEDLINE (PubMed, Cochrane Library, and EMBASE). Query terms included "colorectal neoplasm," "histology," and "artificial intelligence." Of 9000 identified studies, only 30 studies consisting of 40 models were selected for review. The algorithm features of the models were gland segmentation (n = 25, 62%), tumor classification (n = 8, 20%), tumor microenvironment characterization (n = 4, 10%), and prognosis prediction (n = 3, 8%). Only 20 gland segmentation models met the criteria for quantitative analysis, and the model proposed by Ding et al. (2019) performed the best. Studies with other features were at an elementary stage, although most showed impressive results. Overall, the state of the art is promising for CRC pathological analysis. However, datasets in most studies had relatively limited scale and quality for clinical application of this technique. Future studies with larger datasets and high-quality annotations are required for routine practice-level validation.
Affiliation(s)
- Nishant Thakur
- Department of Hospital Pathology, Yeouido St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, 10, 63-ro, Yeongdeungpo-gu, Seoul 07345, Korea
- Hongjun Yoon
- AI Lab, Deepnoid, #1305 E&C Venture Dream Tower 2, 55, Digital-ro 33-Gil, Guro-gu, Seoul 06216, Korea
- Yosep Chong
- Department of Hospital Pathology, Yeouido St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, 10, 63-ro, Yeongdeungpo-gu, Seoul 07345, Korea
41
Yan Z, Yang X, Cheng KT. Enabling a Single Deep Learning Model for Accurate Gland Instance Segmentation: A Shape-Aware Adversarial Learning Framework. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:2176-2189. [PMID: 31944936 DOI: 10.1109/tmi.2020.2966594] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Segmenting gland instances in histology images is highly challenging as it requires not only detecting glands from a complex background but also separating each individual gland instance with accurate boundary detection. However, due to the boundary uncertainty problem in manual annotations, pixel-to-pixel matching based loss functions are too restrictive for simultaneous gland detection and boundary detection. State-of-the-art approaches adopted multi-model schemes, resulting in unnecessarily high model complexity and difficulties in the training process. In this paper, we propose to use one single deep learning model for accurate gland instance segmentation. To address the boundary uncertainty problem, instead of pixel-to-pixel matching, we propose a segment-level shape similarity measure to calculate the curve similarity between each annotated boundary segment and the corresponding detected boundary segment within a fixed searching range. As the segment-level measure allows location variations within a fixed range for shape similarity calculation, it has better tolerance to boundary uncertainty and is more effective for boundary detection. Furthermore, by adjusting the radius of the searching range, the segment-level shape similarity measure is able to deal with different levels of boundary uncertainty. Therefore, in our framework, images of different scales are down-sampled and integrated to provide both global and local contextual information for training, which is helpful in segmenting gland instances of different sizes. To reduce the variations of multi-scale training images, by referring to adversarial domain adaptation, we propose a pseudo domain adaptation framework for feature alignment. By constructing loss functions based on the segment-level shape similarity measure, combined with the adversarial loss function, the proposed shape-aware adversarial learning framework enables one single deep learning model for gland instance segmentation. Experimental results on the 2015 MICCAI Gland Challenge dataset demonstrate that the proposed framework achieves state-of-the-art performance with one single deep learning model. As the boundary uncertainty problem widely exists in medical image segmentation, it is broadly applicable to other applications.
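The segment-level shape similarity measure tolerates boundary-location variation within a fixed searching range. A much-simplified point-set analogue of that idea (illustrative only; the paper matches boundary segments and curve shapes, not isolated points):

```python
import numpy as np

def boundary_match_score(annot_pts, detect_pts, radius):
    """Fraction of annotated boundary points with a detected boundary point
    within `radius` pixels; small location shifts inside the searching
    range are not penalised, unlike strict pixel-to-pixel matching."""
    annot = np.asarray(annot_pts, float)[:, None, :]
    detect = np.asarray(detect_pts, float)[None, :, :]
    dists = np.linalg.norm(annot - detect, axis=-1)   # pairwise distances
    return float((dists.min(axis=1) <= radius).mean())

annot = [(0, 0), (0, 5)]
detect = [(0, 1)]                       # 1 px and 4 px away, respectively
print(boundary_match_score(annot, detect, radius=2))  # 0.5
```

Enlarging `radius` mirrors the paper's observation that a wider searching range accommodates greater boundary uncertainty.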
42
Ding H, Pan Z, Cen Q, Li Y, Chen S. Multi-scale fully convolutional network for gland segmentation using three-class classification. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2019.10.097] [Citation(s) in RCA: 29] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
43
FABnet: feature attention-based network for simultaneous segmentation of microvessels and nerves in routine histology images of oral cancer. Neural Comput Appl 2019. [DOI: 10.1007/s00521-019-04516-y] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/29/2022]
44
Van Eycke YR, Foucart A, Decaestecker C. Strategies to Reduce the Expert Supervision Required for Deep Learning-Based Segmentation of Histopathological Images. Front Med (Lausanne) 2019; 6:222. [PMID: 31681779 PMCID: PMC6803466 DOI: 10.3389/fmed.2019.00222] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2019] [Accepted: 09/27/2019] [Indexed: 12/21/2022] Open
Abstract
The emergence of computational pathology comes with a demand to extract more and more information from each tissue sample. Such information extraction often requires the segmentation of numerous histological objects (e.g., cell nuclei, glands, etc.) in histological slide images, a task for which deep learning algorithms have demonstrated their effectiveness. However, these algorithms require many training examples to be efficient and robust. For this purpose, pathologists must manually segment hundreds or even thousands of objects in histological images, i.e., a long, tedious and potentially biased task. The present paper aims to review strategies that could help provide the very large number of annotated images needed to automate the segmentation of histological images using deep learning. This review identifies and describes four different approaches: the use of immunohistochemical markers as labels, realistic data augmentation, Generative Adversarial Networks (GAN), and transfer learning. In addition, we describe alternative learning strategies that can use imperfect annotations. Adding real data with high-quality annotations to the training set is a safe way to improve the performance of a well configured deep neural network. However, the present review provides new perspectives through the use of artificially generated data and/or imperfect annotations, in addition to transfer learning opportunities.
Affiliation(s)
- Yves-Rémi Van Eycke
- Digital Image Analysis in Pathology (DIAPath), Center for Microscopy and Molecular Imaging (CMMI), Université Libre de Bruxelles, Charleroi, Belgium
- Laboratory of Image Synthesis and Analysis (LISA), Ecole Polytechnique de Bruxelles, Université Libre de Bruxelles, Brussels, Belgium
- Adrien Foucart
- Laboratory of Image Synthesis and Analysis (LISA), Ecole Polytechnique de Bruxelles, Université Libre de Bruxelles, Brussels, Belgium
- Christine Decaestecker
- Digital Image Analysis in Pathology (DIAPath), Center for Microscopy and Molecular Imaging (CMMI), Université Libre de Bruxelles, Charleroi, Belgium
- Laboratory of Image Synthesis and Analysis (LISA), Ecole Polytechnique de Bruxelles, Université Libre de Bruxelles, Brussels, Belgium
45
Liu J, Shen C, Liu T, Aguilera N, Tam J. Deriving Visual Cues from Deep Learning to Achieve Subpixel Cell Segmentation in Adaptive Optics Retinal Images. OPHTHALMIC MEDICAL IMAGE ANALYSIS : 6TH INTERNATIONAL WORKSHOP, OMIA 2019, HELD IN CONJUNCTION WITH MICCAI 2019, SHENZHEN, CHINA, OCTOBER 17, PROCEEDINGS. OMIA (WORKSHOP) (6TH : 2019 : SHENZHEN SHI, CHINA) 2019; 11855:86-94. [PMID: 31701095 PMCID: PMC6837169 DOI: 10.1007/978-3-030-32956-3_11] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 04/20/2023]
Abstract
Direct visualization of photoreceptor cells, specialized neurons in the eye that sense light, can be achieved using adaptive optics (AO) retinal imaging. Evaluating photoreceptor cell morphology in retinal diseases is important for monitoring the onset and progression of blindness, but segmentation of these cells is a critical first step. Most segmentation approaches focus on cell region extraction, without directly considering cell boundary localization. This makes it difficult to track cells with ambiguous boundaries, which result from low image contrast, anisotropic cell regions, or densely packed cells whose boundaries appear to touch each other. These are all characteristics of the AO images that we consider here. To address these challenges, we develop AOSegNet, which uses a multi-channel U-Net to predict the spatial probabilities of the cell boundary and obtain cell centroid and region distribution information as visual cues for facilitating cell segmentation. The five-color theorem guarantees the separation of any touching cells. Finally, a region-based level set algorithm that combines all of these visual cues is used to achieve subpixel cell segmentation. Five-fold cross-validation on 428 high-resolution retinal images from 23 human subjects showed that AOSegNet substantially outperformed the only other existing approach, with Dice coefficients [%] of 84.7 and 78.4, respectively, and average symmetric contour distances [μm] of 0.59 and 0.80, respectively.
Affiliation(s)
- Jianfei Liu
- National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Christine Shen
- National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Tao Liu
- National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Nancy Aguilera
- National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Johnny Tam
- National Eye Institute, National Institutes of Health, Bethesda, MD, USA
46
Payer C, Štern D, Feiner M, Bischof H, Urschler M. Segmenting and tracking cell instances with cosine embeddings and recurrent hourglass networks. Med Image Anal 2019; 57:106-119. [PMID: 31299493 DOI: 10.1016/j.media.2019.06.015] [Citation(s) in RCA: 28] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2019] [Revised: 06/05/2019] [Accepted: 06/26/2019] [Indexed: 11/28/2022]
Abstract
In contrast to semantic segmentation, instance segmentation assigns unique labels to each individual instance of the same object class. In this work, we propose a novel recurrent fully convolutional network architecture for tracking such instance segmentations over time, which is highly relevant, e.g., in biomedical applications involving cell growth and migration. Our network architecture incorporates convolutional gated recurrent units (ConvGRU) into a stacked hourglass network to utilize temporal information, e.g., from microscopy videos. Moreover, we train our network with a novel embedding loss based on cosine similarities, such that the network predicts unique embeddings for every instance throughout videos, even in the presence of dynamic structural changes due to mitosis of cells. To create the final tracked instance segmentations, the pixel-wise embeddings are clustered across subsequent video frames using the mean shift algorithm. After demonstrating the instance segmentation performance on a static in-house dataset of muscle fibers from H&E-stained microscopy images, we also evaluate our proposed recurrent stacked hourglass network regarding instance segmentation and tracking performance on six datasets from the ISBI cell tracking challenge, where it delivers state-of-the-art results.
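The mean-shift clustering step can be sketched with a minimal flat-kernel mean shift on synthetic 2-D embeddings (the data, bandwidth, and helper names below are illustrative, not the paper's implementation): each embedding is shifted toward the mean of its neighbors until it converges to a density mode, and embeddings that reach the same mode receive the same instance label.

```python
import numpy as np

def mean_shift(points, bandwidth=0.5, iters=30):
    """Flat-kernel mean shift on row-vector points; returns the converged modes."""
    modes = points.copy()
    for _ in range(iters):
        for i, m in enumerate(modes):
            near = points[np.linalg.norm(points - m, axis=1) < bandwidth]
            modes[i] = near.mean(axis=0)
    return modes

def label_modes(modes, tol=1e-2):
    """Group modes that converged to (almost) the same location into one label."""
    labels, centers = [], []
    for m in modes:
        for j, c in enumerate(centers):
            if np.linalg.norm(m - c) < tol:
                labels.append(j)
                break
        else:
            centers.append(m)
            labels.append(len(centers) - 1)
    return labels

# Two synthetic embedding clusters, standing in for two cell instances.
rng = np.random.default_rng(0)
a = np.array([1.0, 0.0]) + 0.05 * rng.standard_normal((20, 2))
b = np.array([0.0, 1.0]) + 0.05 * rng.standard_normal((20, 2))
labels = label_modes(mean_shift(np.vstack([a, b])))
assert len(set(labels)) == 2  # the two instances are recovered
```

In the paper this clustering is additionally applied across subsequent frames, so that an instance keeps the same label over time.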
Affiliation(s)
- Christian Payer
- Institute of Computer Graphics and Vision, Graz University of Technology, Graz, Austria
- Darko Štern
- Ludwig Boltzmann Institute for Clinical Forensic Imaging, Graz, Austria
- Marlies Feiner
- Division of Phoniatrics, Medical University Graz, Graz, Austria
- Horst Bischof
- Institute of Computer Graphics and Vision, Graz University of Technology, Graz, Austria
- Martin Urschler
- Ludwig Boltzmann Institute for Clinical Forensic Imaging, Graz, Austria; Department of Computer Science, The University of Auckland, New Zealand
|
47
|
Kuok CP, Horng MH, Liao YM, Chow NH, Sun YN. An effective and accurate identification system of Mycobacterium tuberculosis using convolution neural networks. Microsc Res Tech 2019; 82:709-719. [PMID: 30741460 DOI: 10.1002/jemt.23217] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2018] [Revised: 11/09/2018] [Accepted: 12/15/2018] [Indexed: 12/27/2022]
Abstract
Tuberculosis (TB) remains the leading cause of morbidity and mortality from infectious disease in developing countries. Sputum smear microscopy remains the primary diagnostic laboratory test, but microscopic examination is time-consuming and tedious. Therefore, an effective computer-aided image identification system is needed to provide timely assistance in diagnosis. Current identification systems usually suffer from the complex color variations of the images, resulting in numerous false object detections. To overcome this dilemma, we propose a two-stage Mycobacterium tuberculosis identification system consisting of candidate detection and classification using convolutional neural networks (CNNs). A refined Faster region-based CNN was used to detect candidates of M. tuberculosis, and the actual bacilli were then classified by a CNN-based classifier. We first compared three different CNNs: ensemble CNN, single-member CNN, and deep CNN. The experimental results showed that the ensemble and deep CNNs achieved similar identification performance when analyzing more than 19,000 images. A much better recall value was achieved by our proposed system than by a conventional pixel-based support vector machine method for M. tuberculosis bacilli detection.
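The two-stage idea can be sketched abstractly (hypothetical scores and field names, not the paper's model): a detector proposes candidate objects with confidence scores, and a second-stage classifier keeps only the candidates it accepts, trading a few missed bacilli for far fewer false detections.

```python
def two_stage(candidates, detector_thresh, classifier):
    """Stage 1: keep detector proposals above threshold. Stage 2: filter with a classifier."""
    proposed = [c for c in candidates if c["det_score"] >= detector_thresh]
    return [c for c in proposed if classifier(c)]

# Illustrative candidates with ground-truth flags for computing recall.
candidates = [
    {"det_score": 0.9, "cls_score": 0.8, "is_bacillus": True},
    {"det_score": 0.7, "cls_score": 0.2, "is_bacillus": False},  # stain artefact
    {"det_score": 0.4, "cls_score": 0.9, "is_bacillus": True},   # faint bacillus
]
kept = two_stage(candidates, 0.5, lambda c: c["cls_score"] >= 0.5)
recall = sum(c["is_bacillus"] for c in kept) / sum(c["is_bacillus"] for c in candidates)
assert recall == 0.5  # the faint bacillus was already lost at the detection stage
```

The sketch also shows why recall is the metric the abstract emphasizes: anything the first stage misses can never be recovered by the second.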
Affiliation(s)
- Chan-Pang Kuok
- Department of Computer Science and Information Engineering, National Cheng Kung University, Tainan, Taiwan
- Ming-Huwi Horng
- Department of Computer Science and Information Engineering, National Pingtung University, Pingtung, Taiwan
- Yu-Ming Liao
- Department of Computer Science and Information Engineering, National Cheng Kung University, Tainan, Taiwan
- Nan-Haw Chow
- Department of Pathology, College of Medicine, National Cheng Kung University, Tainan, Taiwan
- Yung-Nien Sun
- Department of Computer Science and Information Engineering, National Cheng Kung University, Tainan, Taiwan; MOST AI Biomedical Research Center, Tainan, Taiwan
|
48
|
Micro-Net: A unified model for segmentation of various objects in microscopy images. Med Image Anal 2019; 52:160-173. [DOI: 10.1016/j.media.2018.12.003] [Citation(s) in RCA: 89] [Impact Index Per Article: 17.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/06/2018] [Revised: 12/13/2018] [Accepted: 12/14/2018] [Indexed: 11/23/2022]
|
49
|
Graham S, Chen H, Gamper J, Dou Q, Heng PA, Snead D, Tsang YW, Rajpoot N. MILD-Net: Minimal information loss dilated network for gland instance segmentation in colon histology images. Med Image Anal 2018; 52:199-211. [PMID: 30594772 DOI: 10.1016/j.media.2018.12.001] [Citation(s) in RCA: 115] [Impact Index Per Article: 19.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/15/2018] [Revised: 12/04/2018] [Accepted: 12/14/2018] [Indexed: 02/08/2023]
Abstract
The analysis of glandular morphology within colon histopathology images is an important step in determining the grade of colon cancer. Despite the importance of this task, manual segmentation is laborious, time-consuming and can suffer from subjectivity among pathologists. The rise of computational pathology has led to the development of automated methods for gland segmentation that aim to overcome the challenges of manual segmentation. However, this task is non-trivial due to the large variability in glandular appearance and the difficulty in differentiating between certain glandular and non-glandular histological structures. Furthermore, a measure of uncertainty is essential for diagnostic decision making. To address these challenges, we propose a fully convolutional neural network that counters the loss of information caused by max-pooling by re-introducing the original image at multiple points within the network. We also use atrous spatial pyramid pooling with varying dilation rates for preserving the resolution and multi-level aggregation. To incorporate uncertainty, we introduce random transformations during test time for an enhanced segmentation result that simultaneously generates an uncertainty map, highlighting areas of ambiguity. We show that this map can be used to define a metric for disregarding predictions with high uncertainty. The proposed network achieves state-of-the-art performance on the GlaS challenge dataset and on a second independent colorectal adenocarcinoma dataset. In addition, we perform gland instance segmentation on whole-slide images from two further datasets to highlight the generalisability of our method. As an extension, we introduce MILD-Net+ for simultaneous gland and lumen segmentation, to increase the diagnostic power of the network.
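The test-time uncertainty idea can be sketched as follows (a minimal numpy version of test-time augmentation with random flips; the model and data are placeholders, not the MILD-Net code): run the model on randomly transformed copies of the input, undo each transform on the prediction, then take the per-pixel mean as the segmentation and the per-pixel standard deviation as the uncertainty map.

```python
import numpy as np

def tta_predict(model, image, n=8, rng=None):
    """Average predictions over random flips; std across runs is the uncertainty map."""
    if rng is None:
        rng = np.random.default_rng(0)
    preds = []
    for _ in range(n):
        flip_h = rng.random() < 0.5
        flip_v = rng.random() < 0.5
        x = image
        if flip_h:
            x = x[:, ::-1]
        if flip_v:
            x = x[::-1, :]
        p = model(x)
        # Undo the transform so all predictions align with the original input.
        if flip_v:
            p = p[::-1, :]
        if flip_h:
            p = p[:, ::-1]
        preds.append(p)
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.std(axis=0)  # segmentation, uncertainty

# Placeholder "model": a per-pixel threshold, which is flip-equivariant,
# so every augmented prediction agrees and the uncertainty is zero.
model = lambda x: (x > 0.5).astype(float)
img = np.linspace(0, 1, 16).reshape(4, 4)
seg, unc = tta_predict(model, img)
assert unc.max() == 0.0
```

With a real network the predictions differ across transforms, and high-std pixels mark the ambiguous regions that the paper's metric uses to disregard uncertain predictions.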
Affiliation(s)
- Simon Graham
- Mathematics for Real World Systems Centre for Doctoral Training, University of Warwick, Coventry, CV4 7AL, UK; Department of Computer Science, University of Warwick, UK.
- Hao Chen
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, China
- Jevgenij Gamper
- Mathematics for Real World Systems Centre for Doctoral Training, University of Warwick, Coventry, CV4 7AL, UK; Department of Computer Science, University of Warwick, UK
- Qi Dou
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, China
- Pheng-Ann Heng
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, China
- David Snead
- Department of Pathology, University Hospitals Coventry and Warwickshire, Coventry, UK
- Yee Wah Tsang
- Department of Pathology, University Hospitals Coventry and Warwickshire, Coventry, UK
- Nasir Rajpoot
- Department of Computer Science, University of Warwick, UK; Department of Pathology, University Hospitals Coventry and Warwickshire, Coventry, UK; The Alan Turing Institute, London, UK
|
50
|
Van Eycke YR, Balsat C, Verset L, Debeir O, Salmon I, Decaestecker C. Segmentation of glandular epithelium in colorectal tumours to automatically compartmentalise IHC biomarker quantification: A deep learning approach. Med Image Anal 2018; 49:35-45. [PMID: 30081241 DOI: 10.1016/j.media.2018.07.004] [Citation(s) in RCA: 36] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2017] [Revised: 06/29/2018] [Accepted: 07/05/2018] [Indexed: 12/18/2022]
Abstract
In this paper, we propose a method for automatically annotating slide images from colorectal tissue samples. Our objective is to segment glandular epithelium in histological images from tissue slides submitted to different staining techniques, including the usual haematoxylin-eosin (H&E) as well as immunohistochemistry (IHC). The proposed method makes use of deep learning and is based on a new convolutional network architecture. Our method achieves better performance than the state of the art on the H&E images of the GlaS challenge contest, while using only the haematoxylin colour channel extracted by colour deconvolution from the RGB images in order to extend its applicability to IHC. The network only needs to be fine-tuned on a small number of additional examples to be accurate on a new IHC dataset. Our approach also includes a new method of data augmentation to achieve good generalisation when working with different experimental conditions and different IHC markers. We show that our methodology enables the compartmentalisation of IHC biomarker analysis to be automated, with results that concur highly with manual annotations.
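The colour-deconvolution step is a standard technique (Ruifrok-Johnston stain vectors); the following is a minimal numpy sketch of it, not the paper's code: convert RGB to optical density via the Beer-Lambert law, then unmix against the stain matrix to isolate the haematoxylin concentration.

```python
import numpy as np

# Standard normalised H&E-DAB stain matrix; rows are stain OD vectors in RGB.
STAINS = np.array([
    [0.65, 0.70, 0.29],   # haematoxylin
    [0.07, 0.99, 0.11],   # eosin
    [0.27, 0.57, 0.78],   # DAB
])
STAINS /= np.linalg.norm(STAINS, axis=1, keepdims=True)

def haematoxylin_channel(rgb):
    """rgb: float array in (0, 1], shape (H, W, 3). Returns the (H, W) H-concentration."""
    od = -np.log(np.clip(rgb, 1e-6, 1.0))             # Beer-Lambert optical density
    conc = od.reshape(-1, 3) @ np.linalg.inv(STAINS)  # unmix stain concentrations
    return conc[:, 0].reshape(rgb.shape[:2])          # haematoxylin component only

# A pixel of pure haematoxylin absorbance unmixes to the H channel alone.
pure_h = np.exp(-STAINS[0]).reshape(1, 1, 3)
assert haematoxylin_channel(pure_h)[0, 0] > 0.99
```

Working on this single channel is what lets one network generalise from H&E to IHC slides: the nuclear counterstain looks the same regardless of which IHC marker is present.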
Affiliation(s)
- Yves-Rémi Van Eycke
- DIAPath, Center for Microscopy and Molecular Imaging, Université Libre de Bruxelles (ULB), CPI 305/1, Rue Adrienne Bolland, 8, 6041 Gosselies, Belgium; Laboratories of Image, Signal processing & Acoustics, Université Libre de Bruxelles (ULB), CPI 165/57, Avenue Franklin Roosevelt 50, Brussels 1050, Belgium
- Cédric Balsat
- DIAPath, Center for Microscopy and Molecular Imaging, Université Libre de Bruxelles (ULB), CPI 305/1, Rue Adrienne Bolland, 8, 6041 Gosselies, Belgium
- Laurine Verset
- Department of Pathology, Erasme Hospital, Université Libre de Bruxelles (ULB), Route de Lennik 808, Brussels 1070, Belgium
- Olivier Debeir
- Laboratories of Image, Signal processing & Acoustics, Université Libre de Bruxelles (ULB), CPI 165/57, Avenue Franklin Roosevelt 50, Brussels 1050, Belgium; MIP, Center for Microscopy and Molecular Imaging, Université Libre de Bruxelles (ULB), CPI 305/1, Rue Adrienne Bolland, 8, 6041 Gosselies, Belgium
- Isabelle Salmon
- DIAPath, Center for Microscopy and Molecular Imaging, Université Libre de Bruxelles (ULB), CPI 305/1, Rue Adrienne Bolland, 8, 6041 Gosselies, Belgium; Department of Pathology, Erasme Hospital, Université Libre de Bruxelles (ULB), Route de Lennik 808, Brussels 1070, Belgium
- Christine Decaestecker
- DIAPath, Center for Microscopy and Molecular Imaging, Université Libre de Bruxelles (ULB), CPI 305/1, Rue Adrienne Bolland, 8, 6041 Gosselies, Belgium; Laboratories of Image, Signal processing & Acoustics, Université Libre de Bruxelles (ULB), CPI 165/57, Avenue Franklin Roosevelt 50, Brussels 1050, Belgium
|