101
He K, Lian C, Adeli E, Huo J, Gao Y, Zhang B, Zhang J, Shen D. MetricUNet: Synergistic image- and voxel-level learning for precise prostate segmentation via online sampling. Med Image Anal 2021; 71:102039. [PMID: 33831595] [DOI: 10.1016/j.media.2021.102039] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7]
Abstract
Fully convolutional networks (FCNs), including UNet and VNet, are widely used network architectures for semantic segmentation in recent studies. However, conventional FCNs are typically trained with a cross-entropy or Dice loss, which calculates the error between predictions and ground-truth labels for each pixel individually. This often results in non-smooth neighborhoods in the predicted segmentation, a problem that becomes more serious in CT prostate segmentation because CT images usually have low tissue contrast. To address this problem, we propose a two-stage framework: the first stage quickly localizes the prostate region, and the second stage precisely segments the prostate with a multi-task UNet architecture. We introduce a novel online metric learning module through voxel-wise sampling in the multi-task network. The proposed network therefore has a dual-branch architecture that tackles two tasks: (1) a segmentation sub-network that generates the prostate segmentation, and (2) a voxel-metric learning sub-network that improves the quality of the learned feature space under the supervision of a metric loss. Specifically, the voxel-metric learning sub-network samples tuples (triplets and pairs) at the voxel level from the intermediate feature maps. Unlike conventional deep metric learning methods that generate triplets or pairs at the image level before the training phase, our voxel-wise tuples are sampled online and operated in an end-to-end fashion via multi-task learning. To evaluate the proposed method, we conduct extensive experiments on a real CT image dataset consisting of 339 patients. The ablation studies show that our method learns more representative voxel-level features than conventional training with cross-entropy or Dice loss, and the comparisons show that the proposed method outperforms state-of-the-art methods by a reasonable margin.
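The online voxel-wise tuple sampling described above can be illustrated with a short sketch. This is not the authors' implementation: the embedding dimension, sampling count, and margin are illustrative assumptions, and anchor/positive collisions are ignored for brevity.

```python
import torch
import torch.nn.functional as F

def voxel_triplet_loss(features, labels, n_triplets=256, margin=1.0):
    # features: (C, N) embeddings for N voxels; labels: (N,) binary ground truth
    feats = F.normalize(features, dim=0)
    pos_idx = torch.nonzero(labels == 1).squeeze(1)
    neg_idx = torch.nonzero(labels == 0).squeeze(1)
    if len(pos_idx) < 2 or len(neg_idx) < 1:
        return features.new_zeros(())  # nothing to sample in this patch
    # sample anchors/positives from foreground voxels, negatives from background
    a = pos_idx[torch.randint(len(pos_idx), (n_triplets,))]
    p = pos_idx[torch.randint(len(pos_idx), (n_triplets,))]
    n = neg_idx[torch.randint(len(neg_idx), (n_triplets,))]
    d_ap = (feats[:, a] - feats[:, p]).pow(2).sum(0)  # anchor-positive distances
    d_an = (feats[:, a] - feats[:, n]).pow(2).sum(0)  # anchor-negative distances
    return F.relu(d_ap - d_an + margin).mean()

# toy usage: 64-dim features for a 32x32x32 patch flattened to 32,768 voxels
feats = torch.randn(64, 32768, requires_grad=True)
labels = torch.randint(0, 2, (32768,))
voxel_triplet_loss(feats, labels).backward()
```

In a multi-task setting, this metric loss would simply be added, with some weight, to the segmentation loss of the other branch.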
Affiliation(s)
- Kelei He
- Medical School of Nanjing University, Nanjing, China; National Institute of Healthcare Data Science at Nanjing University, Nanjing, China
- Chunfeng Lian
- School of Mathematics and Statistics, Xi'an Jiaotong University, Shaanxi, China
- Ehsan Adeli
- Department of Psychiatry and Behavioral Sciences and the Department of Computer Science, Stanford University, CA, USA
- Jing Huo
- State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
- Yang Gao
- National Institute of Healthcare Data Science at Nanjing University, Nanjing, China; State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
- Bing Zhang
- Department of Radiology, Nanjing Drum Tower Hospital, Nanjing University Medical School, Nanjing, China
- Junfeng Zhang
- Medical School of Nanjing University, Nanjing, China; National Institute of Healthcare Data Science at Nanjing University, Nanjing, China
- Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China; Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China; Department of Artificial Intelligence, Korea University, Seoul 02841, Republic of Korea
102
He H, Zhang C, Chen J, Geng R, Chen L, Liang Y, Lu Y, Wu J, Xu Y. A Hybrid-Attention Nested UNet for Nuclear Segmentation in Histopathological Images. Front Mol Biosci 2021; 8:614174. [PMID: 33681291] [PMCID: PMC7925890] [DOI: 10.3389/fmolb.2021.614174] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0]
Abstract
Nuclear segmentation of histopathological images is a crucial step in computer-aided image analysis. These images contain complex, diverse, dense, and even overlapping nuclei, which makes nuclear segmentation a challenging task. To overcome this challenge, this paper proposes a hybrid-attention nested UNet (Han-Net) consisting of two modules: a hybrid nested U-shaped network (H-part) and a hybrid attention block (A-part). The H-part combines a nested multi-depth U-shaped network with a full-resolution dense network to capture more effective features. The A-part explores attention information and builds correlations between different pixels. With these two modules, Han-Net extracts discriminative features that effectively segment the boundaries of complex and diverse nuclei as well as small and dense nuclei. Comparisons on a publicly available multi-organ dataset show that the proposed model achieves state-of-the-art performance.
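As a rough illustration of what a hybrid attention block computes, the sketch below combines channel attention and spatial attention over a feature map. It does not reproduce the exact Han-Net A-part; all layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class HybridAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        # channel attention: squeeze spatial dims, then excite channels
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        # spatial attention: a single-channel map over all pixel positions
        self.spatial = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        x = x * self.channel(x)      # reweight channels
        return x * self.spatial(x)   # reweight pixel positions

x = torch.randn(2, 64, 128, 128)
print(HybridAttention(64)(x).shape)  # torch.Size([2, 64, 128, 128])
```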
Affiliation(s)
- Hongliang He
- School of Electronic and Computer Engineering, Peking University, Shenzhen, China; Peng Cheng Laboratory, Shenzhen, China
- Chi Zhang
- School of Electronic and Computer Engineering, Peking University, Shenzhen, China; Peng Cheng Laboratory, Shenzhen, China
- Jie Chen
- School of Electronic and Computer Engineering, Peking University, Shenzhen, China; Peng Cheng Laboratory, Shenzhen, China
- Ruizhe Geng
- School of Electronic and Computer Engineering, Peking University, Shenzhen, China
- Luyang Chen
- College of Engineering, Pennsylvania State University, State College, PA, United States
- Yanchang Lu
- Beijing Normal University-Hong Kong Baptist University United International College, Zhuhai, China
- Jihua Wu
- PLA Strategic Support Force Characteristic Medical Center, Beijing, China
- Yongjie Xu
- PLA Strategic Support Force Characteristic Medical Center, Beijing, China
103
Li Z, Zhang J, Tan T, Teng X, Sun X, Zhao H, Liu L, Xiao Y, Lee B, Li Y, Zhang Q, Sun S, Zheng Y, Yan J, Li N, Hong Y, Ko J, Jung H, Liu Y, Chen YC, Wang CW, Yurovskiy V, Maevskikh P, Khanagha V, Jiang Y, Yu L, Liu Z, Li D, Schuffler PJ, Yu Q, Chen H, Tang Y, Litjens G. Deep Learning Methods for Lung Cancer Segmentation in Whole-Slide Histopathology Images: The ACDC@LungHP Challenge 2019. IEEE J Biomed Health Inform 2021; 25:429-440. [PMID: 33216724] [DOI: 10.1109/jbhi.2020.3039741] [Citation(s) in RCA: 31] [Impact Index Per Article: 10.3]
Abstract
Accurate segmentation of lung cancer in pathology slides is a critical step in improving patient care. We proposed the ACDC@LungHP (Automatic Cancer Detection and Classification in Whole-slide Lung Histopathology) challenge to evaluate different computer-aided diagnosis (CAD) methods for the automatic diagnosis of lung cancer. ACDC@LungHP 2019 focused on segmentation (pixel-wise detection) of cancer tissue in whole slide imaging (WSI), using an annotated dataset of 150 training images and 50 test images from 200 patients. This paper reviews the challenge and summarizes the top 10 submitted methods for lung cancer segmentation. All methods were evaluated using precision, accuracy, sensitivity, specificity, and the Dice coefficient (DC). The DC ranged from 0.7354 ± 0.1149 to 0.8372 ± 0.0858, and the DC of the best method was close to the inter-observer agreement (0.8398 ± 0.0890). All methods were based on deep learning and fell into two groups: multi-model and single-model methods. In general, multi-model methods were significantly better (p < 0.01) than single-model methods, with mean DCs of 0.7966 and 0.7544, respectively. Deep learning based methods could potentially help pathologists find suspicious regions for further analysis of lung cancer in WSI.
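For reference, the Dice coefficient used to rank submissions is the standard overlap measure 2|A∩B| / (|A| + |B|); a minimal NumPy version might look like this.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    # pred, target: binary masks of equal shape
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

a = np.random.rand(512, 512) > 0.5
b = np.random.rand(512, 512) > 0.5
print(dice_coefficient(a, b))  # around 0.5 for independent random masks
```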
104
Lin H, Chen H, Wang X, Wang Q, Wang L, Heng PA. Dual-path network with synergistic grouping loss and evidence driven risk stratification for whole slide cervical image analysis. Med Image Anal 2021; 69:101955. [PMID: 33588122] [DOI: 10.1016/j.media.2021.101955] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3]
Abstract
Cervical cancer has been one of the most lethal cancers threatening women's health. Nevertheless, its incidence can be effectively minimized with preventive clinical management strategies, including vaccines and regular screening examinations. Screening cervical smears under the microscope is a widely used routine that consumes a large amount of cytologists' time and labour. Computerized cytology analysis caters to this imperative need, alleviating cytologists' workload and reducing the potential misdiagnosis rate. However, automatic analysis of cervical smears via digitalized whole slide images (WSIs) remains a challenging problem, due to the extremely large image resolution, the existence of tiny lesions, noisy datasets, and intricate clinical definitions of classes with fuzzy boundaries. In this paper, we design an efficient deep convolutional neural network (CNN) with a dual-path (DP) encoder for lesion retrieval, which ensures inference efficiency and sensitivity to both tiny and large lesions. Incorporating a synergistic grouping loss (SGL), the network can be effectively trained on noisy datasets with fuzzy inter-class boundaries. Inspired by the diagnostic criteria of cytologists, a novel smear-level classifier, rule-based risk stratification (RRS), is proposed for accurate smear-level classification and risk stratification, aligning reasonably with the intricate cytological definitions of the classes. Extensive experiments on the largest dataset, including 19,303 WSIs from multiple medical centers, validate the robustness of our method. Achieving a high sensitivity of 0.907 and a specificity of 0.80, our method manifests the potential to reduce the workload of cytologists in routine practice.
Affiliation(s)
- Huangjing Lin
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Hao Chen
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Xi Wang
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Qiong Wang
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, China
- Liansheng Wang
- Department of Computer Science, Xiamen University, Xiamen, China
- Pheng-Ann Heng
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
105
106
Wang J, Zhu H, Wang SH, Zhang YD. A Review of Deep Learning on Medical Image Analysis. Mobile Netw Appl 2021; 26:351-380. [DOI: 10.1007/s11036-020-01672-7] [Citation(s) in RCA: 51] [Impact Index Per Article: 17.0]
107
Xie X, Niu J, Liu X, Chen Z, Tang S, Yu S. A survey on incorporating domain knowledge into deep learning for medical image analysis. Med Image Anal 2021; 69:101985. [PMID: 33588117] [DOI: 10.1016/j.media.2021.101985] [Citation(s) in RCA: 87] [Impact Index Per Article: 29.0]
Abstract
Although deep learning models like CNNs have achieved great success in medical image analysis, the small size of medical datasets remains a major bottleneck. To address this problem, researchers have started looking for external information beyond currently available medical datasets. Traditional approaches generally leverage information from natural images via transfer learning. More recent works utilize domain knowledge from medical doctors to create networks that resemble how doctors are trained, mimic their diagnostic patterns, or focus on the features or areas they pay particular attention to. In this survey, we summarize the current progress on integrating medical domain knowledge into deep learning models for various tasks, including disease diagnosis; lesion, organ, and abnormality detection; and lesion and organ segmentation. For each task, we systematically categorize the kinds of medical domain knowledge that have been utilized and the corresponding integration methods. We also outline current challenges and directions for future research.
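The transfer-learning route mentioned above is commonly realized by fine-tuning an ImageNet-pretrained backbone. A minimal PyTorch sketch follows; the two-class head and the frozen-backbone policy are assumptions, and the weights enum requires torchvision 0.13 or newer.

```python
import torch.nn as nn
from torchvision import models

# load an ImageNet-pretrained backbone (torchvision >= 0.13 API)
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for p in model.parameters():
    p.requires_grad = False  # optionally freeze the pretrained backbone
# replace the classification head for a hypothetical two-class medical task
model.fc = nn.Linear(model.fc.in_features, 2)
# during fine-tuning, only model.fc receives gradient updates
```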
Affiliation(s)
- Xiaozheng Xie
- State Key Laboratory of Virtual Reality Technology and Systems, School of Computer Science and Engineering, Beihang University, 37 Xueyuan Road, Haidian District, Beijing 100191, China
- Jianwei Niu
- State Key Laboratory of Virtual Reality Technology and Systems, School of Computer Science and Engineering, Beihang University, 37 Xueyuan Road, Haidian District, Beijing 100191, China; Beijing Advanced Innovation Center for Big Data and Brain Computing (BDBC) and Hangzhou Innovation Institute of Beihang University, 18 Chuanghui Street, Binjiang District, Hangzhou 310000, China
- Xuefeng Liu
- State Key Laboratory of Virtual Reality Technology and Systems, School of Computer Science and Engineering, Beihang University, 37 Xueyuan Road, Haidian District, Beijing 100191, China
- Zhengsu Chen
- State Key Laboratory of Virtual Reality Technology and Systems, School of Computer Science and Engineering, Beihang University, 37 Xueyuan Road, Haidian District, Beijing 100191, China
- Shaojie Tang
- Jindal School of Management, The University of Texas at Dallas, 800 W Campbell Rd, Richardson, TX 75080-3021, USA
- Shui Yu
- School of Computer Science, University of Technology Sydney, 15 Broadway, Ultimo NSW 2007, Australia
108
Liver segmentation in abdominal CT images via auto-context neural network and self-supervised contour attention. Artif Intell Med 2021; 113:102023. [PMID: 33685586] [DOI: 10.1016/j.artmed.2021.102023] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0]
Abstract
OBJECTIVE Accurate liver segmentation is a challenging problem owing to the liver's large shape variability and unclear boundaries. Although applications of fully convolutional neural networks (CNNs) have shown groundbreaking results, limited studies have focused on generalization performance. In this study, we introduce a CNN for liver segmentation on abdominal computed tomography (CT) images that focuses on both generalization performance and accuracy. METHODS To improve generalization performance, we first propose an auto-context algorithm within a single CNN. The proposed auto-context neural network exploits effective high-level residual estimation to obtain a shape prior, and identical dual paths are trained to represent mutually complementary features for accurate posterior analysis of the liver. We further extend the network with a self-supervised contour scheme, training sparse contour features by penalizing errors against the ground-truth contour so that contour attention focuses on failure regions. RESULTS We used 180 abdominal CT images for training and validation. Two-fold cross-validation was performed for comparison with state-of-the-art neural networks. The proposed network was more accurate than the state of the art, reducing the Hausdorff distance by 10.31%. Multiple N-fold cross-validations were also conducted to demonstrate the network's generalization performance. CONCLUSION AND SIGNIFICANCE The proposed method minimized the error between training and test images more than the other modern neural networks evaluated. Moreover, the contour scheme was successfully employed in the network by introducing a self-supervising metric.
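The auto-context idea, feeding the network's own posterior back as an extra input channel and refining it over several passes, can be sketched generically; the toy network below is illustrative and not the paper's architecture.

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

net = TinySegNet(in_ch=2)                      # image + previous posterior
image = torch.randn(1, 1, 256, 256)
posterior = torch.full((1, 1, 256, 256), 0.5)  # uninformative initial prior
for _ in range(3):  # each pass sees the previous prediction as context
    posterior = net(torch.cat([image, posterior], dim=1))
```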
109
Cai Y, Yu JG, Chen Y, Liu C, Xiao L, Grais EM, Zhao F, Lan L, Zeng S, Zeng J, Wu M, Su Y, Li Y, Zheng Y. Investigating the use of a two-stage attention-aware convolutional neural network for the automated diagnosis of otitis media from tympanic membrane images: a prediction model development and validation study. BMJ Open 2021; 11:e041139. [PMID: 33478963] [PMCID: PMC7825258] [DOI: 10.1136/bmjopen-2020-041139] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7]
Abstract
OBJECTIVES This study investigated the usefulness and performance of a two-stage attention-aware convolutional neural network (CNN) for the automated diagnosis of otitis media from tympanic membrane (TM) images. DESIGN A classification model development and validation study in ears with otitis media, based on otoscopic TM images. Two commonly used CNNs were trained and evaluated on the dataset. On the basis of a Class Activation Map (CAM), a two-stage classification pipeline was developed to improve accuracy and reliability and to simulate an expert reading the TM images. SETTING AND PARTICIPANTS This is a retrospective study using otoendoscopic images obtained from a Department of Otorhinolaryngology in China. A dataset of 6066 otoscopic images was generated from 2022 participants, comprising four kinds of TM images: normal eardrum, otitis media with effusion (OME), and two stages of chronic suppurative otitis media (CSOM). RESULTS The proposed method achieved an overall accuracy of 93.4% using ResNet50 as the backbone network in a threefold cross-validation. The F1 score was 94.3% for normal images and 96.8% for OME. There was a modest difference between the active and inactive stages of CSOM, which achieved F1 scores of 91.7% and 82.4%, respectively. The results demonstrate a classification performance equivalent to the diagnostic level of an associate professor in otolaryngology. CONCLUSIONS CNNs provide a useful and effective tool for the automated classification of TM images. In addition, a weakly supervised method such as CAM can help the network focus on discriminative parts of the image and improve performance with a relatively small database. This two-stage method can improve the accuracy of otitis media diagnosis for junior otolaryngologists and physicians in other disciplines.
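A Class Activation Map of the kind used in the first stage projects the final convolutional features through the classifier weights of the predicted class; the sketch below assumes a ResNet50-style backbone with four output classes.

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(weights=None, num_classes=4).eval()
feats = {}
# capture the last convolutional feature map during the forward pass
model.layer4.register_forward_hook(lambda m, i, o: feats.update(map=o))

with torch.no_grad():
    x = torch.randn(1, 3, 224, 224)      # stand-in for a TM image tensor
    cls = model(x).argmax(1).item()
    w = model.fc.weight[cls]             # classifier weights, shape (2048,)
    cam = torch.einsum('c,chw->hw', w, feats['map'][0])
    cam = F.interpolate(cam[None, None], size=x.shape[-2:],
                        mode='bilinear', align_corners=False)[0, 0]
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-7)  # to [0, 1]
```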
Affiliation(s)
- Yuexin Cai
- Department of Otolaryngology, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou, Guangdong Province, China; Institute of Hearing and Speech-Language Science, Sun Yat-Sen University, Guangzhou, Guangdong Province, China
- Jin-Gang Yu
- School of Automation Science and Engineering, South China University of Technology, Guangzhou, Guangdong, China
- Yuebo Chen
- Department of Otolaryngology, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou, Guangdong Province, China; Institute of Hearing and Speech-Language Science, Sun Yat-Sen University, Guangzhou, Guangdong Province, China
- Chu Liu
- Department of Otolaryngology, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou, Guangdong Province, China; Institute of Hearing and Speech-Language Science, Sun Yat-Sen University, Guangzhou, Guangdong Province, China
- Lichao Xiao
- School of Automation Science and Engineering, South China University of Technology, Guangzhou, Guangdong, China
- Emad M Grais
- Centre for Speech and Language Therapy and Hearing Science, Cardiff School of Sport and Health Sciences, Cardiff Metropolitan University, Cardiff, UK
- Fei Zhao
- Centre for Speech and Language Therapy and Hearing Science, Cardiff School of Sport and Health Sciences, Cardiff Metropolitan University, Cardiff, UK
- Liping Lan
- Department of Otolaryngology, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou, Guangdong Province, China; Institute of Hearing and Speech-Language Science, Sun Yat-Sen University, Guangzhou, Guangdong Province, China
- Shengxin Zeng
- Department of Otolaryngology, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou, Guangdong Province, China; Institute of Hearing and Speech-Language Science, Sun Yat-Sen University, Guangzhou, Guangdong Province, China
- Junbo Zeng
- Department of Otolaryngology, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou, Guangdong Province, China; Institute of Hearing and Speech-Language Science, Sun Yat-Sen University, Guangzhou, Guangdong Province, China
- Minjian Wu
- Department of Otolaryngology, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou, Guangdong Province, China; Institute of Hearing and Speech-Language Science, Sun Yat-Sen University, Guangzhou, Guangdong Province, China
- Yuejia Su
- Department of Otolaryngology, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou, Guangdong Province, China; Institute of Hearing and Speech-Language Science, Sun Yat-Sen University, Guangzhou, Guangdong Province, China
- Yuanqing Li
- School of Automation Science and Engineering, South China University of Technology, Guangzhou, Guangdong, China
- Yiqing Zheng
- Department of Otolaryngology, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou, Guangdong Province, China; Institute of Hearing and Speech-Language Science, Sun Yat-Sen University, Guangzhou, Guangdong Province, China
110
Liu D, Zhang D, Song Y, Huang H, Cai W. Panoptic Feature Fusion Net: A Novel Instance Segmentation Paradigm for Biomedical and Biological Images. IEEE Trans Image Process 2021; 30:2045-2059. [PMID: 33449878] [DOI: 10.1109/tip.2021.3050668] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7]
Abstract
Instance segmentation is an important task for biomedical and biological image analysis. Due to complicated background components, high variability in object appearance, numerous overlapping objects, and ambiguous object boundaries, the task remains challenging. Recently, deep learning based methods have been widely employed to solve these problems; they can be categorized into proposal-free and proposal-based methods. However, both suffer from information loss, as they focus on either global-level semantic or local-level instance features. To tackle this issue, we present a Panoptic Feature Fusion Net (PFFNet) that unifies semantic and instance features. Specifically, PFFNet contains a residual attention feature fusion mechanism that incorporates the instance prediction with the semantic features, in order to facilitate semantic contextual learning in the instance branch. A mask quality sub-branch is then designed to align the confidence score of each object with the quality of its mask prediction. Furthermore, a consistency regularization mechanism is designed between the semantic segmentation tasks in the semantic and instance branches, for robust learning of both tasks. Extensive experiments demonstrate the effectiveness of PFFNet, which outperforms several state-of-the-art methods on various biomedical and biological datasets.
111
Hu T, Xu X, Chen S, Liu Q. Accurate Neuronal Soma Segmentation Using 3D Multi-Task Learning U-Shaped Fully Convolutional Neural Networks. Front Neuroanat 2021; 14:592806. [PMID: 33551758] [PMCID: PMC7860594] [DOI: 10.3389/fnana.2020.592806] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7]
Abstract
Neuronal soma segmentation is a crucial step in the quantitative analysis of neuronal morphology. Automated segmentation methods open up the opportunity to replace the time-consuming manual labeling required for neuronal soma morphology reconstruction in large-scale images. However, touching neuronal somata and variable soma shapes make automation challenging. This study proposes a neuronal soma segmentation method that combines 3D U-shaped fully convolutional neural networks with multi-task learning. Compared to existing methods, this technique applies multi-task learning to predict the soma boundary, in order to split touching somata, and adopts a U-shaped convolutional neural network architecture that is effective for limited datasets. A contour-aware multi-task learning framework predicts the masks of neuronal somata and their boundaries simultaneously, and a spatial attention module is embedded into the multi-task model to further improve segmentation. The Nissl-stained dataset captured by a micro-optical sectioning tomography system is used to validate the proposed method. Compared with four existing segmentation models, the proposed method notably outperforms the others in both localization and segmentation. The method has potential for high-throughput neuronal soma segmentation in large-scale optical imaging data for quantitative analysis of neuron morphology.
Affiliation(s)
- Tianyu Hu
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China; MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Xiaofeng Xu
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China; MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Shangbin Chen
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China; MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Qian Liu
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China; MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China; School of Biomedical Engineering, Hainan University, Haikou, China
112
Esteva A, Chou K, Yeung S, Naik N, Madani A, Mottaghi A, Liu Y, Topol E, Dean J, Socher R. Deep learning-enabled medical computer vision. NPJ Digit Med 2021; 4:5. [PMID: 33420381] [PMCID: PMC7794558] [DOI: 10.1038/s41746-020-00376-2] [Citation(s) in RCA: 256] [Impact Index Per Article: 85.3]
Abstract
A decade of unprecedented progress in artificial intelligence (AI) has demonstrated the potential for many fields, including medicine, to benefit from the insights that AI techniques can extract from data. Here we survey recent progress in the development of modern computer vision techniques, powered by deep learning, for medical applications, focusing on medical imaging, medical video, and clinical deployment. We start by briefly summarizing a decade of progress in convolutional neural networks, including the vision tasks they enable, in the context of healthcare. Next, we discuss several example medical imaging applications that stand to benefit, including cardiology, pathology, dermatology, and ophthalmology, and propose new avenues for continued work. We then expand into general medical video, highlighting ways in which clinical workflows can integrate computer vision to enhance care. Finally, we discuss the challenges and hurdles facing real-world clinical deployment of these technologies.
Affiliation(s)
- Nikhil Naik
- Salesforce AI Research, San Francisco, CA, USA
- Ali Madani
- Salesforce AI Research, San Francisco, CA, USA
- Yun Liu
- Google Research, Mountain View, CA, USA
- Eric Topol
- Scripps Research Translational Institute, La Jolla, CA, USA
- Jeff Dean
- Google Research, Mountain View, CA, USA
113
Salvi M, Acharya UR, Molinari F, Meiburger KM. The impact of pre- and post-image processing techniques on deep learning frameworks: A comprehensive review for digital pathology image analysis. Comput Biol Med 2021; 128:104129. [DOI: 10.1016/j.compbiomed.2020.104129] [Citation(s) in RCA: 89] [Impact Index Per Article: 29.7]
114
Srinidhi CL, Ciga O, Martel AL. Deep neural network models for computational histopathology: A survey. Med Image Anal 2021; 67:101813. [PMID: 33049577] [PMCID: PMC7725956] [DOI: 10.1016/j.media.2020.101813] [Citation(s) in RCA: 212] [Impact Index Per Article: 70.7]
Abstract
Histopathological images contain rich phenotypic information that can be used to monitor the underlying mechanisms contributing to disease progression and patient survival outcomes. Recently, deep learning has become the mainstream methodological choice for analyzing and interpreting histology images. In this paper, we present a comprehensive review of state-of-the-art deep learning approaches that have been used in the context of histopathological image analysis. Surveying over 130 papers, we review the field's progress by methodological aspect, covering machine learning strategies such as supervised, weakly supervised, unsupervised, and transfer learning, along with their sub-variants. We also provide an overview of deep learning based survival models applicable to disease-specific prognosis tasks. Finally, we summarize several existing open datasets and highlight critical challenges and limitations of current deep learning approaches, along with possible avenues for future research.
Affiliation(s)
- Chetan L Srinidhi
- Physical Sciences, Sunnybrook Research Institute, Toronto, Canada; Department of Medical Biophysics, University of Toronto, Canada
- Ozan Ciga
- Department of Medical Biophysics, University of Toronto, Canada
- Anne L Martel
- Physical Sciences, Sunnybrook Research Institute, Toronto, Canada; Department of Medical Biophysics, University of Toronto, Canada
115
Xing F, Zhang X, Cornish TC. Artificial intelligence for pathology. Artif Intell Med 2021. [DOI: 10.1016/b978-0-12-821259-2.00011-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
116
Bouteldja N, Klinkhammer BM, Bülow RD, Droste P, Otten SW, Freifrau von Stillfried S, Moellmann J, Sheehan SM, Korstanje R, Menzel S, Bankhead P, Mietsch M, Drummer C, Lehrke M, Kramann R, Floege J, Boor P, Merhof D. Deep Learning-Based Segmentation and Quantification in Experimental Kidney Histopathology. J Am Soc Nephrol 2021; 32:52-68. [PMID: 33154175] [PMCID: PMC7894663] [DOI: 10.1681/asn.2020050597] [Citation(s) in RCA: 89] [Impact Index Per Article: 29.7]
Abstract
BACKGROUND Nephropathologic analyses provide important outcomes-related data in experiments with the animal models that are essential for understanding kidney disease pathophysiology. Precision medicine increases the demand for quantitative, unbiased, reproducible, and efficient histopathologic analyses, which will require novel high-throughput tools. A deep learning technique, the convolutional neural network, is increasingly applied in pathology because of its high performance in tasks like histology segmentation. METHODS We investigated the use of a convolutional neural network architecture for accurate segmentation of periodic acid-Schiff-stained kidney tissue from healthy mice and five murine disease models, and from other species used in preclinical research. We trained the convolutional neural network to segment six major renal structures: glomerular tuft, glomerulus including Bowman's capsule, tubules, arteries, arterial lumina, and veins. To achieve high accuracy, we performed a large number of expert-based annotations, 72,722 in total. RESULTS Multiclass segmentation performance was very high in all disease models. The convolutional neural network allowed high-throughput, large-scale, quantitative, and comparative analyses of the various models. In disease models, computational feature extraction revealed interstitial expansion, tubular dilation and atrophy, and glomerular size variability. Validation showed a high correlation of findings with current standard morphometric analysis. The convolutional neural network also showed high performance in other species used in research, including rats, pigs, bears, and marmosets, as well as in humans, providing a translational bridge between preclinical and clinical studies. CONCLUSIONS We developed a deep learning algorithm for accurate multiclass segmentation of digital whole-slide images of periodic acid-Schiff-stained kidneys from various species and renal disease models. This enables reproducible quantitative histopathologic analyses in preclinical models that might also be applicable to clinical studies.
Affiliation(s)
- Nassim Bouteldja
- Institute of Imaging and Computer Vision, RWTH Aachen University, Aachen, Germany
- Barbara M. Klinkhammer
- Institute of Pathology, RWTH Aachen University Hospital, Aachen, Germany; Department of Nephrology and Immunology, RWTH Aachen University Hospital, Aachen, Germany
- Roman D. Bülow
- Institute of Pathology, RWTH Aachen University Hospital, Aachen, Germany
- Patrick Droste
- Institute of Pathology, RWTH Aachen University Hospital, Aachen, Germany
- Simon W. Otten
- Institute of Pathology, RWTH Aachen University Hospital, Aachen, Germany
- Julia Moellmann
- Department of Cardiology and Vascular Medicine, RWTH Aachen University Hospital, Aachen, Germany
- Sylvia Menzel
- Department of Nephrology and Immunology, RWTH Aachen University Hospital, Aachen, Germany
- Peter Bankhead
- Edinburgh Pathology, University of Edinburgh, Edinburgh, United Kingdom; Institute of Genetics and Molecular Medicine, University of Edinburgh, Edinburgh, United Kingdom
- Matthias Mietsch
- Laboratory Animal Science Unit, German Primate Center, Goettingen, Germany
- Charis Drummer
- Platform Degenerative Diseases, German Primate Center, Goettingen, Germany
- Michael Lehrke
- Department of Cardiology and Vascular Medicine, RWTH Aachen University Hospital, Aachen, Germany
- Rafael Kramann
- Department of Nephrology and Immunology, RWTH Aachen University Hospital, Aachen, Germany; Department of Internal Medicine, Nephrology and Transplantation, Erasmus Medical Center, Rotterdam, The Netherlands
- Jürgen Floege
- Department of Nephrology and Immunology, RWTH Aachen University Hospital, Aachen, Germany
- Peter Boor
- Institute of Pathology, RWTH Aachen University Hospital, Aachen, Germany; Department of Nephrology and Immunology, RWTH Aachen University Hospital, Aachen, Germany
- Dorit Merhof
- Institute of Imaging and Computer Vision, RWTH Aachen University, Aachen, Germany; Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
117
Loo J, Kriegel MF, Tuohy MM, Kim KH, Prajna V, Woodward MA, Farsiu S. Open-Source Automatic Segmentation of Ocular Structures and Biomarkers of Microbial Keratitis on Slit-Lamp Photography Images Using Deep Learning. IEEE J Biomed Health Inform 2021; 25:88-99. [PMID: 32248131] [PMCID: PMC7781042] [DOI: 10.1109/jbhi.2020.2983549] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7]
Abstract
We propose a fully-automatic deep learning-based algorithm for segmentation of ocular structures and microbial keratitis (MK) biomarkers on slit-lamp photography (SLP) images. The dataset consisted of SLP images from 133 eyes with manual annotations by a physician, P1. A modified region-based convolutional neural network, SLIT-Net, was developed and trained using P1's annotations to identify and segment four pathological regions of interest (ROIs) on diffuse white light images (stromal infiltrate (SI), hypopyon, white blood cell (WBC) border, corneal edema border), one pathological ROI on diffuse blue light images (epithelial defect (ED)), and two non-pathological ROIs on all images (corneal limbus, light reflexes). To assess inter-reader variability, 75 eyes were manually annotated for pathological ROIs by a second physician, P2. Performance was evaluated using the Dice similarity coefficient (DSC) and Hausdorff distance (HD). Using seven-fold cross-validation, the DSC of the algorithm (as compared to P1) was good (range: 0.62-0.95) for all ROIs on all 133 eyes. For the subset of 75 eyes with manual annotations by P2, the DSC for pathological ROIs ranged from 0.69-0.85 (SLIT-Net) vs. 0.37-0.92 (P2). DSCs for SLIT-Net were not significantly different from P2 for segmenting hypopyons (p > 0.05) and were higher than P2 for WBCs (p < 0.001) and edema (p < 0.001). DSCs were higher for P2 for segmenting SIs (p < 0.001) and EDs (p < 0.001). HDs were lower for P2 for segmenting SIs (p = 0.005) and EDs (p < 0.001) and not significantly different for hypopyons (p > 0.05), WBCs (p > 0.05), and edema (p > 0.05). This prototype fully-automatic algorithm to segment MK biomarkers on SLP images performed to expectations on an exploratory dataset and holds promise for quantification of corneal physiology and pathology.
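The Hausdorff distance reported alongside the DSC is the symmetric combination of two directed distances; a small sketch using SciPy is shown below, where extracting foreground coordinates from binary masks is an assumption about the input format.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(mask_a, mask_b):
    pts_a = np.argwhere(mask_a)  # (row, col) coordinates of foreground pixels
    pts_b = np.argwhere(mask_b)
    return max(directed_hausdorff(pts_a, pts_b)[0],
               directed_hausdorff(pts_b, pts_a)[0])

a = np.zeros((64, 64), bool); a[10:30, 10:30] = True
b = np.zeros((64, 64), bool); b[12:34, 12:34] = True
print(hausdorff(a, b))  # in pixels
```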
118
Guo Z, Zhang H, Chen Z, van der Plas E, Gutmann L, Thedens D, Nopoulos P, Sonka M. Fully automated 3D segmentation of MR-imaged calf muscle compartments: Neighborhood relationship enhanced fully convolutional network. Comput Med Imaging Graph 2021; 87:101835. [PMID: 33373972] [PMCID: PMC7855601] [DOI: 10.1016/j.compmedimag.2020.101835] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0]
Abstract
Automated segmentation of individual calf muscle compartments from 3D magnetic resonance (MR) images is essential for developing quantitative biomarkers of muscular disease progression and its prediction. Achieving clinically acceptable results is a challenging task due to large variations in muscle shape and MR appearance. In this paper, we present a novel fully convolutional network (FCN) that utilizes contextual information in a large neighborhood and embeds edge-aware constraints for individual calf muscle compartment segmentation. An encoder-decoder architecture systematically enlarges the convolutional receptive field and preserves information at all resolutions. Edge positions derived from the FCN's output muscle probability maps are explicitly regularized using kernel-based edge detection in an end-to-end optimization framework. Our method was evaluated on 40 T1-weighted MR images of 10 healthy and 30 diseased subjects by fourfold cross-validation. Mean Dice coefficients of 88.00-91.29% and mean absolute surface positioning errors of 1.04-1.66 mm were achieved for the five 3D muscle compartments.
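Kernel-based edge regularization can be sketched by convolving both the predicted probability map and the ground truth with a fixed Sobel operator and penalizing their disagreement; the paper's exact kernel and loss weighting are not reproduced here.

```python
import torch
import torch.nn.functional as F

sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
kernel = torch.stack([sobel_x, sobel_x.t()]).unsqueeze(1)  # (2, 1, 3, 3)

def edge_loss(prob, target):
    # prob, target: (B, 1, H, W); convolve with both Sobel directions
    e_pred = F.conv2d(prob, kernel, padding=1)
    e_true = F.conv2d(target, kernel, padding=1)
    return F.l1_loss(e_pred, e_true)

prob = torch.rand(2, 1, 64, 64, requires_grad=True)
target = (torch.rand(2, 1, 64, 64) > 0.5).float()
edge_loss(prob, target).backward()
```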
Affiliation(s)
- Zhihui Guo
- Iowa Institute for Biomedical Imaging, University of Iowa, Iowa City, IA 52242, USA
- Honghai Zhang
- Iowa Institute for Biomedical Imaging, University of Iowa, Iowa City, IA 52242, USA
- Zhi Chen
- Iowa Institute for Biomedical Imaging, University of Iowa, Iowa City, IA 52242, USA
- Laurie Gutmann
- Department of Neurology, University of Iowa, Iowa City, IA 52242, USA
- Daniel Thedens
- Department of Psychiatry, University of Iowa, Iowa City, IA 52242, USA
- Peggy Nopoulos
- Department of Psychiatry, University of Iowa, Iowa City, IA 52242, USA
- Milan Sonka
- Iowa Institute for Biomedical Imaging, University of Iowa, Iowa City, IA 52242, USA
119
Gunesli GN, Sokmensuer C, Gunduz-Demir C. AttentionBoost: Learning What to Attend for Gland Segmentation in Histopathological Images by Boosting Fully Convolutional Networks. IEEE Trans Med Imaging 2020; 39:4262-4273. [PMID: 32780699] [DOI: 10.1109/tmi.2020.3015198] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5]
Abstract
Fully convolutional networks (FCNs) are widely used for instance segmentation. One important challenge is to train these networks to yield good generalization for hard-to-learn pixels, whose correct prediction may greatly affect success. A typical group of such hard-to-learn pixels are boundaries between instances. Many studies have developed strategies to pay more attention to learning these boundary pixels, including designing multi-task networks with an additional boundary-prediction task and increasing the weights of boundary pixels' predictions in the loss function. Such strategies require defining what to attend to beforehand and incorporating this predefined attention into the learning model. However, other groups of hard-to-learn pixels may exist, and manually defining and incorporating the appropriate attention for each group may not be feasible. To provide an adaptable solution for learning different groups of hard-to-learn pixels, this article proposes AttentionBoost, a new multi-attention learning model based on adaptive boosting, for gland instance segmentation in histopathological images. AttentionBoost designs a multi-stage network and introduces a new loss adjustment mechanism that lets an FCN adaptively learn what to attend to at each stage, directly from image data, without requiring any prior definition. This mechanism modulates the attention of each stage to correct the mistakes of previous stages, adjusting the loss weight of each pixel prediction separately according to how accurate the previous stages were on that pixel. In experiments on histopathological images of colon tissue, the proposed AttentionBoost model improves gland segmentation compared to its counterparts.
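The boosting-style loss adjustment can be illustrated by weighting each pixel's loss by the previous stage's error on that pixel, so that mistakes of earlier stages attract more attention; the exact modulation function used in AttentionBoost is not reproduced in this sketch.

```python
import torch
import torch.nn.functional as F

def boosted_bce(logits, target, prev_prob):
    # weight each pixel by the previous stage's absolute error on it
    prev_err = (prev_prob - target).abs().detach()
    weights = 1.0 + prev_err  # badly predicted pixels weigh up to 2x
    bce = F.binary_cross_entropy_with_logits(logits, target, reduction='none')
    return (weights * bce).mean()

logits = torch.randn(1, 1, 128, 128, requires_grad=True)   # current stage
target = (torch.rand(1, 1, 128, 128) > 0.5).float()
prev_prob = torch.rand(1, 1, 128, 128)                     # stage t-1 posterior
boosted_bce(logits, target, prev_prob).backward()
```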
120
121
Roy M, Wang F, Vo H, Teng D, Teodoro G, Farris AB, Castillo-Leon E, Vos MB, Kong J. Deep-learning-based accurate hepatic steatosis quantification for histological assessment of liver biopsies. Lab Invest 2020; 100:1367-1383. [PMID: 32661341] [PMCID: PMC7502534] [DOI: 10.1038/s41374-020-0463-y] [Citation(s) in RCA: 32] [Impact Index Per Article: 8.0]
Abstract
Hepatic steatosis droplet quantification in histology biopsies has high clinical significance for risk stratification and management of patients with fatty liver disease, and for the decision to use donor livers for transplantation. However, manual pathology review is subject to high inter- and intra-reader variability, owing to the overwhelmingly large number and significantly varying appearance of steatosis instances. The process is challenging because many steatosis droplets overlap and have missing or weak boundaries. In this study, we propose a deep-learning-based region-boundary integrated network for precise steatosis quantification in whole slide liver histopathology images. The proposed model consists of two sequential modules, one extracting foreground regions and one predicting steatosis boundaries, followed by generation of an integrated prediction map. Missing steatosis boundaries are then recovered from the predicted map and assembled across adjacent image patches to generate results for the whole slide image. The resulting steatosis measures, at both the pixel level and the steatosis object level, correlate strongly with pathologist annotations, radiology readouts, and clinical data. In addition, the segregated steatosis object count is shown to be a promising alternative to the traditional pixel-level metrics. These results suggest a high potential for artificial intelligence-assisted technology to enhance liver disease decision support using whole slide images.
Affiliation(s)
- Mousumi Roy
- Department of Computer Science, Stony Brook University, Stony Brook, NY, 11794, USA
- Fusheng Wang
- Department of Computer Science, Stony Brook University, Stony Brook, NY, 11794, USA; Department of Biomedical Informatics, Stony Brook University, Stony Brook, NY, 11794, USA
- Hoang Vo
- Department of Computer Science, Stony Brook University, Stony Brook, NY, 11794, USA
- Dejun Teng
- Department of Computer Science, Stony Brook University, Stony Brook, NY, 11794, USA
- George Teodoro
- Department of Computer Science, Federal University of Minas Gerais, Belo Horizonte, MG, 31270, Brazil
- Alton B Farris
- Department of Pathology and Laboratory Medicine, Emory University School of Medicine, Atlanta, GA, 30322, USA
- Eduardo Castillo-Leon
- Division of Gastroenterology, Hepatology, and Nutrition, Department of Pediatrics, Emory University, Atlanta, GA, 30322, USA
- Miriam B Vos
- Division of Gastroenterology, Hepatology, and Nutrition, Department of Pediatrics, Emory University, Atlanta, GA, 30322, USA; Children's Healthcare of Atlanta, Atlanta, GA, 30322, USA
- Jun Kong
- Department of Mathematics and Statistics, Georgia State University, Atlanta, GA, 30303, USA; Department of Computer Science, Emory University, Atlanta, GA, 30322, USA; Department of Biomedical Informatics, Emory University, Atlanta, GA, 30322, USA; Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
122
Vakanski A, Xian M, Freer PE. Attention-Enriched Deep Learning Model for Breast Tumor Segmentation in Ultrasound Images. Ultrasound Med Biol 2020; 46:2819-2833. [PMID: 32709519] [PMCID: PMC7483681] [DOI: 10.1016/j.ultrasmedbio.2020.06.015] [Citation(s) in RCA: 49] [Impact Index Per Article: 12.3]
Abstract
Incorporating human domain knowledge for breast tumor diagnosis is challenging because shape, boundary, curvature, intensity or other common medical priors vary significantly across patients and cannot be employed. This work proposes a new approach to integrating visual saliency into a deep learning model for breast tumor segmentation in ultrasound images. Visual saliency refers to image maps containing regions that are more likely to attract radiologists' visual attention. The proposed approach introduces attention blocks into a U-Net architecture and learns feature representations that prioritize spatial regions with high saliency levels. The validation results indicate increased accuracy for tumor segmentation relative to models without salient attention layers. The approach achieved a Dice similarity coefficient (DSC) of 90.5% on a data set of 510 images. The salient attention model has the potential to enhance accuracy and robustness in processing medical images of other organs, by providing a means to incorporate task-specific knowledge into deep learning architectures.
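One way to realize attention blocks conditioned on an external saliency map, in the spirit described above, is to let the saliency prior gate the skip features of a U-Net; the sketch below makes this concrete, with all layer sizes as illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SalientAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(channels + 1, channels, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 1),
            nn.Sigmoid(),
        )

    def forward(self, feat, saliency):
        # saliency: (B, 1, h, w) map resized to the feature resolution
        s = F.interpolate(saliency, size=feat.shape[-2:],
                          mode='bilinear', align_corners=False)
        alpha = self.gate(torch.cat([feat, s], dim=1))  # (B, 1, H, W) in [0, 1]
        return feat * alpha

feat = torch.randn(2, 64, 64, 64)   # a U-Net skip feature map
sal = torch.rand(2, 1, 256, 256)    # precomputed visual saliency map
print(SalientAttention(64)(feat, sal).shape)
```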
Affiliation(s)
- Aleksandar Vakanski
- Department of Computer Science, University of Idaho, Idaho Falls, Idaho, USA
- Min Xian
- Department of Computer Science, University of Idaho, Idaho Falls, Idaho, USA
- Phoebe E Freer
- University of Utah School of Medicine, Salt Lake City, Utah, USA
123
124
Jackson CR, Sriharan A, Vaickus LJ. A machine learning algorithm for simulating immunohistochemistry: development of SOX10 virtual IHC and evaluation on primarily melanocytic neoplasms. Mod Pathol 2020; 33:1638-1648. [PMID: 32238879] [PMCID: PMC10811656] [DOI: 10.1038/s41379-020-0526-z] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3]
Abstract
Immunohistochemistry (IHC) is a diagnostic technique used throughout pathology. A machine learning algorithm that could predict individual cell immunophenotype based on hematoxylin and eosin (H&E) staining would save money and time and reduce the tissue consumed. Prior approaches have lacked the spatial accuracy needed for cell-specific analytical tasks. Here, IHC performed on destained H&E slides is used to create a neural network that is potentially capable of predicting individual cell immunophenotype. Twelve slides were stained with H&E and scanned to create digital whole slide images. The H&E slides were then destained and stained with SOX10 IHC. The SOX10 IHC slides were scanned, and the corresponding H&E and IHC digital images were registered. Color-thresholding and machine learning techniques were applied to the registered H&E and IHC images to segment 3,396,668 SOX10-negative cells and 306,166 SOX10-positive cells. The resulting segmentation was used to annotate the original H&E images, and a convolutional neural network was trained to predict SOX10 nuclear staining. Sixteen thousand three hundred and nine image patches were used to train the virtual IHC (vIHC) neural network, and 1,813 image patches were used to evaluate it quantitatively. The resulting vIHC neural network achieved an area under the curve of 0.9422 in a receiver operating characteristic analysis when sorting individual nuclei. The vIHC network was applied to additional images from clinical practice and evaluated qualitatively by a board-certified dermatopathologist. Further work is needed to make the process more efficient and accurate for clinical use. This proof of concept demonstrates the feasibility of creating neural-network-driven vIHC assays.
Affiliation(s)
- Christopher R Jackson
- Department of Pathology and Laboratory Medicine, Dartmouth-Hitchcock Medical Center, Lebanon, NH, USA
- Aravindhan Sriharan
- Department of Pathology and Laboratory Medicine, Dartmouth-Hitchcock Medical Center, Lebanon, NH, USA
- Louis J Vaickus
- Department of Pathology and Laboratory Medicine, Dartmouth-Hitchcock Medical Center, Lebanon, NH, USA
125
Wan T, Zhao L, Feng H, Li D, Tong C, Qin Z. Robust nuclei segmentation in histopathology using ASPPU-Net and boundary refinement. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2019.08.103] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3]
126
Wang X, Chen H, Gan C, Lin H, Dou Q, Tsougenis E, Huang Q, Cai M, Heng PA. Weakly Supervised Deep Learning for Whole Slide Lung Cancer Image Analysis. IEEE Trans Cybern 2020; 50:3950-3962. [PMID: 31484154] [DOI: 10.1109/tcyb.2019.2935141] [Citation(s) in RCA: 126] [Impact Index Per Article: 31.5]
Abstract
Histopathology image analysis serves as the gold standard for cancer diagnosis, and efficient, precise diagnosis is critical for the subsequent treatment of patients. So far, computer-aided diagnosis has not been widely applied in the pathology field, as the currently well-addressed tasks are only the tip of the iceberg. Whole slide image (WSI) classification is a challenging problem. First, the scarcity of annotations heavily impedes the development of effective approaches: pixel-wise delineated annotations on WSIs are time-consuming and tedious, which makes it difficult to build large-scale training datasets. In addition, the variety of heterogeneous tumor patterns present at high magnification is a major obstacle. Furthermore, a gigapixel-scale WSI cannot be analyzed directly due to the immeasurable computational cost. Designing weakly supervised learning methods that maximize the use of WSI-level labels, which are readily obtained in clinical practice, is therefore quite appealing. To overcome these challenges, we present a weakly supervised approach for fast and effective classification of whole slide lung cancer images. Our method first uses a patch-based fully convolutional network (FCN) to retrieve discriminative blocks and provide representative deep features with high efficiency. Different context-aware block selection and feature aggregation strategies are then explored to generate a globally holistic WSI descriptor, which is ultimately fed into a random forest (RF) classifier for the image-level prediction. To the best of our knowledge, this is the first study to exploit the potential of image-level labels along with some coarse annotations for weakly supervised learning. A large-scale lung cancer WSI dataset was constructed for evaluation, validating the effectiveness and feasibility of the proposed method. Extensive experiments demonstrate the superior performance of our method, which surpasses state-of-the-art approaches by a significant margin with an accuracy of 97.3%. Our method also achieves the best performance on the public lung cancer WSI dataset from The Cancer Genome Atlas (TCGA). We highlight that a small number of coarse annotations can contribute to further accuracy improvement. We believe that weakly supervised learning methods have great potential to assist pathologists in histology image diagnosis in the near future.
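The final stage, aggregating patch-level deep features into a single WSI descriptor and classifying it with a random forest, can be sketched as follows; the feature dimension and the mean-pooling aggregation are simplifying assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# stand-in data: each of 100 slides yields 500 patch features of dimension 512,
# mean-pooled into one descriptor per slide
slide_descriptors = np.stack(
    [rng.normal(size=(500, 512)).mean(axis=0) for _ in range(100)])
labels = rng.integers(0, 2, size=100)  # slide-level (WSI-level) labels

rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(slide_descriptors[:80], labels[:80])
print(rf.score(slide_descriptors[80:], labels[80:]))  # held-out accuracy
```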
127
Khouani A, El Habib Daho M, Mahmoudi SA, Chikh MA, Benzineb B. Automated recognition of white blood cells using deep learning. Biomed Eng Lett 2020; 10:359-367. [PMID: 32850177] [PMCID: PMC7438424] [DOI: 10.1007/s13534-020-00168-3] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3]
Abstract
The detection, counting, and precise segmentation of white blood cells in cytological images are vital steps in the effective diagnosis of several cancers. This paper introduces an efficient method for the automatic recognition of white blood cells in peripheral blood and bone marrow images, based on deep learning, to relieve hematologists of tedious tasks in clinical practice. First, input image pre-processing was proposed before applying a deep neural network model adapted to cell localization and segmentation. Then, model outputs were improved by using combined predictions and corrections. Finally, a new algorithm that exploits the cooperation between model results and spatial information was implemented to improve the segmentation quality. The model was implemented in Python using the TensorFlow and Keras libraries. The calculations were executed on an NVIDIA 1080 GPU, while the datasets used in our experiments came from patients in the Hemobiology service of Tlemcen Hospital (Algeria). The results were promising and showed the efficiency, power, and speed of the proposed method compared to state-of-the-art methods. In addition to its accuracy of 95.73%, the proposed approach provided fast predictions (less than 1 s).
Collapse
|
128
|
Chung M, Lee J, Lee M, Lee J, Shin YG. Deeply self-supervised contour embedded neural network applied to liver segmentation. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2020; 192:105447. [PMID: 32203792 DOI: 10.1016/j.cmpb.2020.105447] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/08/2019] [Revised: 10/17/2019] [Accepted: 03/13/2020] [Indexed: 05/26/2023]
Abstract
OBJECTIVE Herein, a neural network-based liver segmentation algorithm is proposed, and its performance is evaluated using abdominal computed tomography (CT) images. METHODS A fully convolutional network was developed to overcome the volumetric image segmentation problem. To guide the neural network to accurately delineate the target liver object, the network was deeply supervised by applying an adaptive self-supervision scheme to derive the essential contour, which acts as a complement to the global shape. The discriminative contour, shape, and deep features were internally merged for the segmentation results. RESULTS AND CONCLUSION 160 abdominal CT images were used for training and validation. The quantitative evaluation of the proposed network was performed through eight-fold cross-validation. The results showed that the method, which uses the contour feature, segmented the liver more accurately than the state of the art, with a 2.13% improvement in the Dice score. SIGNIFICANCE In this study, a new framework was introduced to guide a neural network to learn complementary contour features. The proposed neural network demonstrates that the guided contour features can significantly improve the performance of the segmentation task.
Collapse
Affiliation(s)
- Minyoung Chung
- School of Computer Science and Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 151-742, Korea.
| | - Jingyu Lee
- School of Computer Science and Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 151-742, Korea.
| | - Minkyung Lee
- School of Computer Science and Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 151-742, Korea.
| | - Jeongjin Lee
- School of Computer Science and Engineering, Soongsil University, 369 Sangdo-Ro, Dongjak-Gu, Seoul 156-743, Korea.
| | - Yeong-Gil Shin
- School of Computer Science and Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 151-742, Korea.
| |
Collapse
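A minimal sketch of one way to derive the contour supervision this entry describes, using a morphological gradient of a binary liver mask; the paper's adaptive self-supervision scheme is more involved, so this is only a stand-in with an assumed contour width.

```python
import numpy as np
from scipy import ndimage

def contour_target(mask, width=2):
    """Derive a contour map from a boolean liver mask as the XOR of its
    dilation and erosion (a morphological gradient of assumed width)."""
    dil = ndimage.binary_dilation(mask, iterations=width)
    ero = ndimage.binary_erosion(mask, iterations=width)
    return (dil ^ ero).astype(np.float32)
```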
|
129
|
Zhou Y, Yen GG, Yi Z. Evolutionary Compression of Deep Neural Networks for Biomedical Image Segmentation. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2020; 31:2916-2929. [PMID: 31536016 DOI: 10.1109/tnnls.2019.2933879] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Biomedical image segmentation is lately dominated by deep neural networks (DNNs) due to their performance surpassing expert level. However, existing DNN models for biomedical image segmentation are generally highly parameterized, which severely impedes their deployment on real-time platforms and portable devices. To tackle this difficulty, we propose an evolutionary compression method (ECDNN) to automatically discover efficient DNN architectures for biomedical image segmentation. Different from existing studies, ECDNN can optimize network loss and the number of parameters simultaneously during the evolution, and it searches for a set of Pareto-optimal solutions in a single run, which is useful for quantifying the trade-off in satisfying different objectives and flexible for compressing DNNs when preference information is uncertain. In particular, a set of novel genetic operators is proposed for automatically identifying less important filters across the whole network. Moreover, a pruning operator is designed for eliminating convolutional filters from layers involved in feature map concatenation, which is commonly adopted in DNN architectures for capturing multi-level features from biomedical images. Experiments on compressing DNNs for retinal vessel and neuronal membrane segmentation tasks show that ECDNN can not only improve performance without any retraining but also discover efficient network architectures that maintain performance well. The superiority of the proposed method is further validated by comparison with state-of-the-art methods.
Collapse
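The core multi-objective idea here (keeping only architectures that are not dominated in both loss and parameter count) can be sketched in a few lines. The toy filter below is not the paper's genetic algorithm, just the Pareto-front selection it relies on.

```python
def pareto_front(candidates):
    """Return indices of non-dominated (loss, n_params) pairs, i.e. the
    trade-off set an evolutionary compressor searches for (toy example)."""
    idx = []
    for i, (li, pi) in enumerate(candidates):
        dominated = any(lj <= li and pj <= pi and (lj, pj) != (li, pi)
                        for lj, pj in candidates)
        if not dominated:
            idx.append(i)
    return idx

cands = [(0.10, 9e6), (0.12, 4e6), (0.12, 6e6), (0.20, 2e6)]
print(pareto_front(cands))  # -> [0, 1, 3]; (0.12, 6e6) is dominated
```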
|
130
|
Deng S, Zhang X, Yan W, Chang EIC, Fan Y, Lai M, Xu Y. Deep learning in digital pathology image analysis: a survey. Front Med 2020; 14:470-487. [PMID: 32728875 DOI: 10.1007/s11684-020-0782-9] [Citation(s) in RCA: 51] [Impact Index Per Article: 12.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2019] [Accepted: 03/05/2020] [Indexed: 12/21/2022]
Abstract
Deep learning (DL) has achieved state-of-the-art performance in many digital pathology analysis tasks. Traditional methods usually require hand-crafted, domain-specific features, whereas DL methods can learn representations without manually designed features. In terms of feature extraction, DL approaches are therefore less labor-intensive than conventional machine learning methods. In this paper, we comprehensively summarize recent DL-based image analysis studies in histopathology, covering different tasks (e.g., classification, semantic segmentation, detection, and instance segmentation) and various applications (e.g., stain normalization and cell/gland/region structure analysis). DL methods can provide consistent and accurate outcomes, and DL is a promising tool to assist pathologists in clinical diagnosis.
Collapse
Affiliation(s)
- Shujian Deng
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China
- Key Laboratory of Biomechanics and Mechanobiology of Ministry of Education and State Key Laboratory of Software Development Environment, Beihang University, Beijing, 100191, China
- Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, 100191, China
| | - Xin Zhang
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China
- Key Laboratory of Biomechanics and Mechanobiology of Ministry of Education and State Key Laboratory of Software Development Environment, Beihang University, Beijing, 100191, China
- Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, 100191, China
| | - Wen Yan
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China
- Key Laboratory of Biomechanics and Mechanobiology of Ministry of Education and State Key Laboratory of Software Development Environment, Beihang University, Beijing, 100191, China
- Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, 100191, China
| | | | - Yubo Fan
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China
- Key Laboratory of Biomechanics and Mechanobiology of Ministry of Education and State Key Laboratory of Software Development Environment, Beihang University, Beijing, 100191, China
- Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, 100191, China
| | - Maode Lai
- Department of Pathology, School of Medicine, Zhejiang University, Hangzhou, 310007, China
| | - Yan Xu
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China.
- Key Laboratory of Biomechanics and Mechanobiology of Ministry of Education and State Key Laboratory of Software Development Environment, Beihang University, Beijing, 100191, China.
- Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, 100191, China.
- Microsoft Research Asia, Beijing, 100080, China.
| |
Collapse
|
131
|
Panayides AS, Amini A, Filipovic ND, Sharma A, Tsaftaris SA, Young A, Foran D, Do N, Golemati S, Kurc T, Huang K, Nikita KS, Veasey BP, Zervakis M, Saltz JH, Pattichis CS. AI in Medical Imaging Informatics: Current Challenges and Future Directions. IEEE J Biomed Health Inform 2020; 24:1837-1857. [PMID: 32609615 PMCID: PMC8580417 DOI: 10.1109/jbhi.2020.2991043] [Citation(s) in RCA: 116] [Impact Index Per Article: 29.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
This paper reviews state-of-the-art research solutions across the spectrum of medical imaging informatics, discusses clinical translation, and provides future directions for advancing clinical practice. More specifically, it summarizes advances in medical imaging acquisition technologies for different modalities, highlighting the necessity for efficient medical data management strategies in the context of AI in big healthcare data analytics. It then provides a synopsis of contemporary and emerging algorithmic methods for disease classification and organ/tissue segmentation, focusing on AI and deep learning architectures that have already become the de facto approach. The clinical benefits of in-silico modelling advances linked with evolving 3D reconstruction and visualization applications are further documented. In conclusion, integrative analytics approaches driven by the associated research branches highlighted in this study promise to revolutionize imaging informatics as known today across the healthcare continuum, for both radiology and digital pathology applications. The latter is projected to enable informed, more accurate diagnosis, timely prognosis, and effective treatment planning, underpinning precision medicine.
Collapse
|
132
|
Aghababaie Z, Jamart K, Chan CHA, Amirapu S, Cheng LK, Paskaranandavadivel N, Avci R, Angeli TR. A V-Net Based Deep Learning Model for Segmentation and Classification of Histological Images of Gastric Ablation. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY 2020; 2020:1436-1439. [PMID: 33018260 DOI: 10.1109/embc44109.2020.9176220] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Gastric motility disorders are associated with bioelectrical abnormalities in the stomach. Recently, gastric ablation has emerged as a potential therapy to correct gastric dysrhythmias. However, the tissue-level effects of gastric ablation have not yet been evaluated. In this study, radiofrequency ablation was performed in vivo in pigs (n=7) in temperature-controlled mode (55-80°C, 5-10 s per point). The tissue was excised from the ablation site and a routine H&E staining protocol was performed. In order to assess tissue damage, we developed an automated technique using a fully convolutional neural network to segment healthy tissue and ablated lesion sites within the muscle and mucosa layers of the stomach. The tissue segmentation achieved an overall Dice score of 96.18 ± 1.0% and a Jaccard score of 92.77 ± 1.9% after 5-fold cross-validation. The ablation lesion was detected with an overall Dice score of 94.16 ± 0.2%. This method can be used in combination with high-resolution electrical mapping to define the optimal ablation dose for gastric ablation. Clinical Relevance: This work presents an automated method to quantify the ablation lesion in the stomach, which can be applied to determine optimal energy doses for gastric ablation, enabling clinical translation of this promising emerging therapy.
Collapse
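The Dice and Jaccard scores reported in this entry are standard overlap metrics. For reference, a minimal NumPy implementation over binary masks:

```python
import numpy as np

def dice(a, b, eps=1e-7):
    """Dice score between two boolean masks: 2|A∩B| / (|A|+|B|)."""
    inter = np.logical_and(a, b).sum()
    return 2 * inter / (a.sum() + b.sum() + eps)

def jaccard(a, b, eps=1e-7):
    """Jaccard (IoU) score between two boolean masks: |A∩B| / |A∪B|."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / (union + eps)
```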
|
133
|
Li R, Chen H, Gong G, Wang L. Bladder Wall Segmentation in MRI Images via Deep Learning and Anatomical Constraints. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY 2020; 2020:1629-1632. [PMID: 33018307 DOI: 10.1109/embc44109.2020.9176112] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Segmenting the bladder wall from MRI images is of great significance for the early detection and auxiliary diagnosis of bladder tumors. However, automatic bladder wall segmentation is challenging due to weak boundaries and the diverse shapes of bladders. Level-set-based methods have been applied to this task by utilizing the shape prior of bladders, but they require complex manual adjustment of multiple parameters and the selection of suitable hand-crafted features. In this paper, we propose an automatic method for the task based on deep learning and anatomical constraints. First, an autoencoder is used to model anatomical and semantic information of bladder walls by extracting their low-dimensional feature representations from both MRI images and label images. These priors are then incorporated as constraints into a modified residual network so as to generate more plausible segmentation results. Experiments on 1092 MRI images show that the proposed method generates more accurate and reliable results compared with related works, with a Dice similarity coefficient (DSC) of 85.48%.
Collapse
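A minimal sketch of the first step this abstract describes (a convolutional autoencoder whose low-dimensional code summarizes bladder-wall anatomy from label maps), assuming Keras; the layer sizes are illustrative, not the paper's.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Autoencoder over 128x128 binary bladder-wall label maps; the bottleneck
# `code` is the kind of anatomical prior injected into the segmenter.
inp = layers.Input(shape=(128, 128, 1))
x = layers.Conv2D(16, 3, strides=2, padding="same", activation="relu")(inp)
x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(x)
code = layers.Conv2D(8, 3, strides=2, padding="same", activation="relu")(x)
x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(code)
x = layers.Conv2DTranspose(16, 3, strides=2, padding="same", activation="relu")(x)
out = layers.Conv2DTranspose(1, 3, strides=2, padding="same", activation="sigmoid")(x)
autoencoder = tf.keras.Model(inp, out)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
```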
|
134
|
Hou L, Gupta R, Van Arnam JS, Zhang Y, Sivalenka K, Samaras D, Kurc TM, Saltz JH. Dataset of segmented nuclei in hematoxylin and eosin stained histopathology images of ten cancer types. Sci Data 2020; 7:185. [PMID: 32561748 PMCID: PMC7305328 DOI: 10.1038/s41597-020-0528-1] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/15/2020] [Accepted: 05/14/2020] [Indexed: 12/03/2022] Open
Abstract
The distribution and appearance of nuclei are essential markers for the diagnosis and study of cancer. Despite the importance of nuclear morphology, there is a lack of large-scale, accurate, publicly accessible nucleus segmentation data. To address this, we developed an analysis pipeline that segments nuclei in whole slide tissue images from multiple cancer types with a quality control process. We have generated nucleus segmentation results for 5,060 whole slide tissue images from 10 cancer types in The Cancer Genome Atlas. One key component of our work is a multi-level quality control process (WSI-level and image patch-level) to evaluate the quality of our segmentation results. The image patch-level quality control used manual segmentation ground truth data from 1,356 sampled image patches. The datasets we publish in this work consist of roughly 5 billion quality-controlled nuclei from more than 5,060 TCGA WSIs from 10 different TCGA cancer types, plus 1,356 manually segmented TCGA image patches from the same 10 cancer types and 4 additional cancer types.
Collapse
Affiliation(s)
- Le Hou
- Computer Science Department, 203C New Computer Science Building, Stony Brook University, Stony Brook, NY, 11794, USA
| | - Rajarsi Gupta
- Biomedical Informatics Department, HSC L3-045, Stony Brook Medicine, Stony Brook University, Stony Brook, NY, 11794, USA
| | - John S Van Arnam
- Biomedical Informatics Department, HSC L3-045, Stony Brook Medicine, Stony Brook University, Stony Brook, NY, 11794, USA
| | - Yuwei Zhang
- Biomedical Informatics Department, HSC L3-045, Stony Brook Medicine, Stony Brook University, Stony Brook, NY, 11794, USA
| | - Kaustubh Sivalenka
- Computer Science Department, 203C New Computer Science Building, Stony Brook University, Stony Brook, NY, 11794, USA
| | - Dimitris Samaras
- Computer Science Department, 203C New Computer Science Building, Stony Brook University, Stony Brook, NY, 11794, USA
| | - Tahsin M Kurc
- Biomedical Informatics Department, HSC L3-045, Stony Brook Medicine, Stony Brook University, Stony Brook, NY, 11794, USA
| | - Joel H Saltz
- Biomedical Informatics Department, HSC L3-045, Stony Brook Medicine, Stony Brook University, Stony Brook, NY, 11794, USA.
| |
Collapse
|
135
|
Yan Z, Yang X, Cheng KT. Enabling a Single Deep Learning Model for Accurate Gland Instance Segmentation: A Shape-Aware Adversarial Learning Framework. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:2176-2189. [PMID: 31944936 DOI: 10.1109/tmi.2020.2966594] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Segmenting gland instances in histology images is highly challenging, as it requires not only detecting glands against a complex background but also separating each individual gland instance with accurate boundary detection. However, due to the boundary uncertainty problem in manual annotations, pixel-to-pixel matching based loss functions are too restrictive for simultaneous gland detection and boundary detection. State-of-the-art approaches adopted multi-model schemes, resulting in unnecessarily high model complexity and difficulties in the training process. In this paper, we propose to use one single deep learning model for accurate gland instance segmentation. To address the boundary uncertainty problem, instead of pixel-to-pixel matching, we propose a segment-level shape similarity measure that calculates the curve similarity between each annotated boundary segment and the corresponding detected boundary segment within a fixed searching range. As the segment-level measure allows location variations within a fixed range for shape similarity calculation, it has better tolerance to boundary uncertainty and is more effective for boundary detection. Furthermore, by adjusting the radius of the searching range, the segment-level shape similarity measure is able to deal with different levels of boundary uncertainty. In our framework, images of different scales are down-sampled and integrated to provide both global and local contextual information for training, which is helpful in segmenting gland instances of different sizes. To reduce the variations of multi-scale training images, drawing on adversarial domain adaptation, we propose a pseudo domain adaptation framework for feature alignment. By constructing loss functions based on the segment-level shape similarity measure, combined with the adversarial loss, the proposed shape-aware adversarial learning framework enables a single deep learning model to perform gland instance segmentation. Experimental results on the 2015 MICCAI Gland Challenge dataset demonstrate that the proposed framework achieves state-of-the-art performance with one single deep learning model. As the boundary uncertainty problem exists widely in medical image segmentation, the approach is broadly applicable to other applications.
Collapse
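The searching-range idea can be approximated with a distance transform: count a predicted boundary pixel as correct if it lies within some radius of the annotated boundary. The paper's measure compares curve segments rather than pixels, so the sketch below (SciPy, with an assumed radius) is only a crude analogue.

```python
import numpy as np
from scipy import ndimage

def boundary_agreement(pred_b, gt_b, radius=5):
    """Fraction of predicted boundary pixels (boolean map pred_b) lying
    within `radius` of a ground-truth boundary pixel (boolean map gt_b)."""
    dist_to_gt = ndimage.distance_transform_edt(~gt_b)  # 0 on gt boundary
    hits = dist_to_gt[pred_b] <= radius
    return hits.mean() if hits.size else 1.0
```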
|
136
|
Koyuncu CF, Gunesli GN, Cetin-Atalay R, Gunduz-Demir C. DeepDistance: A multi-task deep regression model for cell detection in inverted microscopy images. Med Image Anal 2020; 63:101720. [PMID: 32438298 DOI: 10.1016/j.media.2020.101720] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2018] [Revised: 02/28/2020] [Accepted: 05/04/2020] [Indexed: 11/25/2022]
Abstract
This paper presents a new deep regression model, which we call DeepDistance, for cell detection in images acquired with inverted microscopy. This model considers cell detection as a task of finding the most probable locations that suggest cell centers in an image. It represents this main task with a regression task of learning an inner distance metric. However, different from previously reported regression-based methods, the DeepDistance model approaches its learning as a multi-task regression problem in which multiple tasks are learned using shared feature representations. To this end, it defines a secondary metric, the normalized outer distance, to represent a different aspect of the problem and defines its learning as complementary to the main cell detection task. In order to learn these two complementary tasks more effectively, the DeepDistance model designs a fully convolutional network (FCN) with a shared encoder path and trains this FCN end-to-end to learn the tasks concurrently. For further performance improvement on the main task, this paper also presents an extended version of the DeepDistance model that includes an auxiliary classification task and learns it in parallel to the two regression tasks, also sharing feature representations with them. DeepDistance uses the inner distances estimated by these FCNs in a detection algorithm to locate individual cells in a given image. In addition to this detection algorithm, the paper also suggests a cell segmentation algorithm that employs the estimated maps to find cell boundaries. Our experiments on three different human cell lines reveal that the proposed multi-task learning models, the DeepDistance model and its extended version, successfully identify the locations of cells as well as delineate their boundaries, even for the cell line that was not used in training, and improve on the results of their counterparts.
Collapse
Affiliation(s)
| | - Gozde Nur Gunesli
- Department of Computer Engineering, Bilkent University, Ankara TR-06800, Turkey.
| | - Rengul Cetin-Atalay
- CanSyL, Graduate School of Informatics, Middle East Technical University, Ankara TR-06800, Turkey.
| | - Cigdem Gunduz-Demir
- Department of Computer Engineering, Bilkent University, Ankara TR-06800, Turkey; Neuroscience Graduate Program, Bilkent University, Ankara TR-06800, Turkey.
| |
Collapse
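A toy construction of distance-map regression targets in the spirit of DeepDistance, using SciPy distance transforms; the paper's exact inner/outer distance definitions differ (e.g., distances are defined relative to cell centers), so treat this as illustrative only.

```python
import numpy as np
from scipy import ndimage

def distance_targets(cell_mask):
    """From a boolean cell mask, build an inner distance map (distance to
    the nearest background pixel, inside cells) and a normalized outer
    distance map (distance to the nearest cell, outside cells)."""
    inner = ndimage.distance_transform_edt(cell_mask)
    outer = ndimage.distance_transform_edt(~cell_mask)
    outer = outer / outer.max() if outer.max() > 0 else outer
    return inner, outer
```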
|
137
|
Microstructure Instance Segmentation from Aluminum Alloy Metallographic Image Using Different Loss Functions. Symmetry (Basel) 2020. [DOI: 10.3390/sym12040639] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
Abstract
Automatic segmentation of metallographic images is very important for the implementation of an automatic metallographic analysis system. In this paper, a novel instance segmentation framework for metallographic images was implemented, which can assign each pixel to a physical instance of a microstructure. In this framework, we used Mask R-CNN as the basic network to learn and recognize the latent features of an aluminum alloy microstructure. Meanwhile, we implemented five different loss functions based on this framework and compared the influence of these loss functions on metallographic image segmentation performance. We carried out several experiments to verify the effectiveness of the proposed framework. In these experiments, we compared and analyzed six different evaluation metrics and provide constructive suggestions for the performance evaluation of metallographic image segmentation methods. Extensive experimental results show that the proposed method achieves instance segmentation of aluminum alloy metallographic images and that the segmentation results are satisfactory.
Collapse
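The abstract does not name the five loss functions compared. As one representative candidate often benchmarked against cross entropy in instance segmentation, here is a binary focal loss sketch in TensorFlow; its inclusion among the paper's five losses is an assumption.

```python
import tensorflow as tf

def focal_loss(y_true, y_pred, gamma=2.0, alpha=0.25, eps=1e-7):
    """Binary focal loss over probabilities: down-weights easy examples
    via (1 - p_t)^gamma and balances classes via alpha."""
    y_pred = tf.clip_by_value(y_pred, eps, 1 - eps)
    pt = tf.where(tf.equal(y_true, 1.0), y_pred, 1 - y_pred)
    w = tf.where(tf.equal(y_true, 1.0), alpha, 1 - alpha)
    return -tf.reduce_mean(w * tf.pow(1 - pt, gamma) * tf.math.log(pt))
```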
|
138
|
Ding H, Pan Z, Cen Q, Li Y, Chen S. Multi-scale fully convolutional network for gland segmentation using three-class classification. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2019.10.097] [Citation(s) in RCA: 29] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
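This citation carries no abstract here, but the three-class formulation named in the title (background, gland interior, gland boundary) is commonly constructed by eroding the gland mask. A small SciPy sketch, with the border width as an assumption:

```python
import numpy as np
from scipy import ndimage

def three_class_labels(gland_mask, border=3):
    """Turn a boolean gland mask into {0: background, 1: interior,
    2: boundary ring} labels (border width is an assumption)."""
    eroded = ndimage.binary_erosion(gland_mask, iterations=border)
    labels = np.zeros(gland_mask.shape, dtype=np.uint8)
    labels[eroded] = 1
    labels[gland_mask & ~eroded] = 2
    return labels
```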
|
139
|
Yang Q, Xu Z, Liao C, Cai J, Huang Y, Chen H, Tao X, Huang Z, Chen J, Dong J, Zhu X. Epithelium segmentation and automated Gleason grading of prostate cancer via deep learning in label-free multiphoton microscopic images. JOURNAL OF BIOPHOTONICS 2020; 13:e201900203. [PMID: 31710780 DOI: 10.1002/jbio.201900203] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/22/2019] [Revised: 11/10/2019] [Accepted: 11/10/2019] [Indexed: 06/10/2023]
Abstract
In current clinical practice, the Gleason grading system is one of the most powerful prognostic predictors for prostate cancer (PCa). The grading system is based on the architectural pattern of cancerous epithelium in histological images. However, the standard procedure of histological examination often involves complicated tissue fixation and staining, which are time-consuming and may delay diagnosis and surgery. In this study, label-free multiphoton microscopy (MPM) was used to acquire subcellular-resolution images of unstained prostate tissues. Then, a deep learning architecture (U-Net) was introduced for epithelium segmentation of prostate tissues in MPM images. The obtained segmentation results were then merged with the original MPM images to train a classification network (AlexNet) for automated Gleason grading. The developed method achieved an overall pixel accuracy of 92.3% with a mean F1 score of 0.839 for epithelium segmentation. By merging the segmentation results with the MPM images, the accuracy of Gleason grading was improved from 72.42% to 81.13% on the hold-out test set. Our results suggest that MPM in combination with deep learning holds the potential to be used as a fast and powerful clinical tool for PCa diagnosis.
Collapse
Affiliation(s)
- Qinqin Yang
- Institute of Laser and Optoelectronics Technology, Fujian Provincial Key Laboratory for Photonics Technology, Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Normal University, Fuzhou, China
- Department of Electronic Science, Xiamen University, Xiamen, China
| | - Zhexin Xu
- Institute of Laser and Optoelectronics Technology, Fujian Provincial Key Laboratory for Photonics Technology, Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Normal University, Fuzhou, China
| | - Chenxi Liao
- Institute of Laser and Optoelectronics Technology, Fujian Provincial Key Laboratory for Photonics Technology, Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Normal University, Fuzhou, China
| | - Jianyong Cai
- Institute of Laser and Optoelectronics Technology, Fujian Provincial Key Laboratory for Photonics Technology, Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Normal University, Fuzhou, China
| | - Ying Huang
- Institute of Laser and Optoelectronics Technology, Fujian Provincial Key Laboratory for Photonics Technology, Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Normal University, Fuzhou, China
| | - Hong Chen
- Department of Pathology, The First Affiliated Hospital of Fujian Medical University, Fuzhou, China
| | - Xuan Tao
- Department of Pathology, The First Affiliated Hospital of Fujian Medical University, Fuzhou, China
| | - Zheng Huang
- Institute of Laser and Optoelectronics Technology, Fujian Provincial Key Laboratory for Photonics Technology, Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Normal University, Fuzhou, China
| | - Jianxin Chen
- Institute of Laser and Optoelectronics Technology, Fujian Provincial Key Laboratory for Photonics Technology, Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Normal University, Fuzhou, China
| | - Jiyang Dong
- Department of Electronic Science, Xiamen University, Xiamen, China
| | - Xiaoqin Zhu
- Institute of Laser and Optoelectronics Technology, Fujian Provincial Key Laboratory for Photonics Technology, Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Normal University, Fuzhou, China
| |
Collapse
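One plausible reading of the "merge" step in this abstract is stacking the predicted epithelium mask onto the MPM image as an extra input channel before grading; the paper's actual fusion scheme may differ.

```python
import numpy as np

def merge_for_grading(mpm_image, epithelium_mask):
    """Concatenate a (H, W) segmentation mask onto a (H, W, C) MPM image as
    an additional channel (one possible fusion, not necessarily the paper's)."""
    extra = epithelium_mask[..., None].astype(mpm_image.dtype)
    return np.concatenate([mpm_image, extra], axis=-1)
```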
|
140
|
Li J, Luo Y, Shi L, Zhang X, Li M, Zhang B, Wang D. Automatic fetal brain extraction from 2D in utero fetal MRI slices using deep neural network. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2019.10.032] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/18/2023]
|
141
|
Yin S, Peng Q, Li H, Zhang Z, You X, Fischer K, Furth SL, Tasian GE, Fan Y. Automatic kidney segmentation in ultrasound images using subsequent boundary distance regression and pixelwise classification networks. Med Image Anal 2020; 60:101602. [PMID: 31760193 PMCID: PMC6980346 DOI: 10.1016/j.media.2019.101602] [Citation(s) in RCA: 44] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2019] [Revised: 07/22/2019] [Accepted: 11/07/2019] [Indexed: 12/28/2022]
Abstract
It remains challenging to automatically segment kidneys in clinical ultrasound (US) images due to the kidneys' varied shapes and image intensity distributions, although semi-automatic methods have achieved promising performance. In this study, we propose subsequent boundary distance regression and pixel classification networks to segment the kidneys automatically. In particular, we first use deep neural networks pre-trained for classification of natural images to extract high-level image features from US images. These features are used as input to learn kidney boundary distance maps via a boundary distance regression network, and the predicted boundary distance maps are classified as kidney or non-kidney pixels by a pixelwise classification network in an end-to-end learning fashion. We also adopted a data-augmentation method based on kidney shape registration to generate enriched training data from a small number of US images with manually segmented kidney labels. Experimental results demonstrate that our method can automatically segment the kidney with promising performance, significantly better than deep learning-based pixel classification networks.
Collapse
Affiliation(s)
- Shi Yin
- School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan, China; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104, United States
| | - Qinmu Peng
- School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan, China; Shenzhen Huazhong University of Science and Technology Research Institute, China.
| | - Hongming Li
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104, United States
| | - Zhengqiang Zhang
- School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan, China
| | - Xinge You
- School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan, China; Shenzhen Huazhong University of Science and Technology Research Institute, China
| | - Katherine Fischer
- Department of Surgery, Division of Pediatric Urology, The Children's Hospital of Philadelphia, Philadelphia, PA 19104, United States; Center for Pediatric Clinical Effectiveness, The Children's Hospital of Philadelphia, Philadelphia, PA 19104, United States
| | - Susan L Furth
- Department of Pediatrics, Division of Pediatric Nephrology, The Children's Hospital of Philadelphia, Philadelphia, PA 19104, United States
| | - Gregory E Tasian
- Department of Surgery, Division of Pediatric Urology, The Children's Hospital of Philadelphia, Philadelphia, PA 19104, United States; Center for Pediatric Clinical Effectiveness, The Children's Hospital of Philadelphia, Philadelphia, PA 19104, United States; Department of Biostatistics, Epidemiology, and Informatics, The University of Pennsylvania, Philadelphia, PA, 19104, United States
| | - Yong Fan
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104, United States.
| |
Collapse
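The first step this abstract describes (reusing a network pre-trained on natural images as a feature extractor) is routine in Keras; the backbone and layer chosen below are assumptions, not the paper's choices.

```python
import tensorflow as tf

# Truncate an ImageNet-pretrained VGG16 at a mid-level convolutional layer
# and use it as a fixed feature extractor for ultrasound images.
backbone = tf.keras.applications.VGG16(include_top=False,
                                       weights="imagenet",
                                       input_shape=(224, 224, 3))
features = tf.keras.Model(backbone.input,
                          backbone.get_layer("block4_conv3").output)
```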
|
142
|
Xie L, Qi J, Pan L, Wali S. Integrating deep convolutional neural networks with marker-controlled watershed for overlapping nuclei segmentation in histopathology images. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2019.09.083] [Citation(s) in RCA: 27] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/12/2023]
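This citation carries no abstract here, but the technique named in the title is well established: seed a watershed with confident CNN predictions to split touching nuclei. A minimal scikit-image sketch, with the thresholds as assumptions:

```python
import numpy as np
from scipy import ndimage
from skimage.segmentation import watershed

def split_touching_nuclei(prob_map, thresh=0.5, marker_thresh=0.8):
    """Marker-controlled watershed on a CNN nuclei probability map:
    high-confidence interiors seed the markers, and the inverted
    probability map acts as the relief surface."""
    mask = prob_map > thresh
    markers, _ = ndimage.label(prob_map > marker_thresh)
    return watershed(-prob_map, markers, mask=mask)
```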
|
143
|
A new convolutional neural network model for peripapillary atrophy area segmentation from retinal fundus images. Appl Soft Comput 2020. [DOI: 10.1016/j.asoc.2019.105890] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
|
144
|
Belciug S. Pathologist at work. Artif Intell Cancer 2020. [DOI: 10.1016/b978-0-12-820201-2.00003-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/19/2022] Open
|
145
|
Wang P, Bai X. Thermal Infrared Pedestrian Segmentation Based on Conditional GAN. IEEE TRANSACTIONS ON IMAGE PROCESSING 2019; 28:6007-6021. [PMID: 31265395 DOI: 10.1109/tip.2019.2924171] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
A novel thermal infrared pedestrian segmentation algorithm based on a conditional generative adversarial network (IPS-cGAN) is proposed for intelligent vehicular applications. The convolutional backbone of the generator is based on an improved U-Net with residual blocks, which makes good use of regional semantic information. Moreover, the cross-entropy loss for segmentation is introduced as the condition for the generator. SandwichNet, a novel convolutional network with symmetrical input, is proposed as the discriminator between real and fake segmented images. Based on the c-GAN framework, good segmentation performance is achieved for thermal infrared pedestrians. Compared to several supervised and unsupervised segmentation algorithms, the proposed algorithm achieves higher accuracy with better robustness, especially for complex scenes.
Collapse
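A compact sketch of the conditioned generator objective this abstract describes (an adversarial term plus a segmentation cross-entropy condition), in TensorFlow; the weighting and exact formulation are assumptions.

```python
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def generator_loss(disc_fake_logits, pred_logits, target_mask, lam=10.0):
    """Generator objective: fool the discriminator (adversarial term) while
    matching the segmentation target via cross entropy (condition term);
    the weight `lam` is an assumption, not the paper's value."""
    adv = bce(tf.ones_like(disc_fake_logits), disc_fake_logits)
    seg = bce(target_mask, pred_logits)
    return adv + lam * seg
```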
|
146
|
Niemann A, Weigand S, Hoffmann T, Skalej M, Tulamo R, Preim B, Saalfeld S. Interactive exploration of a 3D intracranial aneurysm wall model extracted from histologic slices. Int J Comput Assist Radiol Surg 2019; 15:99-107. [PMID: 31705419 DOI: 10.1007/s11548-019-02083-0] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2019] [Accepted: 10/18/2019] [Indexed: 01/23/2023]
Abstract
PURPOSE Currently, no detailed in vivo imaging of the intracranial vessel wall exists. Ex vivo histologic images can provide information about the intracranial aneurysm (IA) wall composition that is useful for understanding IA development and rupture risk. For a 3D analysis, the 2D histologic slices must be incorporated into a 3D model, which can be used for a spatial evaluation of the IA's morphology, including analysis of the IA neck. METHODS In 2D images of histologic slices, different wall layers were manually segmented and a 3D model was generated. The nuclei were automatically detected and classified as round or elongated, and a neural network-based wall type classification was performed. The information was combined in a software prototype visualization providing a unique view of the wall characteristics of an IA and allowing interactive exploration. Furthermore, the heterogeneity of the wall (as the variance of wall thickness) was evaluated. RESULTS A 3D model correctly representing the histologic data was reconstructed. The visualization integrating wall information was perceived as useful by a medical expert, and the classification produced a plausible result. CONCLUSION The use of histologic images allows the creation of a 3D model with new information about the aneurysm wall. The model provides information about the wall thickness and its heterogeneity and, when built from cadaveric samples, includes information about the transition between IA neck and sac.
Collapse
Affiliation(s)
- Annika Niemann
- Faculty of Computer Science, Otto-von-Guericke University Magdeburg, Universitätsplatz 2, 39106, Magdeburg, Germany.
| | - Simon Weigand
- Ludwig-Maximilians-Universität Klinikum, Munich, Germany
| | | | | | - Riikka Tulamo
- Helsinki University Hospital, University of Helsinki, Helsinki, Finland
| | - Bernhard Preim
- Faculty of Computer Science, Otto-von-Guericke University Magdeburg, Universitätsplatz 2, 39106, Magdeburg, Germany
| | - Sylvia Saalfeld
- Faculty of Computer Science, Otto-von-Guericke University Magdeburg, Universitätsplatz 2, 39106, Magdeburg, Germany; Research Campus STIMULATE, Magdeburg, Germany
| |
Collapse
|
147
|
Mohagheghi S, Foruzan AH. Incorporating prior shape knowledge via data-driven loss model to improve 3D liver segmentation in deep CNNs. Int J Comput Assist Radiol Surg 2019; 15:249-257. [PMID: 31686380 DOI: 10.1007/s11548-019-02085-y] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2019] [Accepted: 10/24/2019] [Indexed: 11/28/2022]
Abstract
PURPOSE Convolutional neural networks (CNNs) have obtained enormous success in liver segmentation. However, several challenges remain, including low-contrast images and large variations in the shape and appearance of the liver. Incorporating prior knowledge into deep CNN models improves their performance and generalization. METHODS A convolutional denoising auto-encoder is utilized to learn global information about 3D liver shapes in a low-dimensional latent space. The deep data-driven knowledge is then used to define a loss function that is combined with the Dice loss in the main segmentation model. The resulting hybrid model is forced to learn the global shape information as prior knowledge while it tries to produce accurate results and increase the Dice score. RESULTS The proposed training strategy improved the performance of the 3D U-Net model and reached a Dice score of 97.62% on the Sliver07-I liver dataset, which is competitive with state-of-the-art automatic segmentation methods. The proposed algorithm enhanced the generalization and robustness of the hybrid model and outperformed the 3D U-Net model in the prediction of unseen images. CONCLUSIONS The results indicate that the incorporation of prior shape knowledge enhances liver segmentation in deep CNN models. The proposed method improves the generalization and robustness of the hybrid model due to the abstract features provided by the data-driven loss model.
Collapse
Affiliation(s)
- Saeed Mohagheghi
- Department of Biomedical Engineering, Engineering Faculty, Shahed University, Tehran, Iran
| | - Amir Hossein Foruzan
- Department of Biomedical Engineering, Engineering Faculty, Shahed University, Tehran, Iran.
| |
Collapse
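The hybrid objective this entry describes can be sketched as a Dice term plus a distance between autoencoder codes of the prediction and the ground truth; `encoder` below stands for the pretrained denoising auto-encoder's encoder, and the weighting is an assumption.

```python
import tensorflow as tf

def shape_prior_loss(encoder, y_true, y_pred, alpha=0.5):
    """Dice loss plus squared distance between latent codes of prediction
    and ground truth (encoder = pretrained shape autoencoder's encoder;
    alpha is an assumed weighting, not the paper's)."""
    inter = tf.reduce_sum(y_true * y_pred)
    dice = 1 - (2 * inter + 1e-7) / (tf.reduce_sum(y_true) +
                                     tf.reduce_sum(y_pred) + 1e-7)
    latent = tf.reduce_mean(tf.square(encoder(y_true) - encoder(y_pred)))
    return dice + alpha * latent
```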
|
148
|
FABnet: feature attention-based network for simultaneous segmentation of microvessels and nerves in routine histology images of oral cancer. Neural Comput Appl 2019. [DOI: 10.1007/s00521-019-04516-y] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/29/2022]
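This citation carries no abstract here, and FABnet's attention design is not described. As a generic stand-in for a feature attention gate, a squeeze-and-excitation-style channel attention block in Keras:

```python
import tensorflow as tf
from tensorflow.keras import layers

def channel_attention(x, reduction=8):
    """Channel attention: pool globally, pass through a small bottleneck
    MLP, and rescale the feature map (generic; not FABnet's exact design)."""
    c = x.shape[-1]
    w = layers.GlobalAveragePooling2D()(x)
    w = layers.Dense(c // reduction, activation="relu")(w)
    w = layers.Dense(c, activation="sigmoid")(w)
    return layers.Multiply()([x, layers.Reshape((1, 1, c))(w)])
```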
|
149
|
Rathore S, Iftikhar MA, Chaddad A, Niazi T, Karasic T, Bilello M. Segmentation and Grade Prediction of Colon Cancer Digital Pathology Images Across Multiple Institutions. Cancers (Basel) 2019; 11:1700. [PMID: 31683818 PMCID: PMC6896042 DOI: 10.3390/cancers11111700] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2019] [Revised: 10/03/2019] [Accepted: 10/17/2019] [Indexed: 12/11/2022] Open
Abstract
Distinguishing benign from malignant disease is a primary challenge for colon histopathologists. Current clinical methods rely on qualitative visual analysis of features such as glandular architecture and size that exist on a continuum from benign to malignant. Consequently, discordance between histopathologists is common. To provide more reliable analysis of colon specimens, we propose an end-to-end computational pathology pipeline that encompasses gland segmentation, cancer detection, and further grading of the malignant samples into different cancer grades. We propose a multi-step gland segmentation method, which models tissue components as ellipsoids. For cancer detection/grading, we encode cellular morphology, spatial architectural patterns of glands, and texture by extracting multi-scale features: (i) gland-based, extracted from individual glands; (ii) local-patch-based, computed from randomly selected image patches; and (iii) image-based, extracted from whole images; and we employ a hierarchical ensemble-classification method. Using two datasets (Rawalpindi Medical College (RMC), n = 174, and gland segmentation (GlaS), n = 165) with three cancer grades, our method reliably delineated gland regions (RMC = 87.5%, GlaS = 88.4%), detected the presence of malignancy (RMC = 97.6%, GlaS = 98.3%), and predicted tumor grade (RMC = 98.6%, GlaS = 98.6%). Training the model on one dataset and testing it on the other showed strong concordance in cancer detection (Train RMC - Test GlaS = 94.5%, Train GlaS - Test RMC = 93.7%) and grading (Train RMC - Test GlaS = 95%, Train GlaS - Test RMC = 95%), suggesting that the model will be applicable across institutions. With further prospective validation, the techniques demonstrated here may provide a reproducible and easily accessible method to standardize the analysis of colon cancer specimens.
Collapse
Affiliation(s)
- Saima Rathore
- Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, PA 19104, USA.
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA.
| | - Muhammad Aksam Iftikhar
- Department of Computer Science, COMSATS University Islamabad, Lahore Campus, Lahore 54000, Pakistan.
| | - Ahmad Chaddad
- Division of Radiation Oncology, Department of Oncology, McGill University, Montreal, QC H3S 1Y9, Canada.
| | - Tamim Niazi
- Division of Radiation Oncology, Department of Oncology, McGill University, Montreal, QC H3S 1Y9, Canada.
| | - Thomas Karasic
- Department of Medicine, Division of Hematology/Oncology, University of Pennsylvania, Philadelphia, PA 19104, USA.
| | - Michel Bilello
- Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, PA 19104, USA.
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA.
| |
Collapse
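A small sketch of the ellipsoid-style gland description mentioned in this abstract, using scikit-image region properties as the shape features; the paper's actual feature set is broader.

```python
import numpy as np
from skimage.measure import label, regionprops

def gland_shape_features(gland_mask):
    """Per-gland elliptical descriptors (axis lengths, orientation,
    eccentricity) from a boolean segmentation mask."""
    feats = []
    for r in regionprops(label(gland_mask)):
        feats.append([r.major_axis_length, r.minor_axis_length,
                      r.orientation, r.eccentricity])
    return np.array(feats)
```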
|
150
|
Pontalba JT, Gwynne-Timothy T, David E, Jakate K, Androutsos D, Khademi A. Assessing the Impact of Color Normalization in Convolutional Neural Network-Based Nuclei Segmentation Frameworks. Front Bioeng Biotechnol 2019; 7:300. [PMID: 31737619 PMCID: PMC6838039 DOI: 10.3389/fbioe.2019.00300] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2019] [Accepted: 10/15/2019] [Indexed: 02/03/2023] Open
Abstract
Image analysis tools for cancer, such as automatic nuclei segmentation, are impacted by the inherent variation contained in pathology image data. Convolutional neural networks (CNNs) have demonstrated success in generalizing to variable data, illustrating great potential as a solution to the problem of data variability. In some CNN-based segmentation works for digital pathology, authors apply color normalization (CN) to reduce the color variability of data as a preprocessing step prior to prediction, while others do not. Both approaches achieve reasonable performance, and yet the reasoning for utilizing this step has not been justified. It is therefore important to evaluate the necessity and impact of CN for deep learning frameworks and its effect on downstream processes. In this paper, we evaluate the effect of popular CN methods on CNN-based nuclei segmentation frameworks.
Collapse
Affiliation(s)
| | | | - Ephraim David
- Image Analysis in Medicine Lab (IAMLAB), Ryerson University, Toronto, ON, Canada
| | | | - Dimitrios Androutsos
- Image Analysis in Medicine Lab (IAMLAB), Ryerson University, Toronto, ON, Canada
| | - April Khademi
- Image Analysis in Medicine Lab (IAMLAB), Ryerson University, Toronto, ON, Canada
| |
Collapse
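Among the popular CN methods this entry evaluates, Reinhard-style normalization (matching per-channel LAB statistics to a reference image) is a common baseline; a minimal scikit-image sketch, assuming float RGB inputs in [0, 1]:

```python
import numpy as np
from skimage import color

def reinhard_normalize(src_rgb, ref_rgb):
    """Match each LAB channel's mean/std of the source image to those of a
    reference image, then convert back to RGB."""
    src, ref = color.rgb2lab(src_rgb), color.rgb2lab(ref_rgb)
    for c in range(3):
        s, r = src[..., c], ref[..., c]
        src[..., c] = (s - s.mean()) / (s.std() + 1e-7) * r.std() + r.mean()
    return np.clip(color.lab2rgb(src), 0, 1)
```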
|