1. Mahbod A, Dorffner G, Ellinger I, Woitek R, Hatamikia S. Improving generalization capability of deep learning-based nuclei instance segmentation by non-deterministic train time and deterministic test time stain normalization. Comput Struct Biotechnol J 2024;23:669-678. PMID: 38292472; PMCID: PMC10825317; DOI: 10.1016/j.csbj.2023.12.042.
Abstract
With the advent of digital pathology and microscopic systems that can scan and save whole slide histological images automatically, there is a growing trend toward using computerized methods to analyze the acquired images. Among different histopathological image analysis tasks, nuclei instance segmentation plays a fundamental role in a wide range of clinical and research applications. While many semi- and fully-automatic computerized methods have been proposed for nuclei instance segmentation, deep learning (DL)-based approaches have been shown to deliver the best performance. However, the performance of such approaches usually degrades when they are tested on unseen datasets. In this work, we propose a novel method to improve the generalization capability of a DL-based automatic segmentation approach. Besides utilizing one of the state-of-the-art DL-based models as a baseline, our method incorporates non-deterministic train-time and deterministic test-time stain normalization, together with ensembling, to boost segmentation performance. We trained the model on a single training set and evaluated its segmentation performance on seven test datasets. Our results show that the proposed method provides up to 4.9%, 5.4%, and 5.9% better average performance in segmenting nuclei based on the Dice score, aggregated Jaccard index, and panoptic quality score, respectively, compared to the baseline segmentation model.
Affiliation(s)
- Amirreza Mahbod
- Research Center for Medical Image Analysis and Artificial Intelligence, Department of Medicine, Danube Private University, Krems an der Donau, Austria
- Georg Dorffner
- Institute of Artificial Intelligence, Medical University of Vienna, Vienna, Austria
- Isabella Ellinger
- Institute for Pathophysiology and Allergy Research, Medical University of Vienna, Vienna, Austria
- Ramona Woitek
- Research Center for Medical Image Analysis and Artificial Intelligence, Department of Medicine, Danube Private University, Krems an der Donau, Austria
- Sepideh Hatamikia
- Research Center for Medical Image Analysis and Artificial Intelligence, Department of Medicine, Danube Private University, Krems an der Donau, Austria
- Austrian Center for Medical Innovation and Technology, Wiener Neustadt, Austria
2. Pang C, Lu X, Liu X, Zhang R, Lyu L. IIAM: Intra and Inter Attention With Mutual Consistency Learning Network for Medical Image Segmentation. IEEE J Biomed Health Inform 2024;28:5971-5983. PMID: 38985557; DOI: 10.1109/jbhi.2024.3426074.
Abstract
Medical image segmentation provides a reliable basis for diagnostic analysis and disease treatment by capturing the global and local features of the target region. To learn global features, convolutional neural networks are either replaced with pure transformers or have transformer layers stacked at their deepest layers. Nevertheless, such models are deficient in exploring local-global cues at each scale and the interaction among consensual regions across multiple scales, hindering the learning of changes in the size, shape, and position of target objects. To cope with these defects, we propose a novel Intra and Inter Attention with Mutual Consistency Learning Network (IIAM). Concretely, we design an intra attention module to aggregate CNN-based local features and transformer-based global information at each scale. In addition, to capture the interaction among consensual regions across multiple scales, we devise an inter attention module to explore the cross-scale dependency of the object and its surroundings. Moreover, to reduce the impact of blurred regions in medical images on the final segmentation results, we introduce multiple decoders to estimate model uncertainty, adopting a mutual consistency learning strategy to minimize the output discrepancy during end-to-end training and weighting the outputs of the three decoders as the final segmentation result. Extensive experiments on three benchmark datasets verify the efficacy of our method and demonstrate the superior performance of our model compared to state-of-the-art techniques.
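The mutual-consistency idea can be sketched minimally, assuming the decoders emit probability maps of the same shape; the mean-squared discrepancy and the fusion weights below are illustrative choices, not the paper's exact formulation:

```python
import numpy as np

def mutual_consistency_loss(probs):
    # probs: list of per-decoder probability maps, each of shape (H, W).
    # Penalize the average pairwise discrepancy between decoder outputs.
    loss, n = 0.0, len(probs)
    for i in range(n):
        for j in range(i + 1, n):
            loss += np.mean((probs[i] - probs[j]) ** 2)
    return loss / (n * (n - 1) / 2)

def fuse_decoders(probs, weights):
    # Weighted average of the decoder outputs as the final segmentation.
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * p for wi, p in zip(w, probs))
```

Identical decoder outputs incur zero consistency loss, so minimizing it during training pushes the decoders toward agreement.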
3. Li J, Dong P, Wang X, Zhang J, Zhao M, Shen H, Cai L, He J, Han M, Miao J, Liu H, Yang W, Han X, Liu Y. Artificial intelligence enhances whole-slide interpretation of PD-L1 CPS in triple-negative breast cancer: A multi-institutional ring study. Histopathology 2024;85:451-467. PMID: 38747491; DOI: 10.1111/his.15205.
Abstract
BACKGROUND AND AIMS Evaluation of the programmed cell death ligand-1 (PD-L1) combined positive score (CPS) is vital to predict the efficacy of immunotherapy in triple-negative breast cancer (TNBC), but pathologists show substantial variability in the consistency and accuracy of its interpretation. It is therefore important to establish an objective and effective method that is highly repeatable. METHODS We proposed a model in a deep learning-based framework that incorporates cell analysis and tissue region analysis at the patch level, followed by whole-slide-level fusion of the patch results. Three rounds of ring studies (RSs) were conducted. Twenty-one pathologists of different experience levels from four institutions evaluated PD-L1 CPS in TNBC specimens as continuous scores, by visual assessment and by our artificial intelligence (AI)-assisted method. RESULTS Under visual assessment, the PD-L1 (Dako 22C3) CPS interpretations by pathologists of different levels showed significant differences and weak consistency. Using AI-assisted interpretation, there were no significant differences between pathologists (P = 0.43), and the intraclass correlation coefficient (ICC) increased from 0.618 [95% confidence interval (CI) = 0.524-0.719] to 0.931 (95% CI = 0.902-0.955). The accuracy of the interpretation results was further improved to 0.919 (95% CI = 0.886-0.947). Acceptance of the AI results was highest among junior pathologists, and 80% of the AI results were accepted overall. CONCLUSION With the help of the AI-assisted diagnostic method, pathologists of different levels achieved excellent consistency and repeatability in the interpretation of PD-L1 (Dako 22C3) CPS. Our AI-assisted diagnostic approach was shown to strengthen consistency and repeatability in clinical practice.
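For context, the combined positive score is defined as the number of PD-L1-staining cells (tumor cells, lymphocytes, and macrophages) divided by the total number of viable tumor cells, multiplied by 100 and capped at 100. A minimal sketch (the function name is hypothetical):

```python
def combined_positive_score(pdl1_pos_tumor, pdl1_pos_immune, viable_tumor_cells):
    # CPS = (PD-L1-positive tumor cells + PD-L1-positive lymphocytes and
    # macrophages) / total viable tumor cells * 100, capped at 100.
    if viable_tumor_cells <= 0:
        raise ValueError("viable tumor cell count must be positive")
    cps = 100.0 * (pdl1_pos_tumor + pdl1_pos_immune) / viable_tumor_cells
    return min(cps, 100.0)
```

An AI pipeline like the one described would supply the three cell counts from its cell- and region-level analyses.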
Affiliation(s)
- Jinze Li
- Department of Pathology, The Fourth Hospital of Hebei Medical University, Shijiazhuang, Hebei, China
- Pei Dong
- AI Lab, Tencent, Shenzhen, Guangdong, China
- Xinran Wang
- Department of Pathology, The Fourth Hospital of Hebei Medical University, Shijiazhuang, Hebei, China
- Jun Zhang
- AI Lab, Tencent, Shenzhen, Guangdong, China
- Meng Zhao
- Department of Pathology, The Fourth Hospital of Hebei Medical University, Shijiazhuang, Hebei, China
- Lijing Cai
- Department of Pathology, The Fourth Hospital of Hebei Medical University, Shijiazhuang, Hebei, China
- Jiankun He
- Department of Pathology, The Fourth Hospital of Hebei Medical University, Shijiazhuang, Hebei, China
- Mengxue Han
- Department of Pathology, The Fourth Hospital of Hebei Medical University, Shijiazhuang, Hebei, China
- Jiaxian Miao
- Department of Pathology, The Fourth Hospital of Hebei Medical University, Shijiazhuang, Hebei, China
- Hongbo Liu
- Department of Pathology, The Fourth Hospital of Hebei Medical University, Shijiazhuang, Hebei, China
- Wei Yang
- AI Lab, Tencent, Shenzhen, Guangdong, China
- Xiao Han
- AI Lab, Tencent, Shenzhen, Guangdong, China
- Yueping Liu
- Department of Pathology, The Fourth Hospital of Hebei Medical University, Shijiazhuang, Hebei, China
4. Chen X, Liu Q, Deng HH, Kuang T, Lin HHY, Xiao D, Gateno J, Xia JJ, Yap PT. Improving Image Segmentation with Contextual and Structural Similarity. Pattern Recognit 2024;152:110489. PMID: 38645435; PMCID: PMC11027435; DOI: 10.1016/j.patcog.2024.110489.
Abstract
Deep learning models for medical image segmentation are usually trained with voxel-wise losses, e.g., cross-entropy loss, focusing on unary supervision without considering inter-voxel relationships. This oversight potentially leads to semantically inconsistent predictions. Here, we propose a contextual similarity loss (CSL) and a structural similarity loss (SSL) to explicitly and efficiently incorporate inter-voxel relationships for improved performance. The CSL promotes consistency in predicted object categories for each image sub-region compared to ground truth. The SSL enforces compatibility between the predictions of voxel pairs by computing pair-wise distances between them, ensuring that voxels of the same class are close together whereas those from different classes are separated by a wide margin in the distribution space. The effectiveness of the CSL and SSL is evaluated using a clinical cone-beam computed tomography (CBCT) dataset of patients with various craniomaxillofacial (CMF) deformities and a public pancreas dataset. Experimental results show that the CSL and SSL outperform state-of-the-art regional loss functions in preserving segmentation semantics.
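The pairwise objective described above can be illustrated with a contrastive-style loss over sampled voxel embeddings; this is a simplified stand-in for the paper's SSL, and the names and margin value are illustrative:

```python
import numpy as np

def structural_similarity_loss(embeddings, labels, margin=2.0):
    # embeddings: (N, D) feature vectors for sampled voxels; labels: (N,).
    # Same-class pairs are pulled together; different-class pairs are
    # pushed at least `margin` apart in the embedding space.
    n = len(labels)
    total, count = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(embeddings[i] - embeddings[j])
            if labels[i] == labels[j]:
                total += d ** 2                     # attract same class
            else:
                total += max(0.0, margin - d) ** 2  # repel up to the margin
            count += 1
    return total / count
```

The loss is zero exactly when same-class voxels coincide and different-class voxels sit beyond the margin, matching the separation behaviour the abstract describes.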
Affiliation(s)
- Xiaoyang Chen
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, 27599, NC, USA
- Qin Liu
- Department of Computer Science, University of North Carolina, Chapel Hill, 27599, NC, USA
- Hannah H. Deng
- Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, 77030, TX, USA
- Tianshu Kuang
- Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, 77030, TX, USA
- Henry Hung-Ying Lin
- Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, 77030, TX, USA
- Deqiang Xiao
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, 27599, NC, USA
- Jaime Gateno
- Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, 77030, TX, USA
- Department of Surgery (Oral and Maxillofacial Surgery), Weill Medical College, Cornell University, New York, 10065, NY, USA
- James J. Xia
- Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, 77030, TX, USA
- Department of Surgery (Oral and Maxillofacial Surgery), Weill Medical College, Cornell University, New York, 10065, NY, USA
- Pew-Thian Yap
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, 27599, NC, USA
5. Lin Y, Wang Z, Zhang D, Cheng KT, Chen H. BoNuS: Boundary Mining for Nuclei Segmentation With Partial Point Labels. IEEE Trans Med Imaging 2024;43:2137-2147. PMID: 38231818; DOI: 10.1109/tmi.2024.3355068.
Abstract
Nuclei segmentation is a fundamental prerequisite in the digital pathology workflow. The development of automated methods for nuclei segmentation enables quantitative analysis of the widespread presence of, and large variance in, nuclei morphometry in histopathology images. However, manual annotation of tens of thousands of nuclei is tedious and time-consuming, requiring a significant amount of human effort and domain-specific expertise. To alleviate this problem, we propose a weakly-supervised nuclei segmentation method that only requires partial point labels of nuclei. Specifically, we propose a novel boundary mining framework for nuclei segmentation, named BoNuS, which simultaneously learns nuclei interior and boundary information from the point labels. To achieve this goal, we propose a novel boundary mining loss, which guides the model to learn boundary information by exploring pairwise pixel affinity in a multiple-instance learning manner. We then consider a more challenging setting, i.e., partial point labels, for which we propose a nuclei detection module with curriculum learning that detects the missing nuclei using prior morphological knowledge. The proposed method is validated on three public datasets: the MoNuSeg, CPM, and CoNIC datasets. Experimental results demonstrate the superior performance of our method compared to state-of-the-art weakly-supervised nuclei segmentation methods. Code: https://github.com/hust-linyi/bonus.
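The pairwise pixel affinity that such a loss operates on can be derived from an instance mask as follows. This is a simplified sketch considering only right and bottom neighbours; the paper's actual affinity construction and its multiple-instance bagging are not reproduced here:

```python
import numpy as np

def pairwise_affinity(mask):
    # mask: (H, W) integer instance labels, 0 = background.
    # Returns two binary maps: same-instance affinity with the right
    # neighbour and with the bottom neighbour (background pairs excluded).
    right = np.zeros(mask.shape, dtype=bool)
    down = np.zeros(mask.shape, dtype=bool)
    right[:, :-1] = (mask[:, :-1] == mask[:, 1:]) & (mask[:, :-1] > 0)
    down[:-1, :] = (mask[:-1, :] == mask[1:, :]) & (mask[:-1, :] > 0)
    return right, down
```

Affinity drops to zero across instance boundaries, which is exactly the signal a boundary-mining loss can supervise.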
6. Zhang S, Yuan Z, Zhou X, Wang H, Chen B, Wang Y. VENet: Variational energy network for gland segmentation of pathological images and early gastric cancer diagnosis of whole slide images. Comput Methods Programs Biomed 2024;250:108178. PMID: 38652995; DOI: 10.1016/j.cmpb.2024.108178.
Abstract
BACKGROUND AND OBJECTIVE Gland segmentation of pathological images is an essential but challenging step for adenocarcinoma diagnosis. Although deep learning methods have recently made tremendous progress in gland segmentation, they have not given satisfactory boundary and region segmentation results for adjacent glands. Such glands usually differ greatly in glandular appearance, and the statistical distributions of the training and test sets are inconsistent. These problems prevent networks from generalizing well to the test dataset, complicating gland segmentation and early cancer diagnosis. METHODS To address these problems, we propose a Variational Energy Network, named VENet, with a traditional variational energy (Lv) loss for gland segmentation of pathological images and early gastric cancer detection in whole slide images (WSIs). It effectively integrates a variational mathematical model with the data-adaptability of deep learning methods to balance boundary and region segmentation. Furthermore, it can effectively segment and classify glands in large-size WSIs using reliable nucleus width and nucleus-to-cytoplasm ratio features. RESULTS VENet was evaluated on the 2015 MICCAI Gland Segmentation challenge (GlaS) dataset, the Colorectal Adenocarcinoma Glands (CRAG) dataset, and a self-collected Nanfang Hospital dataset. Compared with state-of-the-art methods, our method achieved excellent performance on GlaS Test A (object Dice 95.62, object F1 92.71, object Hausdorff distance 73.13), GlaS Test B (object Dice 94.95, object F1 95.60, object Hausdorff distance 59.63), and CRAG (object Dice 95.08, object F1 92.94, object Hausdorff distance 28.01). On the Nanfang Hospital dataset, our method achieved a kappa of 0.78, an accuracy of 0.90, a sensitivity of 0.98, and a specificity of 0.80 on the classification task over 69 test WSIs.
CONCLUSIONS The experimental results show that the proposed model accurately predicts boundaries and outperforms state-of-the-art methods. It can be applied to the early diagnosis of gastric cancer by detecting regions of high-grade gastric intraepithelial neoplasia in WSIs, assisting pathologists in analyzing large WSIs and making accurate diagnostic decisions.
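For context, the pixel-level forms of the overlap metrics reported above can be computed as follows; the object-level variants additionally match predicted glands to ground-truth glands before averaging, which this sketch omits:

```python
import numpy as np

def dice_index(pred, gt):
    # Dice = 2|P ∩ G| / (|P| + |G|) on binary masks.
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, gt).sum() / denom

def jaccard_index(pred, gt):
    # Jaccard (IoU) = |P ∩ G| / |P ∪ G| on binary masks.
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0
    return np.logical_and(pred, gt).sum() / union
```

Dice and Jaccard are monotonically related (D = 2J / (1 + J)), which is why papers often report either or both.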
Affiliation(s)
- Shuchang Zhang
- Department of Mathematics, National University of Defense Technology, Changsha, China
- Ziyang Yuan
- Academy of Military Sciences of the People's Liberation Army, Beijing, China
- Xianchen Zhou
- Department of Mathematics, National University of Defense Technology, Changsha, China
- Hongxia Wang
- Department of Mathematics, National University of Defense Technology, Changsha, China
- Bo Chen
- Suzhou Research Center, Institute of Automation, Chinese Academy of Sciences, Suzhou, China
- Yadong Wang
- Department of Laboratory Pathology, Baiyun Branch, Nanfang Hospital, Southern Medical University, Guangzhou, China
7. Alahmari SS, Goldgof D, Hall LO, Mouton PR. A Review of Nuclei Detection and Segmentation on Microscopy Images Using Deep Learning With Applications to Unbiased Stereology Counting. IEEE Trans Neural Netw Learn Syst 2024;35:7458-7477. PMID: 36327184; DOI: 10.1109/tnnls.2022.3213407.
Abstract
The detection and segmentation of stained cells and nuclei are essential prerequisites for subsequent quantitative research for many diseases. Recently, deep learning has shown strong performance in many computer vision problems, including solutions for medical image analysis. Furthermore, accurate stereological quantification of microscopic structures in stained tissue sections plays a critical role in understanding human diseases and developing safe and effective treatments. In this article, we review the most recent deep learning approaches for cell (nuclei) detection and segmentation in cancer and Alzheimer's disease with an emphasis on deep learning approaches combined with unbiased stereology. Major challenges include accurate and reproducible cell detection and segmentation of microscopic images from stained sections. Finally, we discuss potential improvements and future trends in deep learning applied to cell detection and segmentation.
8. Lin H, Falahkheirkhah K, Kindratenko V, Bhargava R. INSTRAS: INfrared Spectroscopic imaging-based TRAnsformers for medical image Segmentation. Mach Learn Appl 2024;16:100549. PMID: 39036499; PMCID: PMC11258863; DOI: 10.1016/j.mlwa.2024.100549.
Abstract
Infrared (IR) spectroscopic imaging is of potentially wide use in medical imaging applications due to its ability to capture both chemical and spatial information. The complexity of these data both necessitates the use of machine intelligence and presents an opportunity to harness a high-dimensional data set that offers far more information than today's manually interpreted images. While convolutional neural networks (CNNs), including the well-known U-Net model, have demonstrated impressive performance in image segmentation, the inherent locality of convolution limits the effectiveness of these models for encoding IR data, resulting in suboptimal performance. In this work, we propose INSTRAS (INfrared Spectroscopic imaging-based TRAnsformers for medical image Segmentation), a novel model that leverages the strength of transformer encoders to segment IR breast images effectively. Incorporating skip connections and transformer encoders, INSTRAS overcomes the limitations of purely convolutional models, such as the difficulty of capturing long-range dependencies. To compare our model against existing convolutional models, we trained various encoder-decoder models on a breast dataset of IR images. INSTRAS, utilizing 9 spectral bands for segmentation, achieved a remarkable AUC score of 0.9788, underscoring its superior capabilities compared to purely convolutional models. These experimental results attest to INSTRAS's advanced segmentation abilities for IR imaging.
Affiliation(s)
- Hangzheng Lin
- Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, IL, United States
- Volodymyr Kindratenko
- Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, IL, United States
- Center for Artificial Intelligence Innovation, National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign, IL, United States
- Rohit Bhargava
- Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, IL, United States
- Beckman Institute, University of Illinois at Urbana-Champaign, IL, United States
- Departments of Bioengineering, Mechanical Science and Engineering and Cancer Center at Illinois, University of Illinois at Urbana-Champaign, IL, United States
9. Sadikine A, Badic B, Tasu JP, Noblet V, Ballet P, Visvikis D, Conze PH. Improving abdominal image segmentation with overcomplete shape priors. Comput Med Imaging Graph 2024;113:102356. PMID: 38340573; DOI: 10.1016/j.compmedimag.2024.102356.
Abstract
The extraction of abdominal structures using deep learning has recently attracted widespread interest in medical image analysis. Automatic abdominal organ and vessel segmentation is highly desirable to guide clinicians in computer-assisted diagnosis, therapy, or surgical planning. Despite a good ability to extract large organs, the capacity of U-Net-inspired architectures to automatically delineate smaller structures remains a major issue, especially given the increase in receptive field size as we go deeper into the network. To deal with various abdominal structure sizes while exploiting efficient geometric constraints, we present a novel approach that integrates shape priors from a semi-overcomplete convolutional auto-encoder (S-OCAE) embedding into deep segmentation. Compared to standard convolutional auto-encoders (CAEs), it exploits an over-complete branch that projects data onto higher dimensions to better characterize anatomical structures with a small spatial extent. Experiments on abdominal organ and vessel delineation performed on various publicly available datasets highlight the effectiveness of our method compared to the state of the art, including U-Net trained with and without shape priors from a traditional CAE. Exploiting a semi-overcomplete convolutional auto-encoder embedding as shape priors improves the ability of deep segmentation models to provide realistic and accurate abdominal structure contours.
Affiliation(s)
- Amine Sadikine
- LaTIM UMR 1101, Inserm, Brest, 29200, France; University of Western Brittany, Brest, 29200, France
- Bogdan Badic
- LaTIM UMR 1101, Inserm, Brest, 29200, France; University Hospital of Brest, Brest, 29200, France
- Jean-Pierre Tasu
- LaTIM UMR 1101, Inserm, Brest, 29200, France; University Hospital of Poitiers, Poitiers, 86000, France
- Pascal Ballet
- LaTIM UMR 1101, Inserm, Brest, 29200, France; University of Western Brittany, Brest, 29200, France
- Pierre-Henri Conze
- LaTIM UMR 1101, Inserm, Brest, 29200, France; IMT Atlantique, Brest, 29200, France
10. Luna M, Chikontwe P, Nam S, Park SH. Attention guided multi-scale cluster refinement with extended field of view for amodal nuclei segmentation. Comput Biol Med 2024;170:108015. PMID: 38266467; DOI: 10.1016/j.compbiomed.2024.108015.
Abstract
Nuclei segmentation plays a crucial role in disease understanding and diagnosis. In whole slide images, cell nuclei often appear overlapping and densely packed with ambiguous boundaries due to the underlying 3D structure of histopathology samples. Instance segmentation via deep neural networks with object clustering is able to detect individual segments in crowded nuclei but suffers from a limited field of view, and does not support amodal segmentation. In this work, we introduce a dense feature pyramid network with a feature mixing module to increase the field of view of the segmentation model while keeping pixel-level details. We also improve the model output quality by adding a multi-scale self-attention guided refinement module that sequentially adjusts predictions as resolution increases. Finally, we enable clusters to share pixels by separating the instance clustering objective function from other pixel-related tasks, and introduce supervision to occluded areas to guide the learning process. For evaluation of amodal nuclear segmentation, we also update prior metrics used in common modal segmentation to allow the evaluation of overlapping masks and mitigate over-penalization issues via a novel unique matching algorithm. Our experiments demonstrate consistent performance across multiple datasets with significantly improved segmentation quality.
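A greedy one-to-one matching of predicted and ground-truth masks gives a rough sense of what a "unique matching" step might look like; this is a hedged sketch, not the paper's actual algorithm, and the threshold is an illustrative choice:

```python
import numpy as np

def unique_match(pred_masks, gt_masks, iou_thresh=0.5):
    # Greedy one-to-one matching: each ground-truth mask is paired with
    # the highest-IoU prediction not yet claimed by another ground truth.
    def iou(a, b):
        inter = np.logical_and(a, b).sum()
        union = np.logical_or(a, b).sum()
        return inter / union if union else 0.0

    pairs, used = [], set()
    for gi, g in enumerate(gt_masks):
        best, best_iou = None, iou_thresh
        for pi, p in enumerate(pred_masks):
            if pi in used:
                continue
            v = iou(p, g)
            if v >= best_iou:
                best, best_iou = pi, v
        if best is not None:
            used.add(best)
            pairs.append((gi, best, best_iou))
    return pairs
```

Because masks may overlap in the amodal setting, matching operates on whole masks rather than on a partition of the pixels.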
Affiliation(s)
- Miguel Luna
- Department of Robotics and Mechatronics Engineering, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu, 42988, South Korea
- Philip Chikontwe
- Department of Robotics and Mechatronics Engineering, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu, 42988, South Korea
- Siwoo Nam
- Department of Robotics and Mechatronics Engineering, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu, 42988, South Korea
- Sang Hyun Park
- Department of Robotics and Mechatronics Engineering, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu, 42988, South Korea; AI Graduate School, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu, 42988, South Korea
11. Chen J, Yang G, Liu A, Chen X, Liu J. SFE-Net: Spatial-Frequency Enhancement Network for robust nuclei segmentation in histopathology images. Comput Biol Med 2024;171:108131. PMID: 38447498; DOI: 10.1016/j.compbiomed.2024.108131.
Abstract
Morphological features of individual nuclei serve as a dependable foundation for pathologists in making accurate diagnoses. Existing methods that rely on spatial information for feature extraction have achieved commendable results in nuclei segmentation tasks. However, these approaches are not sufficient to extract edge information of nuclei with small sizes and blurred outlines. Moreover, the lack of attention to the interior of the nuclei leads to significant internal inconsistencies. To address these challenges, we introduce a novel Spatial-Frequency Enhancement Network (SFE-Net) to incorporate spatial-frequency features and promote intra-nuclei consistency for robust nuclei segmentation. Specifically, SFE-Net incorporates a distinctive Spatial-Frequency Feature Extraction module and a Spatial-Guided Feature Enhancement module, which are designed to preserve spatial-frequency information and enhance feature representation respectively, to achieve comprehensive extraction of edge information. Furthermore, we introduce the Label-Guided Distillation method, which utilizes semantic features to guide the segmentation network in strengthening boundary constraints and learning the intra-nuclei consistency of individual nuclei, to improve the robustness of nuclei segmentation. Extensive experiments on three publicly available histopathology image datasets (MoNuSeg, TNBC and CryoNuSeg) demonstrate the superiority of our proposed method, which achieves 79.23%, 81.96% and 73.26% Aggregated Jaccard Index, respectively. The proposed model is available at https://github.com/jinshachen/SFE-Net.
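The spatial/frequency split underlying such a design can be illustrated with a radial low/high-frequency decomposition via the FFT; this is an illustrative stand-in, not the paper's actual module, and the cutoff is an arbitrary choice:

```python
import numpy as np

def frequency_features(img, cutoff=0.25):
    # Split a 2D image into low- and high-frequency components with a
    # radial mask in the Fourier domain; the high-frequency part carries
    # the edge information the abstract emphasizes.
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)
    low = np.real(np.fft.ifft2(np.fft.ifftshift(f * (r <= cutoff))))
    high = np.real(np.fft.ifft2(np.fft.ifftshift(f * (r > cutoff))))
    return low, high
```

Since the two masks partition the spectrum, the components sum back to the original image, so no information is lost by the split.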
Affiliation(s)
- Jinsha Chen
- School of Information Science and Technology, University of Science and Technology of China, Hefei 230026, China
- Gang Yang
- School of Information Science and Technology, University of Science and Technology of China, Hefei 230026, China
- Aiping Liu
- School of Information Science and Technology, University of Science and Technology of China, Hefei 230026, China
- Xun Chen
- School of Information Science and Technology, University of Science and Technology of China, Hefei 230026, China
- Ji Liu
- School of Biomedical Engineering, Division of Life Sciences and Medicine, School of Information Science and Technology, University of Science and Technology of China, Hefei 230026, China; Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, China
12. Tan M, Cui Z, Zhong T, Fang Y, Zhang Y, Shen D. A progressive framework for tooth and substructure segmentation from cone-beam CT images. Comput Biol Med 2024;169:107839. PMID: 38150887; DOI: 10.1016/j.compbiomed.2023.107839.
Abstract
BACKGROUND Accurate segmentation of individual teeth and their substructures, including enamel, pulp, and dentin, from cone-beam computed tomography (CBCT) images is essential for dental diagnosis and treatment planning in digital dentistry. Existing methods for tooth segmentation based on CBCT images have achieved substantial progress; however, techniques for further segmentation into substructures are yet to be developed. PURPOSE We aim to propose a novel three-stage progressive deep-learning-based framework for automatically segmenting 3D teeth from CBCT images, focusing on the finer substructures, i.e., enamel, pulp, and dentin. METHODS In this paper, we first detect each tooth via its centroid using a clustering scheme, which efficiently determines each detection by applying learned displacement vectors from the foreground tooth region. Next, guided by the detected centroid, each tooth proposal, combined with the corresponding tooth map, is processed through our tooth segmentation network. We also present an attention-based hybrid feature fusion mechanism, which provides intricate details of the tooth boundary while maintaining the global tooth shape, thereby enhancing the segmentation process. Additionally, we utilize the skeleton of the tooth as a guide for subsequent substructure segmentation. RESULTS Our algorithm was extensively evaluated on a collected dataset of 314 patients, and the extensive comparison and ablation studies demonstrate the superior segmentation results of our approach. CONCLUSIONS Our proposed method can automatically segment teeth and their finer substructures from CBCT images, underlining its potential applicability for clinical diagnosis and surgical treatment.
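The centroid-based detection step can be sketched as displacement voting, where every foreground pixel votes for its object's center; this is a simplified 2D stand-in for the paper's 3D clustering scheme, and the names are hypothetical:

```python
import numpy as np

def centroid_votes(fg_mask, displacements):
    # fg_mask: (H, W) boolean foreground; displacements: (H, W, 2) learned
    # offsets from each foreground pixel toward its object centroid.
    # Each foreground pixel votes by adding its offset to its coordinates;
    # counting the (rounded) votes localizes the strongest centers.
    coords = np.argwhere(fg_mask)                  # (N, 2) pixel coords
    votes = coords + displacements[fg_mask]        # predicted centroids
    votes = np.round(votes).astype(int)
    uniq, counts = np.unique(votes, axis=0, return_counts=True)
    return uniq[np.argsort(-counts)]               # strongest clusters first
```

In practice the votes would be clustered with some tolerance rather than exact rounding, and each cluster would seed one tooth proposal.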
Affiliation(s)
- Minhui Tan
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China; School of Biomedical Engineering & State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, Shanghai, 201210, China
- Zhiming Cui
- School of Biomedical Engineering & State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, Shanghai, 201210, China
- Tao Zhong
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China
- Yu Fang
- School of Biomedical Engineering & State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, Shanghai, 201210, China
- Yu Zhang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China
- Dinggang Shen
- School of Biomedical Engineering & State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, Shanghai, 201210, China; Shanghai United Imaging Intelligence Co., Ltd., Shanghai, 200230, China; Shanghai Clinical Research and Trial Center, Shanghai, 201210, China
13
Shui Y, Wang Z, Liu B, Wang W, Fu S, Li Y. A three-path network with multi-scale selective feature fusion, edge-inspiring and edge-guiding for liver tumor segmentation. Comput Biol Med 2024; 168:107841. [PMID: 38081117] [DOI: 10.1016/j.compbiomed.2023.107841]
Abstract
Automatic liver tumor segmentation is one of the most important tasks in computer-aided diagnosis and treatment. Deep learning techniques have gained increasing popularity for medical image segmentation in recent years. However, due to the various shapes, sizes, and obscure boundaries of tumors, it is still difficult to automatically extract tumor regions from CT images. Based on the complementarity of edge detection and region segmentation, a three-path structure with a multi-scale selective feature fusion (MSFF) module, a multi-channel feature fusion (MFF) module, an edge-inspiring (EI) module, and an edge-guiding (EG) module is proposed in this paper. The MSFF module covers the generation, fusion, and selection of multi-scale features, and can adaptively correct the response weights in multiple branches to filter out redundant information. The MFF module integrates richer hierarchical features to capture targets at different scales. The EI module aggregates high-level semantic information at different levels to obtain fine edge semantics, which is injected into the EG module for representation learning of segmentation features. Experiments on the LiTS2017 dataset show that our proposed method achieves a Dice index of 85.55% and a Jaccard index of 81.11%, higher than those obtained by current state-of-the-art methods. Cross-dataset validation experiments on the 3Dircadb and Clinical datasets show the generalization and robustness of the proposed method, which achieves Dice indices of 80.14% and 81.68%, respectively.
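The Dice and Jaccard indices reported throughout these studies are standard overlap metrics for binary segmentation masks; a minimal sketch of how they are computed (illustrative only, not any author's evaluation code):

```python
import numpy as np

def dice_index(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

def jaccard_index(pred: np.ndarray, gt: np.ndarray) -> float:
    """Jaccard = |A ∩ B| / |A ∪ B|; related to Dice by J = D / (2 - D)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

# toy 2x3 masks: 2 overlapping foreground pixels out of 3 each
pred = np.array([[1, 1, 0], [0, 1, 0]])
gt   = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_index(pred, gt))     # 2*2/(3+3) ≈ 0.667
print(jaccard_index(pred, gt))  # 2/4 = 0.5
```

Dice weights the intersection twice, so it is always at least as large as Jaccard on the same pair of masks.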
Affiliation(s)
- Yuanyuan Shui
- School of Mathematics, Shandong University, Jinan, 250100, China
- Zhendong Wang
- School of Mathematics, Shandong University, Jinan, 250100, China
- Bin Liu
- Department of Intervention Medicine, The Second Hospital of Shandong University, Jinan, 250033, China
- Wei Wang
- Department of Intervention Medicine, The Second Hospital of Shandong University, Jinan, 250033, China
- Shujun Fu
- School of Mathematics, Shandong University, Jinan, 250100, China; Department of Intervention Medicine, The Second Hospital of Shandong University, Jinan, 250033, China
- Yuliang Li
- Department of Intervention Medicine, The Second Hospital of Shandong University, Jinan, 250033, China
14
Li S, Shi S, Fan Z, He X, Zhang N. Deep information-guided feature refinement network for colorectal gland segmentation. Int J Comput Assist Radiol Surg 2023; 18:2319-2328. [PMID: 36934367] [DOI: 10.1007/s11548-023-02857-7]
Abstract
PURPOSE Reliable quantification of colorectal histopathological images depends on precise segmentation of glands, which is challenging: glandular morphology varies widely across histological grades, malignant glands and non-gland tissues can be too similar to distinguish, and tightly connected glands are easily segmented incorrectly as a single gland. METHODS A deep information-guided feature refinement network is proposed to improve gland segmentation. Specifically, the backbone deepens the network structure to obtain effective features while maximizing the retained information, and a Multi-Scale Fusion module is proposed to increase the receptive field. In addition, to segment dense glands individually, a Multi-Scale Edge-Refined module is designed to strengthen the boundaries of glands. RESULTS Comparative experiments against eight recently proposed deep learning methods demonstrated that our proposed network has better overall performance and is more competitive on Test B. The F1 scores on Test A and Test B are 0.917 and 0.876, respectively; the object-level Dice scores are 0.921 and 0.884; and the object-level Hausdorff distances are 43.428 and 87.132. CONCLUSION The proposed colorectal gland segmentation network can effectively extract features with high representational ability and enhance edge features while retaining maximal detail, dramatically improving segmentation performance on malignant glands and yielding better results for multi-scale and closely packed glands.
Affiliation(s)
- Sheng Li
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310014, Zhejiang, China
- Shuling Shi
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310014, Zhejiang, China
- Zhenbang Fan
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310014, Zhejiang, China
- Xiongxiong He
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310014, Zhejiang, China
- Ni Zhang
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310014, Zhejiang, China
15
Wu S, Cao Y, Li X, Liu Q, Ye Y, Liu X, Zeng L, Tian M. Attention-guided multi-scale context aggregation network for multi-modal brain glioma segmentation. Med Phys 2023; 50:7629-7640. [PMID: 37151131] [DOI: 10.1002/mp.16452]
Abstract
BACKGROUND Accurate segmentation of brain glioma is a critical prerequisite for clinical diagnosis, surgical planning, and treatment evaluation. In the current clinical workflow, physicians typically delineate brain tumor subregions slice by slice, which is susceptible to inter-rater variability and also time-consuming. Besides, even though convolutional neural networks (CNNs) are driving progress, the performance of standard models still has room for further improvement. PURPOSE To deal with these issues, this paper proposes an attention-guided multi-scale context aggregation network (AMCA-Net) for the accurate segmentation of brain glioma in multi-modal magnetic resonance imaging (MRI) images. METHODS AMCA-Net extracts multi-scale features from the MRI images and fuses the extracted discriminative features via a self-attention mechanism for brain glioma segmentation. The extraction is performed via a series of down-sampling and convolution layers, and global context information guidance (GCIG) modules are developed to fuse the extracted features with contextual features. At the end of the down-sampling path, a multi-scale fusion (MSF) module is designed to exploit and combine all the extracted multi-scale features. Each of the GCIG and MSF modules contains a channel attention (CA) module that can adaptively calibrate feature responses and emphasize the most relevant features. Finally, multiple predictions at different resolutions are fused through weightings given by a multi-resolution adaptation (MRA) module, instead of averaging or max-pooling, to improve the final segmentation results. RESULTS The datasets used in this paper are publicly accessible: the Multimodal Brain Tumor Segmentation Challenges 2018 (BraTS2018) and 2019 (BraTS2019). BraTS2018 contains 285 patient cases and BraTS2019 contains 335 cases. Simulations show that AMCA-Net has better or comparable performance relative to other state-of-the-art models. On BraTS2018, the Dice score and Hausdorff 95 are 90.4% and 10.2 mm for the whole tumor region (WT), 83.9% and 7.4 mm for the tumor core region (TC), and 80.2% and 4.3 mm for the enhancing tumor region (ET); on BraTS2019, they are 91.0% and 10.7 mm for the WT, 84.2% and 8.4 mm for the TC, and 80.1% and 4.8 mm for the ET. CONCLUSIONS The proposed AMCA-Net performs comparably well against several state-of-the-art neural network models in identifying the peritumoral edema, enhancing tumor, and necrotic and non-enhancing tumor core of brain glioma, which has great potential for clinical practice. In future research, we will further explore the feasibility of applying AMCA-Net to other similar segmentation tasks.
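The Hausdorff 95 metric reported above is the 95th percentile of symmetric nearest-neighbor distances between two boundaries; a minimal sketch for 2D point sets (illustrative only, not the official BraTS evaluation code):

```python
import numpy as np

def hausdorff95(a: np.ndarray, b: np.ndarray) -> float:
    """95th-percentile symmetric Hausdorff distance between point
    sets a (n, 2) and b (m, 2). Taking the 95th percentile instead
    of the maximum makes the metric robust to a few outlier points."""
    # pairwise Euclidean distances, shape (n, m)
    d = np.sqrt(((a[:, None, :] - b[None, :, :]) ** 2).sum(-1))
    d_ab = d.min(axis=1)  # each point in a to its nearest point in b
    d_ba = d.min(axis=0)  # each point in b to its nearest point in a
    return float(np.percentile(np.concatenate([d_ab, d_ba]), 95))

# two horizontal point pairs exactly 1.0 apart vertically
a = np.array([[0.0, 0.0], [1.0, 0.0]])
b = np.array([[0.0, 1.0], [1.0, 1.0]])
print(hausdorff95(a, b))  # every nearest distance is 1.0 -> 1.0
```

For real volumes, the point sets would be the surface voxels of the predicted and ground-truth masks.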
Affiliation(s)
- Shaozhi Wu
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Yunjian Cao
- Yangtze Delta Region Institute (Quzhou), University of Electronic Science and Technology of China, Quzhou, China
- Xinke Li
- West China School of Medicine, Sichuan University, Chengdu, China
- Qiyu Liu
- Radiology Department, Mianyang Central Hospital, Mianyang, China
- Yuyun Ye
- Department of Electrical and Computer Engineering, University of Tulsa, Tulsa, Oklahoma, USA
- Xingang Liu
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Liaoyuan Zeng
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Miao Tian
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, China
16
Opoku M, Weyori BA, Adekoya AF, Adu K. CLAHE-CapsNet: Efficient retina optical coherence tomography classification using capsule networks with contrast limited adaptive histogram equalization. PLoS One 2023; 18:e0288663. [PMID: 38032915] [PMCID: PMC10688733] [DOI: 10.1371/journal.pone.0288663]
Abstract
Manual detection of eye diseases from retina Optical Coherence Tomography (OCT) images by ophthalmologists is time-consuming, tedious, and prone to errors. Previous researchers have developed computer-aided systems using deep learning-based convolutional neural networks (CNNs) to aid faster detection of retina diseases. However, these methods find it difficult to achieve better classification performance due to noise in the OCT images. Moreover, the pooling operations in CNNs reduce the resolution of the image, which limits the performance of the model. The contributions of this paper are twofold. First, it makes a comprehensive literature review to establish the current state-of-the-art methods successfully implemented in retina OCT image classification. Second, it proposes a capsule network coupled with contrast limited adaptive histogram equalization (CLAHE-CapsNet) for retina OCT image classification. The CLAHE was implemented as layers to minimize the noise in the retina image for better performance of the model. A three-layer convolutional capsule network was designed with carefully chosen hyperparameters. The dataset used for this study was presented by the University of California San Diego (UCSD) and consists of 84,495 OCT images (JPEG) in 4 categories (NORMAL, CNV, DME, and DRUSEN). The images went through a grading system consisting of multiple layers of trained expert graders for verification and correction of image labels. Evaluation experiments were conducted and results were compared with state-of-the-art models to find the best-performing model. The evaluation metrics accuracy, sensitivity, precision, specificity, and AUC are used to determine the performance of the models. The evaluation results show that the proposed model achieves the best performance, with 97.7%, 99.5%, and 99.3% for overall accuracy (OA), overall sensitivity (OS), and overall precision (OP), respectively. The results obtained indicate that the proposed model can be adopted and implemented to help ophthalmologists in detecting retina OCT diseases.
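For reference, the idea behind CLAHE builds on plain global histogram equalization, which remaps gray levels through the image's normalized cumulative histogram; CLAHE additionally equalizes per tile and clips the histogram to limit contrast amplification. A minimal sketch of the global variant (illustrative only; real CLAHE implementations differ):

```python
import numpy as np

def hist_equalize(img: np.ndarray, levels: int = 256) -> np.ndarray:
    """Global histogram equalization for a uint8 image. CLAHE extends
    this by applying it in local tiles with a clip limit on the
    histogram (not implemented in this sketch)."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]  # first nonzero CDF value
    # map each gray level through the normalized CDF
    lut = np.round((cdf - cdf_min) / (img.size - cdf_min) * (levels - 1))
    lut = np.clip(lut, 0, levels - 1).astype(np.uint8)
    return lut[img]

# toy 2x3 image with three gray levels clustered in a narrow band
img = np.array([[52, 52, 60], [60, 70, 70]], dtype=np.uint8)
out = hist_equalize(img)  # stretches the band to span 0..255
```

The sketch assumes a non-constant image; a constant image would need a guard against division by zero.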
Affiliation(s)
- Michael Opoku
- Department of Computer Science and Informatics, University of Energy and Natural Resources, Sunyani, Ghana
- Benjamin Asubam Weyori
- Department of Computer Science and Informatics, University of Energy and Natural Resources, Sunyani, Ghana
- Adebayo Felix Adekoya
- Department of Computer Science and Informatics, University of Energy and Natural Resources, Sunyani, Ghana
- Kwabena Adu
- Department of Computer Science and Informatics, University of Energy and Natural Resources, Sunyani, Ghana
17
Lin Y, Qu Z, Chen H, Gao Z, Li Y, Xia L, Ma K, Zheng Y, Cheng KT. Nuclei segmentation with point annotations from pathology images via self-supervised learning and co-training. Med Image Anal 2023; 89:102933. [PMID: 37611532] [DOI: 10.1016/j.media.2023.102933]
Abstract
Nuclei segmentation is a crucial task for whole slide image analysis in digital pathology. Generally, the segmentation performance of fully-supervised learning heavily depends on the amount and quality of the annotated data. However, it is time-consuming and expensive for professional pathologists to provide accurate pixel-level ground truth, while it is much easier to obtain coarse labels such as point annotations. In this paper, we propose a weakly-supervised learning method for nuclei segmentation that requires only point annotations for training. First, coarse pixel-level labels are derived from the point annotations based on the Voronoi diagram and the k-means clustering method to avoid overfitting. Second, a co-training strategy with an exponential moving average method is designed to refine the incomplete supervision of the coarse labels. Third, a self-supervised visual representation learning method is tailored for nuclei segmentation of pathology images, transforming the hematoxylin component images into H&E-stained images to gain a better understanding of the relationship between the nuclei and cytoplasm. We comprehensively evaluate the proposed method on two public datasets. Both visual and quantitative results demonstrate the superiority of our method over the state-of-the-art methods, as well as its competitive performance compared to fully-supervised methods. Code is available at https://github.com/hust-linyi/SC-Net.
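The Voronoi-based step can be illustrated with a toy sketch: assigning every pixel to its nearest annotated point yields exactly the Voronoi partition of the annotation points, from which per-nucleus coarse labels (and Voronoi edges as background cues) can be derived. This is an illustrative simplification, not the authors' pipeline:

```python
import numpy as np

def voronoi_labels(points: np.ndarray, h: int, w: int) -> np.ndarray:
    """Assign each pixel of an h x w grid to its nearest annotated
    point (rows of `points` are (y, x) coordinates). The resulting
    nearest-point partition is the Voronoi diagram of the points."""
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([ys.ravel(), xs.ravel()], axis=1)           # (h*w, 2)
    d = ((pix[:, None, :] - points[None, :, :]) ** 2).sum(-1)  # squared distances
    return d.argmin(axis=1).reshape(h, w)

# two hypothetical nucleus annotations on a 3x8 grid
pts = np.array([[1, 1], [1, 6]])
lab = voronoi_labels(pts, 3, 8)  # left half -> cell 0, right half -> cell 1
```

A second pass (e.g., k-means on pixel colors within each cell, as the abstract describes) would then separate coarse foreground from background inside each Voronoi cell.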
Affiliation(s)
- Yi Lin
- Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong
- Zhiyong Qu
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China
- Hao Chen
- Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong; Department of Chemical and Biological Engineering, The Hong Kong University of Science and Technology, Hong Kong
- Zhongke Gao
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China
- Lili Xia
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China
- Kai Ma
- Tencent Jarvis Lab, Shenzhen, China
- Kwang-Ting Cheng
- Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong
18
Aswath A, Alsahaf A, Giepmans BNG, Azzopardi G. Segmentation in large-scale cellular electron microscopy with deep learning: A literature survey. Med Image Anal 2023; 89:102920. [PMID: 37572414] [DOI: 10.1016/j.media.2023.102920]
Abstract
Electron microscopy (EM) enables high-resolution imaging of tissues and cells based on 2D and 3D imaging techniques. Due to the laborious and time-consuming nature of manual segmentation of large-scale EM datasets, automated segmentation approaches are crucial. This review focuses on the progress of deep learning-based segmentation techniques in large-scale cellular EM throughout the last six years, during which significant progress has been made in both semantic and instance segmentation. A detailed account is given of the key datasets that contributed to the proliferation of deep learning in 2D and 3D EM segmentation. The review covers supervised, unsupervised, and self-supervised learning methods and examines how these algorithms were adapted to the task of segmenting cellular and sub-cellular structures in EM images. The special challenges posed by such images, like heterogeneity and spatial complexity, and the network architectures that overcame some of them are described. Moreover, an overview of the evaluation measures used to benchmark EM datasets in various segmentation tasks is provided. Finally, an outlook on current trends and future prospects of EM segmentation is given, especially regarding large-scale models and the use of unlabeled images to learn generic features across EM datasets.
Affiliation(s)
- Anusha Aswath
- Bernoulli Institute of Mathematics, Computer Science and Artificial Intelligence, University of Groningen, Groningen, The Netherlands; Department of Biomedical Sciences of Cells and Systems, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands
- Ahmad Alsahaf
- Department of Biomedical Sciences of Cells and Systems, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands
- Ben N G Giepmans
- Department of Biomedical Sciences of Cells and Systems, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands
- George Azzopardi
- Bernoulli Institute of Mathematics, Computer Science and Artificial Intelligence, University of Groningen, Groningen, The Netherlands
19
Das R, Bose S, Chowdhury RS, Maulik U. Dense Dilated Multi-Scale Supervised Attention-Guided Network for histopathology image segmentation. Comput Biol Med 2023; 163:107182. [PMID: 37379615] [DOI: 10.1016/j.compbiomed.2023.107182]
Abstract
Over the last couple of decades, the introduction and proliferation of whole-slide scanners led to increasing interest in digital pathology research. Although manual analysis of histopathological images is still the gold standard, the process is often tedious and time-consuming, and it also suffers from intra- and interobserver variability. Separating structures or grading morphological changes can be difficult due to the architectural variability of these images. Deep learning techniques have shown great potential in histopathology image segmentation, drastically reducing the time needed for downstream analysis tasks and supporting accurate diagnosis. However, few algorithms have seen clinical implementation. In this paper, we propose a new deep learning model, the Dense Dilated Multiscale Supervised Attention-Guided (D2MSA) Network, for histopathology image segmentation that makes use of deep supervision coupled with a hierarchical system of novel attention mechanisms. The proposed model surpasses state-of-the-art performance while using similar computational resources. The performance of the model has been evaluated on the tasks of gland segmentation and nuclei instance segmentation, both clinically relevant for assessing the state and progression of malignancy, using histopathology image datasets for three different types of cancer. We have also performed extensive ablation tests and hyperparameter tuning to ensure the validity and reproducibility of the model performance. The proposed model is available at www.github.com/shirshabose/D2MSA-Net.
Affiliation(s)
- Rangan Das
- Department of Computer Science Engineering, Jadavpur University, Kolkata, 700032, West Bengal, India
- Shirsha Bose
- Department of Informatics, Technical University of Munich, Munich, Bavaria 85748, Germany
- Ritesh Sur Chowdhury
- Department of Electronics and Telecommunication Engineering, Jadavpur University, Kolkata, 700032, West Bengal, India
- Ujjwal Maulik
- Department of Computer Science Engineering, Jadavpur University, Kolkata, 700032, West Bengal, India
20
Cooper M, Ji Z, Krishnan RG. Machine learning in computational histopathology: Challenges and opportunities. Genes Chromosomes Cancer 2023; 62:540-556. [PMID: 37314068] [DOI: 10.1002/gcc.23177]
Abstract
Digital histopathological images, high-resolution images of stained tissue samples, are a vital tool for clinicians to diagnose and stage cancers. The visual analysis of patient state based on these images is an important part of the oncology workflow. Although pathology workflows have historically been conducted in laboratories under a microscope, the increasing digitization of histopathological images has led to their analysis on computers in the clinic. The last decade has seen the emergence of machine learning, and deep learning in particular, as a powerful set of tools for the analysis of histopathological images. Machine learning models trained on large datasets of digitized histopathology slides have yielded automated models for the prediction and stratification of patient risk. In this review, we provide context for the rise of such models in computational histopathology, highlight the clinical tasks they have been successful in automating, discuss the various machine learning techniques that have been applied to this domain, and underscore open problems and opportunities.
Affiliation(s)
- Michael Cooper
- Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
- University Health Network, Toronto, Ontario, Canada
- Vector Institute, Toronto, Ontario, Canada
- Zongliang Ji
- Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
- Vector Institute, Toronto, Ontario, Canada
- Rahul G Krishnan
- Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
- Vector Institute, Toronto, Ontario, Canada
- Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, Ontario, Canada
21
Shiffman S, Rios Piedra EA, Adedeji AO, Ruff CF, Andrews RN, Katavolos P, Liu E, Forster A, Brumm J, Fuji RN, Sullivan R. Analysis of cellularity in H&E-stained rat bone marrow tissue via deep learning. J Pathol Inform 2023; 14:100333. [PMID: 37743975] [PMCID: PMC10514468] [DOI: 10.1016/j.jpi.2023.100333]
Abstract
Our objective was to develop an automated deep-learning-based method to evaluate cellularity in rat bone marrow hematoxylin and eosin whole slide images for preclinical safety assessment. We trained a shallow CNN for segmenting marrow, two Mask R-CNN models for segmenting megakaryocytes (MKCs) and small hematopoietic cells (SHCs), and a SegNet model for segmenting red blood cells. We incorporated the models into a pipeline that identifies and counts MKCs and SHCs in rat bone marrow. We compared the cell segmentations and counts that our method generated to those that pathologists generated on 10 slides, spanning a range of cell depletion levels, from 10 studies. For SHCs, we also compared the cell counts that our method generated to counts generated by Cellpose and Stardist. The median Dice and object Dice scores for MKCs using our method versus pathologist consensus were comparable to the inter- and intra-pathologist variation, with overlapping first-to-third-quartile ranges. For SHCs, the median scores were close, with first-to-third-quartile ranges partially overlapping the intra-pathologist variation. For SHCs, counts from our method were closer to pathologist counts than those from Cellpose and Stardist, with a smaller range of 95% limits of agreement. The performance of the bone marrow analysis pipeline supports its incorporation into routine use as an aid for hematotoxicity assessment by pathologists. The pipeline could help expedite hematotoxicity assessment in preclinical studies and consequently could expedite drug development. The method may enable meta-analysis of rat bone marrow characteristics from future and historical whole slide images and may generate new biological insights from cross-study comparisons.
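The "95% limits of agreement" used here to compare automated and pathologist counts come from Bland-Altman analysis: the mean difference plus or minus 1.96 standard deviations of the differences. A minimal sketch with hypothetical per-slide counts (illustrative only, not the authors' analysis):

```python
import numpy as np

def limits_of_agreement(x: np.ndarray, y: np.ndarray):
    """Bland-Altman 95% limits of agreement between paired
    measurements: mean difference +/- 1.96 * SD of differences.
    A narrower range means the two methods agree more closely."""
    diff = x - y
    mean_diff = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation
    return mean_diff - 1.96 * sd, mean_diff + 1.96 * sd

# hypothetical per-slide SHC counts: automated method vs pathologist
method = np.array([100.0, 150.0, 210.0, 95.0])
patho  = np.array([102.0, 148.0, 205.0, 99.0])
lo, hi = limits_of_agreement(method, patho)  # interval expected to cover ~95% of differences
```

The midpoint of the interval is the systematic bias between the two raters; the width reflects their random disagreement.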
Affiliation(s)
- Smadar Shiffman
- Genentech Research and Early Development (gRED), Department of Safety Assessment, Genentech Inc., South San Francisco, USA
- Edgar A. Rios Piedra
- Genentech Research and Early Development (gRED), Department of Safety Assessment, Genentech Inc., South San Francisco, USA
- Adeyemi O. Adedeji
- Genentech Research and Early Development (gRED), Department of Safety Assessment, Genentech Inc., South San Francisco, USA
- Catherine F. Ruff
- Genentech Research and Early Development (gRED), Department of Safety Assessment, Genentech Inc., South San Francisco, USA
- Rachel N. Andrews
- Genentech Research and Early Development (gRED), Department of Safety Assessment, Genentech Inc., South San Francisco, USA
- Paula Katavolos
- Genentech Research and Early Development (gRED), Department of Safety Assessment, Genentech Inc., South San Francisco, USA
- Bristol Myers Squibb, New Brunswick, NJ 08901, USA
- Evan Liu
- Genentech Research and Early Development (gRED), Department of Development Sciences Informatics, Genentech Inc., South San Francisco, USA
- Ashley Forster
- Genentech Research and Early Development (gRED), Department of Safety Assessment, Genentech Inc., South San Francisco, USA
- University of Pennsylvania School of Veterinary Medicine, Philadelphia, PA 19104, USA
- Jochen Brumm
- Genentech Research and Early Development (gRED), Department of Nonclinical Biostatistics, Genentech Inc., South San Francisco, USA
- Reina N. Fuji
- Genentech Research and Early Development (gRED), Department of Safety Assessment, Genentech Inc., South San Francisco, USA
- Ruth Sullivan
- Genentech Research and Early Development (gRED), Department of Safety Assessment, Genentech Inc., South San Francisco, USA
22
Jain Y, Godwin LL, Ju Y, Sood N, Quardokus EM, Bueckle A, Longacre T, Horning A, Lin Y, Esplin ED, Hickey JW, Snyder MP, Patterson NH, Spraggins JM, Börner K. Segmentation of human functional tissue units in support of a Human Reference Atlas. Commun Biol 2023; 6:717. [PMID: 37468557] [PMCID: PMC10356924] [DOI: 10.1038/s42003-023-04848-5]
Abstract
The Human BioMolecular Atlas Program (HuBMAP) aims to compile a Human Reference Atlas (HRA) for the healthy adult body at the cellular level. Functional tissue units (FTUs), relevant for HRA construction, are of pathobiological significance. Manual segmentation of FTUs does not scale; highly accurate, performant, open-source machine-learning algorithms are needed. We designed and hosted a Kaggle competition focused on the development of such algorithms, in which 1200 teams from 60 countries participated. We present the competition outcomes and an expanded analysis of the winning algorithms on additional kidney and colon tissue data, and conduct a pilot study to understand the spatial location and density of FTUs across the kidney. The top algorithm from the competition, Tom, outperforms the other algorithms in the expanded study while using fewer computational resources. Tom was added to the HuBMAP infrastructure to run kidney FTU segmentation at scale, showcasing the value of Kaggle competitions for advancing research.
Collapse
Affiliation(s)
- Yashvardhan Jain
- Department of Intelligent Systems Engineering, Luddy School of Informatics, Computing, and Engineering, Indiana University, Bloomington, IN, 47408, USA.
- Leah L Godwin
- Department of Intelligent Systems Engineering, Luddy School of Informatics, Computing, and Engineering, Indiana University, Bloomington, IN, 47408, USA
- Yingnan Ju
- Department of Intelligent Systems Engineering, Luddy School of Informatics, Computing, and Engineering, Indiana University, Bloomington, IN, 47408, USA
- Naveksha Sood
- Department of Intelligent Systems Engineering, Luddy School of Informatics, Computing, and Engineering, Indiana University, Bloomington, IN, 47408, USA
- Ellen M Quardokus
- Department of Intelligent Systems Engineering, Luddy School of Informatics, Computing, and Engineering, Indiana University, Bloomington, IN, 47408, USA
- Andreas Bueckle
- Department of Intelligent Systems Engineering, Luddy School of Informatics, Computing, and Engineering, Indiana University, Bloomington, IN, 47408, USA
- Teri Longacre
- Department of Pathology, Stanford University School of Medicine, Stanford, CA, 94305, USA
- Aaron Horning
- Thermo Fisher Scientific, South San Francisco, CA, 94080, USA
- Yiing Lin
- Department of Surgery, Washington University School of Medicine, St. Louis, MO, 63110, USA
- Edward D Esplin
- Department of Genetics, Stanford University School of Medicine, Stanford, CA, 94305, USA
- John W Hickey
- Department of Microbiology & Immunology, Stanford University School of Medicine, Stanford, CA, 94305, USA
- Michael P Snyder
- Department of Genetics, Stanford University School of Medicine, Stanford, CA, 94305, USA
- Jeffrey M Spraggins
- Mass Spectrometry Research Center, Vanderbilt University, Nashville, TN, 37232, USA
- Department of Cell and Developmental Biology, Vanderbilt University, Nashville, TN, 37232, USA
- Katy Börner
- Department of Intelligent Systems Engineering, Luddy School of Informatics, Computing, and Engineering, Indiana University, Bloomington, IN, 47408, USA.
23.
Subramanya SK, Li R, Wang Y, Miyamoto H, Cui F. Deep learning for histopathological segmentation of smooth muscle in the urinary bladder. BMC Med Inform Decis Mak 2023; 23:122. [PMID: 37454065 PMCID: PMC10349433 DOI: 10.1186/s12911-023-02222-3]
Abstract
BACKGROUND Histological assessment of smooth muscle is a critical step, particularly in staging malignant tumors in various internal organs including the urinary bladder. Nonetheless, manual segmentation and classification of muscular tissues by pathologists is often challenging. Therefore, a fully automated and reliable smooth muscle image segmentation system is in high demand. METHODS To characterize muscle fibers in the urinary bladder, including muscularis mucosa (MM) and muscularis propria (MP), we assessed 277 histological images from surgical specimens using two groups of well-known deep learning (DL) models: one comprising VGG16, ResNet18, SqueezeNet, and MobileNetV2, considered a patch-based approach, and the other comprising U-Net, MA-Net, DeepLabv3+, and FPN, considered a pixel-based approach. All trained models in both groups were evaluated at the pixel level. RESULTS For segmenting MP and non-MP (including MM) regions, MobileNetV2 in the patch-based approach and U-Net in the pixel-based approach outperformed their peers, with mean Jaccard indices of 0.74 and 0.79 and mean Dice coefficients of 0.82 and 0.88, respectively. We also demonstrated the strengths and weaknesses of the models in terms of speed and prediction accuracy. CONCLUSIONS This work not only creates a benchmark for future development of tools for the histological segmentation of smooth muscle but also provides an effective DL-based diagnostic system for accurate pathological staging of bladder cancer.
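The Jaccard index and Dice coefficient reported in this entry (and in several others below) are standard set-overlap measures between a predicted mask and ground truth. As an illustrative aside (not code from any of the cited papers), a minimal sketch treating each binary mask as a set of foreground pixel coordinates:

```python
def dice(pred, gt):
    """Dice coefficient: 2|P ∩ G| / (|P| + |G|) for foreground pixel sets."""
    return 2 * len(pred & gt) / (len(pred) + len(gt))

def jaccard(pred, gt):
    """Jaccard index (IoU): |P ∩ G| / |P ∪ G|."""
    return len(pred & gt) / len(pred | gt)

# Toy 2x3 masks given as sets of (row, col) foreground coordinates.
pred = {(0, 0), (0, 1), (1, 1)}
gt = {(0, 0), (1, 1), (1, 2)}
print(dice(pred, gt))     # 2*2/6 ≈ 0.667
print(jaccard(pred, gt))  # 2/4 = 0.5
```

Because Dice counts the intersection twice, it is always at least as large as the Jaccard index on the same pair of masks.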
Affiliation(s)
- Sridevi K Subramanya
- Thomas H. Gosnell School of Life Sciences, Rochester Institute of Technology, 1 Lomb Memorial Drive, Rochester, NY, 14623, USA
- Rui Li
- Golisano College of Computing and Information Sciences, Rochester Institute of Technology, 20 Lomb Memorial Drive, Rochester, NY, 14623, USA
- Ying Wang
- Department of Pathology and Laboratory Medicine, University of Rochester Medical Center, 601 Elmwood Avenue, Rochester, NY, 14642, USA
- Hiroshi Miyamoto
- Department of Pathology and Laboratory Medicine, University of Rochester Medical Center, 601 Elmwood Avenue, Rochester, NY, 14642, USA.
- Feng Cui
- Thomas H. Gosnell School of Life Sciences, Rochester Institute of Technology, 1 Lomb Memorial Drive, Rochester, NY, 14623, USA.
24.
Meng Z, Wang G, Su F, Liu Y, Wang Y, Yang J, Luo J, Cao F, Zhen P, Huang B, Yin Y, Zhao Z, Guo L. A Deep Learning-Based System Trained for Gastrointestinal Stromal Tumor Screening Can Identify Multiple Types of Soft Tissue Tumors. Am J Pathol 2023; 193:899-912. [PMID: 37068638 DOI: 10.1016/j.ajpath.2023.03.012]
Abstract
The accuracy and timeliness of the pathologic diagnosis of soft tissue tumors (STTs) critically affect treatment decisions and patient prognosis. It is therefore crucial to make a preliminary judgment on whether a tumor is benign or malignant from hematoxylin and eosin-stained images. A deep learning-based system, Soft Tissue Tumor Box (STT-BOX), is presented herein that uses only hematoxylin and eosin images to identify malignant STTs among benign STTs with histopathologic similarity. STT-BOX took gastrointestinal stromal tumor as a baseline for malignant STT evaluation and distinguished gastrointestinal stromal tumor from leiomyoma and schwannoma with 100% area under the curve in patients from three hospitals, achieving higher accuracy than the interpretation of experienced pathologists. Notably, the system performed well on six common types of malignant STTs from The Cancer Genome Atlas data set, accurately highlighting the malignant mass lesion, and it distinguished ovarian malignant sex-cord stromal tumors without any fine-tuning. This study included mesenchymal tumors originating from the digestive system, bone and soft tissues, and the reproductive system, where the high accuracy under this cross-setting verification may reflect the morphologic similarity of the nine types of malignant tumors. Further evaluation in a pan-STT setting is a promising prospect that could obviate the overuse of immunohistochemistry and molecular tests and provide a practical basis for timely clinical treatment selection.
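The "100% area under the curve" above refers to the ROC AUC, which has a rank-statistic interpretation: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A generic sketch with hypothetical scores (not the study's data):

```python
def auc(pos_scores, neg_scores):
    """ROC AUC via the Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs ranked correctly, counting ties as half."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

# Perfect separation, as in a 100% AUC result: every positive outscores every negative.
print(auc([0.9, 0.8, 0.7], [0.6, 0.4]))  # 1.0
print(auc([0.9, 0.3], [0.5, 0.1]))       # 3 of 4 pairs correct -> 0.75
```

This pairwise formulation is equivalent to integrating the ROC curve and makes clear why AUC is insensitive to any monotonic rescaling of the classifier's scores.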
Affiliation(s)
- Zhu Meng
- Beijing University of Posts and Telecommunications and Department of Pathology, Peking University Third Hospital, Beijing Key Laboratory of Tumor Systems Biology, School of Basic Medical Sciences, Peking University Health Science Center, Beijing, China
- Guangxi Wang
- Beijing University of Posts and Telecommunications and Department of Pathology, Peking University Third Hospital, Beijing Key Laboratory of Tumor Systems Biology, School of Basic Medical Sciences, Peking University Health Science Center, Beijing, China
- Fei Su
- Beijing University of Posts and Telecommunications and Department of Pathology, Peking University Third Hospital, Beijing Key Laboratory of Tumor Systems Biology, School of Basic Medical Sciences, Peking University Health Science Center, Beijing, China; Beijing Key Laboratory of Network System and Network Culture, Beijing, China
- Yan Liu
- Beijing University of Posts and Telecommunications and Department of Pathology, Peking University Third Hospital, Beijing Key Laboratory of Tumor Systems Biology, School of Basic Medical Sciences, Peking University Health Science Center, Beijing, China
- Yuxiang Wang
- Beijing University of Posts and Telecommunications and Department of Pathology, Peking University Third Hospital, Beijing Key Laboratory of Tumor Systems Biology, School of Basic Medical Sciences, Peking University Health Science Center, Beijing, China
- Jing Yang
- Beijing University of Posts and Telecommunications and Department of Pathology, Peking University Third Hospital, Beijing Key Laboratory of Tumor Systems Biology, School of Basic Medical Sciences, Peking University Health Science Center, Beijing, China
- Jianyuan Luo
- Department of Medical Genetics, School of Basic Medical Sciences, Peking University Health Science Center, Beijing, China
- Fang Cao
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Department of Pathology, Peking University Cancer Hospital and Institute, Beijing, China
- Panpan Zhen
- Department of Pathology, Beijing Luhe Hospital, Capital Medical University, Beijing, China
- Binhua Huang
- Department of Pathology, Dongguan Houjie Hospital, Dongguan, China
- Yuxin Yin
- Beijing University of Posts and Telecommunications and Department of Pathology, Peking University Third Hospital, Beijing Key Laboratory of Tumor Systems Biology, School of Basic Medical Sciences, Peking University Health Science Center, Beijing, China
- Zhicheng Zhao
- Beijing University of Posts and Telecommunications and Department of Pathology, Peking University Third Hospital, Beijing Key Laboratory of Tumor Systems Biology, School of Basic Medical Sciences, Peking University Health Science Center, Beijing, China; Beijing Key Laboratory of Network System and Network Culture, Beijing, China.
- Limei Guo
- Beijing University of Posts and Telecommunications and Department of Pathology, Peking University Third Hospital, Beijing Key Laboratory of Tumor Systems Biology, School of Basic Medical Sciences, Peking University Health Science Center, Beijing, China.
25.
Torres HR, Oliveira B, Fonseca JC, Morais P, Vilaca JL. Dual consistency loss for contour-aware segmentation in medical images. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. [PMID: 38082637 DOI: 10.1109/embc40787.2023.10340931]
Abstract
Medical image segmentation is a paramount task for several clinical applications, namely the diagnosis of pathologies, treatment planning, and aiding image-guided surgeries. With the development of deep learning, Convolutional Neural Networks (CNNs) have become the state of the art for medical image segmentation. However, precise object boundary delineation remains an issue, since traditional CNNs can produce non-smooth segmentations with boundary discontinuities. In this work, a U-shaped CNN architecture is proposed that generates both a pixel-wise segmentation and a probabilistic contour map of the object to segment, in order to produce reliable segmentations at the object's boundaries. Moreover, since the segmentation and contour maps are inherently related to each other, a dual consistency loss that relates the two outputs of the network is proposed. The network is thus enforced to learn the segmentation and contour delineation tasks consistently during training. The proposed method was applied and validated on a public dataset of cardiac 3D ultrasound images of the left ventricle. The results showed good performance and applicability to the cardiac dataset, indicating the method's potential for use in clinical practice for medical image segmentation. Clinical Relevance: The proposed network with a dual consistency loss scheme can improve the performance of state-of-the-art CNNs for medical image segmentation, proving its value for computer-aided diagnosis.
26.
Wang J, Li S, Yu L, Qu A, Wang Q, Liu J, Wu Q. SDPN: A Slight Dual-Path Network With Local-Global Attention Guided for Medical Image Segmentation. IEEE J Biomed Health Inform 2023; 27:2956-2967. [PMID: 37030687 DOI: 10.1109/jbhi.2023.3260026]
Abstract
Accurate identification of lesions is a key step in surgical planning. However, this task poses two main challenges: 1) because of the complex anatomical shapes of different lesions, most segmentation methods achieve outstanding performance only for a specific structure, rather than for other lesions at different locations; and 2) the huge number of parameters limits existing transformer-based segmentation models. To overcome these problems, we propose a novel slight dual-path network (SDPN) to accurately segment lesions or organs with highly variable locations. First, we design a dual-path module to integrate local and global features without notable memory consumption. Second, a novel multi-spectrum attention module is proposed to attend further to detailed information, automatically adapting to the variable segmentation target. Then, a compression module based on tensor ring decomposition is designed to compress the convolutional and transformer structures. In the experiments, four datasets, including three benchmark datasets and a clinical dataset, are used to evaluate SDPN. The results show that SDPN performs better than other state-of-the-art methods for brain tumor, liver tumor, endometrial tumor, and cardiac segmentation. To assess generalizability, we train the network on Kvasir-SEG and test on CVC-ClinicDB, which was collected at a different institution. The quantitative analysis shows that the clinical evaluation results are consistent with the experts. Therefore, this model may be a potential candidate for segmenting lesions and organs with variable locations in clinical applications.
27.
Olory Agomma R, Cresson T, de Guise J, Vazquez C. Automatic lower limb bone segmentation in radiographs with different orientations and fields of view based on a contextual network. Int J Comput Assist Radiol Surg 2023; 18:641-651. [PMID: 36463545 DOI: 10.1007/s11548-022-02798-7]
Abstract
PURPOSE Bone identification and segmentation in X-ray images are crucial in orthopedics for the automation of clinical procedures, but it often involves some manual operations. In this work, using a modified SegNet neural network, we automatically identify and segment lower limb bone structures on radiographs presenting various fields of view and different patient orientations. METHODS A wide contextual neural network architecture is proposed to perform a high-quality pixel-wise semantic segmentation on X-ray images presenting structures with a similar appearance and strong superposition. The proposed architecture is based on the premise that every output pixel on the label map has a wide receptive field. This allows the network to capture both global and local contextual information. The overlapping between structures is handled with additional labels. RESULTS The proposed approach was evaluated on a test dataset composed of 70 radiographs with entire and partial bones. We obtained an average detection rate of 98.00% and an average Dice coefficient of 95.25 ± 9.02% across all classes. For the challenging subset of images with high superposition, we obtained an average detection rate of 96.36% and an average Dice coefficient of 93.81 ± 10.03% across all classes. CONCLUSION The results show the effectiveness of the proposed approach in segmenting and identifying lower limb bone structures and overlapping structures in radiographs with strong bone superposition and highly variable configurations, as well as in radiographs containing only small pieces of bone structures.
Affiliation(s)
- Roseline Olory Agomma
- Laboratoire de recherche en imagerie et orthopédie, 900 Saint-Denis Street, Montreal, QC, Canada.
- École de technologie supérieure, 1100 Notre-Dame St. West, Montreal, QC, Canada.
- Centre de recherche du CHUM, 900 Saint-Denis Street, Montreal, QC, Canada.
- Thierry Cresson
- Laboratoire de recherche en imagerie et orthopédie, 900 Saint-Denis Street, Montreal, QC, Canada
- École de technologie supérieure, 1100 Notre-Dame St. West, Montreal, QC, Canada
- Centre de recherche du CHUM, 900 Saint-Denis Street, Montreal, QC, Canada
- Jacques de Guise
- Laboratoire de recherche en imagerie et orthopédie, 900 Saint-Denis Street, Montreal, QC, Canada
- École de technologie supérieure, 1100 Notre-Dame St. West, Montreal, QC, Canada
- Centre de recherche du CHUM, 900 Saint-Denis Street, Montreal, QC, Canada
- Carlos Vazquez
- Laboratoire de recherche en imagerie et orthopédie, 900 Saint-Denis Street, Montreal, QC, Canada
- École de technologie supérieure, 1100 Notre-Dame St. West, Montreal, QC, Canada
28.
Lou W, Li H, Li G, Han X, Wan X. Which Pixel to Annotate: A Label-Efficient Nuclei Segmentation Framework. IEEE Trans Med Imaging 2023; 42:947-958. [PMID: 36355729 DOI: 10.1109/tmi.2022.3221666]
Abstract
Recently, deep neural networks, which require a large amount of annotated samples, have been widely applied to nuclei instance segmentation of H&E stained pathology images. However, it is inefficient and unnecessary to label all pixels for a dataset of nuclei images, which usually contain similar and redundant patterns. Although unsupervised and semi-supervised learning methods have been studied for nuclei segmentation, very few works have delved into the selective labeling of samples to reduce the annotation workload. Thus, in this paper, we propose a novel full nuclei segmentation framework that chooses only a few image patches to be annotated, augments the training set from the selected samples, and achieves nuclei segmentation in a semi-supervised manner. In the proposed framework, we first develop a novel consistency-based patch selection method to determine which image patches are the most beneficial to training. Then we introduce a conditional single-image GAN with a component-wise discriminator to synthesize more training samples. Lastly, our framework trains an existing segmentation model with the augmented samples. Experimental results show that the proposed method can match the performance of a fully supervised baseline while annotating less than 5% of pixels on some benchmarks.
29.
Karri M, Annavarapu CSR, Acharya UR. Skin lesion segmentation using two-phase cross-domain transfer learning framework. Comput Methods Programs Biomed 2023; 231:107408. [PMID: 36805279 DOI: 10.1016/j.cmpb.2023.107408]
Abstract
BACKGROUND AND OBJECTIVE Deep learning (DL) models have been used for medical imaging for a long time, but they did not achieve their full potential in the past because of insufficient computing power and scarcity of training data. In recent years, we have seen substantial growth in DL networks because of improved technology and an abundance of data. However, previous studies indicate that even a well-trained DL algorithm may struggle to generalize to data from multiple sources because of domain shifts. Additionally, the ineffectiveness of basic data fusion methods, the complexity of segmentation targets, and the low interpretability of current DL models limit their use in clinical decisions. To meet these challenges, we present a new two-phase cross-domain transfer learning system for effective skin lesion segmentation from dermoscopic images. METHODS Our system is based on two significant technical contributions. We examine a two-phase cross-domain transfer learning approach, including model-level and data-level transfer learning, by fine-tuning the system on two datasets, MoleMap and ImageNet. We then present nSknRSUNet, a high-performing DL network for skin lesion segmentation using broad receptive fields and spatial edge attention feature fusion. To quantify these two contributions, we examine the trained model's generalization capability on skin lesion segmentation, cross-examining the model on two skin lesion image datasets, MoleMap and HAM10000, obtained from varied clinical contexts. RESULTS With data-level transfer learning on the HAM10000 dataset, the proposed model obtained a DSC of 94.63% and an accuracy of 99.12%. In cross-examination with data-level transfer learning on the MoleMap dataset, the proposed model obtained a DSC of 93.63% and an accuracy of 97.01%. CONCLUSION Numerous experiments reveal that our system produces excellent performance and improves upon state-of-the-art methods on both qualitative and quantitative measures.
Affiliation(s)
- Meghana Karri
- Department of Computer Science and Engineering, Indian Institute of Technology (Indian School of Mines), Dhanbad, 826004, Jharkhand, India.
- Chandra Sekhara Rao Annavarapu
- Department of Computer Science and Engineering, Indian Institute of Technology (Indian School of Mines), Dhanbad, 826004, Jharkhand, India.
- U Rajendra Acharya
- Ngee Ann Polytechnic, Department of Electronics and Computer Engineering, 599489, Singapore; Department of Biomedical Engineering, School of Science and Technology, SUSS University, Singapore; Department of Biomedical Informatics and Medical Engineering, Asia University, Taichung, Taiwan.
30.
Cao Z, Chen F, Grais EM, Yue F, Cai Y, Swanepoel DW, Zhao F. Machine Learning in Diagnosing Middle Ear Disorders Using Tympanic Membrane Images: A Meta-Analysis. Laryngoscope 2023; 133:732-741. [PMID: 35848851 DOI: 10.1002/lary.30291]
Abstract
OBJECTIVE To systematically evaluate the development of machine learning (ML) models and compare their diagnostic accuracy for the classification of middle ear disorders (MED) using tympanic membrane (TM) images. METHODS PubMed, EMBASE, CINAHL, and CENTRAL were searched up to November 30, 2021. Studies on the development of ML approaches for diagnosing MED using TM images were selected according to the inclusion criteria. PRISMA guidelines were followed, with study design, analysis method, and outcomes extracted. Sensitivity, specificity, and area under the curve (AUC) were used to summarize the performance metrics of the meta-analysis. Risk of bias was assessed using the Quality Assessment of Diagnostic Accuracy Studies-2 tool in combination with the Prediction Model Risk of Bias Assessment Tool. RESULTS Sixteen studies were included, encompassing 20,254 TM images (7,025 normal TM and 13,229 MED). The sample size ranged from 45 to 6,066 per study. The accuracy of the 25 included ML approaches ranged from 76.00% to 98.26%. Eleven studies (68.8%) were rated as having a low risk of bias, with the reference standard as the major domain of high risk of bias (37.5%). Sensitivity and specificity were 93% (95% CI, 90%-95%) and 85% (95% CI, 82%-88%), respectively. The AUC over all TM images was 94% (95% CI, 91%-96%). A greater AUC was found using otoendoscopic images than otoscopic images. CONCLUSIONS ML approaches perform robustly in distinguishing between normal ears and MED; however, a standardized TM image acquisition and annotation protocol should be developed. LEVEL OF EVIDENCE NA. Laryngoscope, 133:732-741, 2023.
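The pooled sensitivity and specificity above follow directly from a 2x2 confusion matrix. A minimal sketch with invented counts (not the meta-analysis data):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN), the true-positive rate among diseased cases;
    specificity = TN/(TN+FP), the true-negative rate among healthy cases."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical test set: 50 diseased ears, 50 normal ears.
sens, spec = sensitivity_specificity(tp=45, fn=5, tn=40, fp=10)
print(sens, spec)  # 0.9 0.8
```

Note that sensitivity and specificity are computed over disjoint subsets of the test set, which is why they can be pooled separately across studies in a meta-analysis.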
Affiliation(s)
- Zuwei Cao
- Center for Rehabilitative Auditory Research, Guizhou Provincial People's Hospital, Guiyang City, China
- Feifan Chen
- Centre for Speech and Language Therapy and Hearing Science, Cardiff School of Sport and Health Sciences, Cardiff Metropolitan University, Cardiff, UK
- Emad M Grais
- Centre for Speech and Language Therapy and Hearing Science, Cardiff School of Sport and Health Sciences, Cardiff Metropolitan University, Cardiff, UK
- Fengjuan Yue
- Medical Examination Center, Guizhou Provincial People's Hospital, Guiyang City, China
- Yuexin Cai
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou City, China
- De Wet Swanepoel
- Department of Speech-Language Pathology and Audiology, University of Pretoria, Pretoria, South Africa
- Fei Zhao
- Centre for Speech and Language Therapy and Hearing Science, Cardiff School of Sport and Health Sciences, Cardiff Metropolitan University, Cardiff, UK
31.
Rothman JS, Borges-Merjane C, Holderith N, Jonas P, Silver RA. Validation of a stereological method for estimating particle size and density from 2D projections with high accuracy. PLoS One 2023; 18:e0277148. [PMID: 36930689 PMCID: PMC10022809 DOI: 10.1371/journal.pone.0277148]
Abstract
Stereological methods for estimating the 3D particle size and density from 2D projections are essential to many research fields. These methods are, however, prone to errors arising from undetected particle profiles due to sectioning and limited resolution, known as 'lost caps'. A potential solution developed by Keiding, Jensen, and Ranek in 1972, which we refer to as the Keiding model, accounts for lost caps by quantifying the smallest detectable profile in terms of its limiting 'cap angle' (ϕ), a size-independent measure of a particle's distance from the section surface. However, this simple solution has not been widely adopted nor tested. Rather, model-independent design-based stereological methods, which do not explicitly account for lost caps, have come to the fore. Here, we provide the first experimental validation of the Keiding model by comparing the size and density of particles estimated from 2D projections with direct measurement from 3D EM reconstructions of the same tissue. We applied the Keiding model to estimate the size and density of somata, nuclei and vesicles in the cerebellum of mice and rats, where high packing density can be problematic for design-based methods. Our analysis reveals a Gaussian distribution for ϕ rather than a single value. Nevertheless, curve fits of the Keiding model to the 2D diameter distribution accurately estimate the mean ϕ and 3D diameter distribution. While systematic testing using simulations revealed an upper limit to determining ϕ, our analysis shows that estimated ϕ can be used to determine the 3D particle density from the 2D density under a wide range of conditions, and this method is potentially more accurate than minimum-size-based lost-cap corrections and disector methods. Our results show the Keiding model provides an efficient means of accurately estimating the size and density of particles from 2D projections even under conditions of a high density.
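The 'lost caps' effect described in this abstract can be illustrated with a small Monte Carlo sketch (our own toy construction, not the paper's method): spheres whose thin polar caps barely enter the section produce small 2D profiles that fall below a detection limit, so the observed profile count underestimates the true count. Here `r_min` is an assumed minimum detectable profile radius, playing the role of the paper's limiting cap angle ϕ.

```python
import math
import random

random.seed(0)
R, T = 1.0, 0.5   # sphere radius and section thickness (arbitrary units)
r_min = 0.3       # assumed minimum detectable profile radius ('lost cap' limit)
n = 100_000

intersecting = detected = 0
for _ in range(n):
    # Sphere center height relative to a section slab spanning [0, T].
    z = random.uniform(-R, T + R)
    # Distance from the center to the nearest slab face (0 if inside the slab).
    d = 0.0 if 0.0 <= z <= T else min(abs(z), abs(z - T))
    if d < R:  # sphere intersects the slab and leaves some profile
        r = math.sqrt(R * R - d * d) if d > 0 else R  # largest profile radius
        intersecting += 1
        if r >= r_min:
            detected += 1

print(detected / intersecting)  # a few percent of profiles are lost caps
```

Recovering the true 3D density from the detected 2D profile density requires a model of this detection limit, which is exactly what the Keiding model's cap angle supplies.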
Affiliation(s)
- Jason Seth Rothman
- Department of Neuroscience, Physiology and Pharmacology, University College London, London, United Kingdom
- Noemi Holderith
- Laboratory of Cellular Neurophysiology, Institute of Experimental Medicine, Budapest, Hungary
- Peter Jonas
- Cellular Neuroscience, Institute of Science and Technology Austria, Klosterneuburg, Austria
- R. Angus Silver
- Department of Neuroscience, Physiology and Pharmacology, University College London, London, United Kingdom
32.
Guo R, Xie K, Pagnucco M, Song Y. SAC-Net: Learning with weak and noisy labels in histopathology image segmentation. Med Image Anal 2023; 86:102790. [PMID: 36878159 DOI: 10.1016/j.media.2023.102790]
Abstract
Deep convolutional neural networks have been highly effective in segmentation tasks. However, segmentation becomes more difficult when training images include many complex instances to segment, as in the task of nuclei segmentation in histopathology images. Weakly supervised learning can reduce the need for large-scale, high-quality ground truth annotations by involving non-expert annotators or algorithms to generate supervision information for segmentation. However, there is still a significant performance gap between weakly supervised and fully supervised learning approaches. In this work, we propose a weakly supervised nuclei segmentation method with a two-stage training scheme that only requires annotation of the nuclear centroids. First, we generate boundary and superpixel-based masks as pseudo ground truth labels to train our SAC-Net, a segmentation network enhanced by a constraint network and an attention network to effectively address the problems caused by noisy labels. Then, we refine the pseudo labels at the pixel level based on Confident Learning and train the network again. Our method shows highly competitive performance for cell nuclei segmentation in histopathology images on three public datasets. Code will be available at: https://github.com/RuoyuGuo/MaskGA_Net.
Affiliation(s)
- Ruoyu Guo
- School of Computer Science and Engineering, University of New South Wales, Australia
- Kunzi Xie
- School of Computer Science and Engineering, University of New South Wales, Australia
- Maurice Pagnucco
- School of Computer Science and Engineering, University of New South Wales, Australia
- Yang Song
- School of Computer Science and Engineering, University of New South Wales, Australia.
33.
Monitoring Visual Properties of Food in Real Time During Food Drying. Food Eng Rev 2023. [DOI: 10.1007/s12393-023-09334-6]
34.
A Heuristic Machine Learning-Based Optimization Technique to Predict Lung Cancer Patient Survival. Comput Intell Neurosci 2023; 2023:4506488. [PMID: 36776617 PMCID: PMC9911240 DOI: 10.1155/2023/4506488]
Abstract
Cancer has been a significant threat to human health and well-being, and the high death rate among cancer patients is primarily due to the complexity of the disease and the wide range of clinical outcomes. Predicting the survival of cancer patients, and increasing the accuracy of that prediction, has become a key issue in cancer research. Many models have been suggested to date, but most of them use only genetic data or only clinical data to construct survival prediction models. Present survival studies place much emphasis on determining whether or not a patient will survive five years; the more personal question of how long a lung cancer patient will survive remains unanswered. The proposed technique, Naive Bayes combined with SSA, estimates the overall survival time of patients with lung cancer. Two machine learning challenges are derived from a single customized query. The first is a simple binary question: will the patient survive for more than five years? The second is to develop a five-year survival model using regression analysis. When forecasting how long a lung cancer patient will survive within five years, the mean absolute error (MAE) of this technique's predictions is accurate to within a month. Several biomarker genes have been associated with lung cancers. The accuracy, recall, and precision achieved by this algorithm are 98.78%, 98.4%, and 98.6%, respectively.
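The mean absolute error used above to judge the survival-time regression is simply the average absolute deviation between predicted and actual survival. A generic sketch with invented numbers in months (not the study's data):

```python
def mean_absolute_error(y_true, y_pred):
    """Average of |actual - predicted| over all patients."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical survival times in months vs. model predictions.
actual = [12, 30, 5, 48]
predicted = [10, 33, 5, 47]
print(mean_absolute_error(actual, predicted))  # (2 + 3 + 0 + 1) / 4 = 1.5
```

Unlike squared-error metrics, MAE is in the same units as the target (months here), which is what makes a claim like "accurate to within a month" directly interpretable.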
|
35
|
Zhao Y, Wang X, Che T, Bao G, Li S. Multi-task deep learning for medical image computing and analysis: A review. Comput Biol Med 2023; 153:106496. [PMID: 36634599 DOI: 10.1016/j.compbiomed.2022.106496] [Citation(s) in RCA: 14] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2022] [Revised: 12/06/2022] [Accepted: 12/27/2022] [Indexed: 12/29/2022]
Abstract
The renaissance of deep learning has provided promising solutions to various tasks. While conventional deep learning models are constructed for a single specific task, multi-task deep learning (MTDL), which is capable of simultaneously accomplishing at least two tasks, has attracted research attention. MTDL is a joint learning paradigm that harnesses the inherent correlation of multiple related tasks to achieve reciprocal benefits in improving performance, enhancing generalizability, and reducing the overall computational cost. This review focuses on the advanced applications of MTDL for medical image computing and analysis. We first summarize four popular MTDL network architectures (i.e., cascaded, parallel, interacted, and hybrid). Then, we review representative MTDL-based networks for eight application areas, including the brain, eye, chest, cardiac, abdomen, musculoskeletal, pathology, and other human body regions. While MTDL-based medical image processing has been flourishing and demonstrating outstanding performance in many tasks, performance gaps remain in others, and accordingly we discuss the open challenges and prospective trends. For instance, in the 2018 Ischemic Stroke Lesion Segmentation challenge, the reported top Dice score of 0.51 and top recall of 0.55 achieved by the cascaded MTDL model indicate that further research efforts are in high demand to improve the performance of current models.
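For reference, the Dice score cited above compares a predicted and a ground-truth binary mask as twice their overlap divided by their total size; a minimal NumPy version:

```python
import numpy as np

def dice_score(pred, gt):
    """Dice coefficient 2|A ∩ B| / (|A| + |B|) between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(2.0 * np.logical_and(pred, gt).sum() / denom)

pred = np.array([[1, 1], [0, 0]])
gt = np.array([[1, 0], [0, 0]])
d = dice_score(pred, gt)  # overlap 1 px, sizes 2 + 1 -> 2/3
```

A Dice of 0.51, as reported for the stroke-lesion challenge, means barely half of the combined mask area is shared between prediction and reference.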
Affiliation(s)
- Yan Zhao
- Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing, 100083, China
- Xiuying Wang
- School of Computer Science, The University of Sydney, Sydney, NSW, 2008, Australia
- Tongtong Che
- Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing, 100083, China
- Guoqing Bao
- School of Computer Science, The University of Sydney, Sydney, NSW, 2008, Australia
- Shuyu Li
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, 100875, China
|
36
|
Zhang W, Zhang J, Wang X, Yang S, Huang J, Yang W, Wang W, Han X. Merging nucleus datasets by correlation-based cross-training. Med Image Anal 2023; 84:102705. [PMID: 36525843 DOI: 10.1016/j.media.2022.102705] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/23/2022] [Revised: 11/16/2022] [Accepted: 11/24/2022] [Indexed: 12/12/2022]
Abstract
Fine-grained nucleus classification is challenging because of the high inter-class similarity and intra-class variability. Therefore, a large amount of labeled data is required for training effective nucleus classification models. However, it is challenging to label a large-scale nucleus classification dataset comparable to ImageNet in natural images, considering that high-quality nucleus labeling requires specific domain knowledge. In addition, the existing publicly available datasets are often inconsistently labeled with divergent labeling criteria. Due to this inconsistency, conventional models have to be trained on each dataset separately and work independently to infer their own classification results, limiting their classification performance. To fully utilize all annotated datasets, we formulate the nucleus classification task as a multi-label problem with missing labels, so that all datasets can be used in a unified framework. Specifically, we merge all datasets and combine their label sets as multiple labels. Thus, each sample has one ground-truth label and several missing labels. We devise a base classification module that is trained using all data but sparsely supervised by the ground-truth labels only. We then exploit the correlation among different label sets with a label correlation module. By doing so, we obtain two trained basic modules and further cross-train them with both ground-truth labels and pseudo labels for the missing ones. Importantly, data without any ground-truth labels can also be involved in our framework, as we can regard them as data with all labels missing and generate the corresponding pseudo labels. We carefully re-organized multiple publicly available nucleus classification datasets, converted them into a uniform format, and tested the proposed framework on them. Experimental results show substantial improvement compared to the state-of-the-art methods.
The code and data are available at https://w-h-zhang.github.io/projects/dataset_merging/dataset_merging.html.
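The "multi-label problem with missing labels" described above can be supervised sparsely by masking the loss to observed entries only. A minimal NumPy sketch — the MISSING sentinel and toy logits are illustrative, not the paper's implementation:

```python
import numpy as np

MISSING = -1  # sentinel: this class is unlabeled for this sample

def masked_bce(logits, labels):
    """Binary cross-entropy averaged over observed labels only; entries
    equal to MISSING are excluded from the loss entirely."""
    probs = 1.0 / (1.0 + np.exp(-logits))
    observed = labels != MISSING
    y = np.where(observed, labels, 0).astype(float)
    per_entry = -(y * np.log(probs) + (1.0 - y) * np.log(1.0 - probs))
    return float(per_entry[observed].mean())

logits = np.array([[5.0, -5.0, 0.0]])                    # one sample, three label sets
good = masked_bce(logits, np.array([[1, 0, MISSING]]))   # confident and right
bad = masked_bce(logits, np.array([[0, 1, MISSING]]))    # confident and wrong
```

In the paper's framework, the masked entries are later filled with pseudo labels during cross-training, after which they too contribute to the loss.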
Affiliation(s)
- Wenhua Zhang
- Department of Computer Science, The University of Hong Kong, Hong Kong, China
- Jun Zhang
- Tencent AI Lab, Shenzhen, Guangdong, China
- Xiyue Wang
- Tencent AI Lab, Shenzhen, Guangdong, China
- Sen Yang
- Tencent AI Lab, Shenzhen, Guangdong, China
- Wei Yang
- Tencent AI Lab, Shenzhen, Guangdong, China
- Xiao Han
- Tencent AI Lab, Shenzhen, Guangdong, China
|
37
|
Deep learning for computational cytology: A survey. Med Image Anal 2023; 84:102691. [PMID: 36455333 DOI: 10.1016/j.media.2022.102691] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2022] [Revised: 10/22/2022] [Accepted: 11/09/2022] [Indexed: 11/16/2022]
Abstract
Computational cytology is a critical, rapidly developing, yet challenging topic in medical image computing, concerned with analyzing digitized cytology images by computer-aided technologies for cancer screening. Recently, an increasing number of deep learning (DL) approaches have made significant achievements in medical image analysis, boosting the number of publications in cytological studies. In this article, we survey more than 120 publications on DL-based cytology image analysis to investigate the advanced methods and comprehensive applications. We first introduce various deep learning schemes, including fully supervised, weakly supervised, unsupervised, and transfer learning. Then, we systematically summarize public datasets, evaluation metrics, and versatile cytology image analysis applications, including cell classification, slide-level cancer screening, and nuclei or cell detection and segmentation. Finally, we discuss current challenges and potential research directions of computational cytology.
|
38
|
Foucart A, Debeir O, Decaestecker C. Shortcomings and areas for improvement in digital pathology image segmentation challenges. Comput Med Imaging Graph 2023; 103:102155. [PMID: 36525770 DOI: 10.1016/j.compmedimag.2022.102155] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2022] [Revised: 09/13/2022] [Accepted: 11/27/2022] [Indexed: 12/13/2022]
Abstract
Digital pathology image analysis challenges have been organised regularly since 2010, often with events hosted at major conferences and results published in high-impact journals. These challenges mobilise a lot of energy from organisers, participants, and expert annotators (especially for image segmentation challenges). This study reviews image segmentation challenges in digital pathology and the top-ranked methods, with a particular focus on how reference annotations are generated and how the methods' predictions are evaluated. We found important shortcomings in the handling of inter-expert disagreement and the relevance of the evaluation process chosen. We also noted key problems with the quality control of various challenge elements that can lead to uncertainties in the published results. Our findings show the importance of greatly increasing transparency in the reporting of challenge results, and the need to make publicly available the evaluation codes, test set annotations and participants' predictions. The aim is to properly ensure the reproducibility and interpretation of the results and to increase the potential for exploitation of the substantial work done in these challenges.
Affiliation(s)
- Adrien Foucart
- Laboratory of Image Synthesis and Analysis, Université Libre de Bruxelles, Av. F.D. Roosevelt 50, 1050 Brussels, Belgium
- Olivier Debeir
- Laboratory of Image Synthesis and Analysis, Université Libre de Bruxelles, Av. F.D. Roosevelt 50, 1050 Brussels, Belgium; Center for Microscopy and Molecular Imaging, Université Libre de Bruxelles, Charleroi, Belgium
- Christine Decaestecker
- Laboratory of Image Synthesis and Analysis, Université Libre de Bruxelles, Av. F.D. Roosevelt 50, 1050 Brussels, Belgium; Center for Microscopy and Molecular Imaging, Université Libre de Bruxelles, Charleroi, Belgium
|
39
|
Mukherjee S, Sarkar R, Manich M, Labruyere E, Olivo-Marin JC. Domain Adapted Multitask Learning for Segmenting Amoeboid Cells in Microscopy. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:42-54. [PMID: 36044485 DOI: 10.1109/tmi.2022.3203022] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/10/2023]
Abstract
The method proposed in this paper is a robust combination of multi-task learning and unsupervised domain adaptation for segmenting amoeboid cells in microscopy. A highlight of this work is the manner in which the model's hyperparameters are estimated. The detriments of ad-hoc parameter estimation are well known, but this issue remains largely unaddressed in the context of CNN-based segmentation. Using a novel min-max formulation of the segmentation cost function, our proposed method analytically estimates the model's hyperparameters while simultaneously learning the CNN weights during training. This end-to-end framework provides a consolidated mechanism to harness the potential of multi-task learning to isolate and segment clustered cells from low-contrast brightfield images, and it simultaneously leverages deep domain adaptation to segment fluorescent cells without explicit pixel-level re-annotation of the data. Experimental validations on multi-cellular images strongly suggest the effectiveness of the proposed technique, and our quantitative results show at least 15% and 10% improvement in cell segmentation on brightfield and fluorescence images, respectively, compared to contemporary supervised segmentation methods.
|
40
|
Dogar GM, Shahzad M, Fraz MM. Attention augmented distance regression and classification network for nuclei instance segmentation and type classification in histology images. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104199] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/14/2022]
|
41
|
Graham S, Vu QD, Jahanifar M, Raza SEA, Minhas F, Snead D, Rajpoot N. One model is all you need: Multi-task learning enables simultaneous histology image segmentation and classification. Med Image Anal 2023; 83:102685. [PMID: 36410209 DOI: 10.1016/j.media.2022.102685] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2022] [Revised: 10/20/2022] [Accepted: 11/03/2022] [Indexed: 11/13/2022]
Abstract
The recent surge in performance for image analysis of digitised pathology slides can largely be attributed to advances in deep learning. Deep models can be used to initially localise various structures in the tissue and hence facilitate the extraction of interpretable features for biomarker discovery. However, these models are typically trained for a single task and therefore scale poorly as we wish to adapt the model to an increasing number of different tasks. Also, supervised deep learning models are very data hungry and therefore rely on large amounts of training data to perform well. In this paper, we present a multi-task learning approach for segmentation and classification of nuclei, glands, lumina and different tissue regions that leverages data from multiple independent data sources. While ensuring that our tasks are aligned by the same tissue type and resolution, we enable meaningful simultaneous prediction with a single network. As a result of feature sharing, we also show that the learned representation can be used to improve the performance of additional tasks via transfer learning, including nuclear classification and signet ring cell detection. As part of this work, we train our Cerberus model on a large amount of data, consisting of over 600 thousand objects for segmentation and 440 thousand patches for classification. We use our approach to process 599 colorectal whole-slide images from TCGA, where we localise 377 million nuclei, 900 thousand glands, and 2.1 million lumina. We make this resource available to remove a major barrier in the development of explainable models for computational pathology.
Affiliation(s)
- Simon Graham
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, United Kingdom; Histofy Ltd, United Kingdom
- Quoc Dang Vu
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, United Kingdom
- Mostafa Jahanifar
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, United Kingdom
- Shan E Ahmed Raza
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, United Kingdom
- Fayyaz Minhas
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, United Kingdom
- David Snead
- Histofy Ltd, United Kingdom; Department of Pathology, University Hospitals Coventry & Warwickshire, United Kingdom
- Nasir Rajpoot
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, United Kingdom; Histofy Ltd, United Kingdom; Department of Pathology, University Hospitals Coventry & Warwickshire, United Kingdom
|
42
|
Computer-aided detection and prognosis of colorectal cancer on whole slide images using dual resolution deep learning. J Cancer Res Clin Oncol 2023; 149:91-101. [PMID: 36331654 DOI: 10.1007/s00432-022-04435-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2022] [Accepted: 10/18/2022] [Indexed: 11/06/2022]
Abstract
PURPOSE Rapid diagnosis and risk stratification can provide timely treatment for colorectal cancer (CRC) patients. Deep learning (DL) is not only used to identify tumor regions in histopathological images, but is also applied to predict survival and achieve risk stratification. However, most methods depend on regions of interest annotated by pathologists and ignore the global information in the image. METHODS A dual-resolution DL network based on weakly supervised learning (WDRNet) was proposed for CRC identification and prognosis. The proposed method was trained and validated on the dataset from The Cancer Genome Atlas (TCGA) and tested on the external dataset from the Affiliated Cancer Hospital and Institute of Guangzhou Medical University (ACHIGMU). RESULTS In the identification task, WDRNet accurately identified tumor images with an accuracy of 0.977 at slide level and 0.953 at patch level. In the prognosis task, WDRNet showed excellent prediction performance on both datasets, with concordance indices (C-index) of 0.716 ± 0.037 and 0.598 ± 0.024, respectively. Moreover, the results of risk stratification were statistically significant in univariate analysis (p < 0.001, HR = 7.892 in TCGA-CRC, and p = 0.009, HR = 1.718 in ACHIGMU) and multivariate analysis (p < 0.001, HR = 5.914 in TCGA-CRC, and p = 0.025, HR = 1.674 in ACHIGMU). CONCLUSIONS We developed a weakly supervised dual-resolution DL network to achieve precise identification and prognosis of CRC patients, which will assist doctors in diagnosis on histopathological images and stratify patients to select appropriate therapeutic schedules.
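The concordance index (C-index) reported above is the fraction of comparable patient pairs whose predicted risks order the same way as their observed survival times. A plain-Python sketch with an invented toy cohort:

```python
def concordance_index(times, events, risks):
    """C-index: fraction of comparable patient pairs whose predicted risks
    agree with the observed survival ordering (ties in risk count 0.5).
    events[i] == 1 means the event (death) was observed, 0 means censored."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # a pair is comparable only if i's event was observed first
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable

# toy cohort: shorter observed survival gets a higher predicted risk;
# the last patient is censored (event 0)
c = concordance_index(times=[2, 4, 6, 8],
                      events=[1, 1, 1, 0],
                      risks=[0.9, 0.7, 0.4, 0.1])
```

A C-index of 0.5 corresponds to random risk ordering and 1.0 to perfect ordering, which puts the reported 0.716 and 0.598 in context.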
|
43
|
Barmpoutis P, Waddingham W, Yuan J, Ross C, Kayhanian H, Stathaki T, Alexander DC, Jansen M. A digital pathology workflow for the segmentation and classification of gastric glands: Study of gastric atrophy and intestinal metaplasia cases. PLoS One 2022; 17:e0275232. [PMID: 36584163 PMCID: PMC9803139 DOI: 10.1371/journal.pone.0275232] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2022] [Accepted: 09/12/2022] [Indexed: 01/01/2023] Open
Abstract
Gastric cancer is one of the most frequent causes of cancer-related deaths worldwide. Gastric atrophy (GA) and gastric intestinal metaplasia (IM) of the mucosa of the stomach have been found to increase the risk of gastric cancer and are considered precancerous lesions. Therefore, the early detection of GA and IM may have a valuable role in histopathological risk assessment. However, GA and IM are difficult to confirm endoscopically and, following the Sydney protocol, their diagnosis depends on the analysis of glandular morphology and on the identification of at least one well-defined goblet cell in a set of hematoxylin and eosin (H&E)-stained biopsy samples. To this end, the precise segmentation and classification of glands from histological images play an important role in the diagnostic confirmation of GA and IM. In this paper, we propose an end-to-end digital pathology workflow for gastric gland segmentation and classification for the analysis of gastric tissues. The proposed GAGL-VTNet initially extracts both global and local features, combining multi-scale feature maps for the segmentation of glands, and subsequently adopts a vision transformer that exploits the visual dependencies of the segmented glands for their classification. For the analysis of gastric tissues, segmentation of mucosa is performed through an unsupervised model combining energy minimization and a U-Net model. Then, features of the segmented glands and mucosa are extracted and analyzed. To evaluate the efficiency of the proposed methodology, we created the GAGL dataset consisting of 85 WSIs collected from 20 patients. The results demonstrate significant differences in the extracted features between normal, GA, and IM cases. The proposed approach for gland and mucosa segmentation achieves object Dice scores equal to 0.908 and 0.967 respectively, while for the classification of glands it achieves an F1 score equal to 0.94, showing great potential for the automated quantification and analysis of gastric biopsies.
Affiliation(s)
- Panagiotis Barmpoutis
- Department of Computer Science, Centre for Medical Image Computing, University College London, London, United Kingdom
- Department of Pathology, UCL Cancer Institute, University College London, London, United Kingdom
- William Waddingham
- Department of Pathology, UCL Cancer Institute, University College London, London, United Kingdom
- Jing Yuan
- Department of Electrical and Electronic Engineering, Imperial College London, London, United Kingdom
- Christopher Ross
- Department of Pathology, UCL Cancer Institute, University College London, London, United Kingdom
- Hamzeh Kayhanian
- Department of Pathology, UCL Cancer Institute, University College London, London, United Kingdom
- Tania Stathaki
- Department of Electrical and Electronic Engineering, Imperial College London, London, United Kingdom
- Daniel C. Alexander
- Department of Computer Science, Centre for Medical Image Computing, University College London, London, United Kingdom
- Marnix Jansen
- Department of Pathology, UCL Cancer Institute, University College London, London, United Kingdom
|
44
|
Karri M, Annavarapu CSR, Acharya UR. Explainable multi-module semantic guided attention based network for medical image segmentation. Comput Biol Med 2022; 151:106231. [PMID: 36335811 DOI: 10.1016/j.compbiomed.2022.106231] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2022] [Revised: 10/05/2022] [Accepted: 10/11/2022] [Indexed: 12/27/2022]
Abstract
Automated segmentation of medical images is crucial for disease diagnosis and treatment planning. Medical image segmentation has been improved by convolutional neural network (CNN) models. Unfortunately, they are still limited in scenarios where the segmentation target has large variations in size, boundary, position, and shape. Moreover, current CNNs have low explainability, restricting their use in clinical decisions. In this paper, we make substantial use of attention mechanisms in a CNN model and present an explainable multi-module semantic guided attention based network (MSGA-Net) for explainable and highly accurate medical image segmentation, which considers the most significant spatial regions, boundaries, scales, and channels. Specifically, we present a multi-scale attention module (MSA) to extract the most salient features at various scales from medical images. Then, we propose a semantic region-guided attention mechanism (SRGA) including location attention (LAM), channel-wise attention (CWA), and edge attention (EA) modules to extract the most important spatial, channel-wise, and boundary-related features for regions of interest. Moreover, we present a sequence of fine-tuning steps with the SRGA module to gradually weight the significance of regions of interest while simultaneously reducing the noise. In this work, we experimented with three types of medical images: dermoscopic images (HAM10000 dataset), multi-organ CT images (CHAOS 2019 dataset), and brain tumor MRI images (BraTS 2020 dataset). Extensive experiments on all types of medical images revealed that the proposed MSGA-Net substantially improves overall performance on all metrics over existing models. Moreover, displaying the attention feature maps provides more explainability than state-of-the-art models.
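As an illustration of channel-wise attention in general — not the paper's exact CWA module — a squeeze-and-excitation-style gate pools each channel, passes the result through a small bottleneck, and re-weights the channels; all shapes and weights below are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation-style channel gate: global-average-pool each
    channel, pass through a small bottleneck MLP, then re-weight channels."""
    squeeze = feat.mean(axis=(1, 2))              # (C,) channel descriptors
    hidden = np.maximum(squeeze @ w1, 0.0)        # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(hidden @ w2)))   # per-channel gate in (0, 1)
    return feat * gate[:, None, None]             # scale each channel

C, H, W, R = 8, 4, 4, 2                           # channels, size, bottleneck
feat = rng.standard_normal((C, H, W))
w1 = 0.1 * rng.standard_normal((C, R))
w2 = 0.1 * rng.standard_normal((R, C))
out = channel_attention(feat, w1, w2)
```

Spatial and edge attention modules follow the same pattern but compute their gates over spatial locations or boundary maps instead of channels.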
Affiliation(s)
- Meghana Karri
- Computer Science and Engineering Department, Indian Institute of Technology (ISM), Dhanbad, 826004, Jharkhand, India
- U Rajendra Acharya
- Ngee Ann Polytechnic, Department of Electronics and Computer Engineering, 599489, Singapore; Department of Biomedical Engineering, School of Science and Technology, SUSS University, Singapore; Department of Biomedical Informatics and Medical Engineering, Asia University, Taichung, Taiwan
|
45
|
Wen T, Tong B, Liu Y, Pan T, Du Y, Chen Y, Zhang S. Review of research on the instance segmentation of cell images. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 227:107211. [PMID: 36356384 DOI: 10.1016/j.cmpb.2022.107211] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/02/2022] [Revised: 10/27/2022] [Accepted: 10/30/2022] [Indexed: 06/16/2023]
Abstract
The instance segmentation of cell images is the basis for conducting cell research and is of great importance for the study and diagnosis of pathologies. To analyze the current situation and future developments in the field of cell image instance segmentation, this paper first systematically reviews traditional and deep-learning-based image segmentation methods. Then, deep-learning-based cell image segmentation methods are analyzed and summarized from three aspects: weak label extraction for cell images, cell image instance segmentation, and segmentation of internal cell structures. Finally, cell image instance segmentation is summarized, and challenges and future developments are discussed.
Affiliation(s)
- Tingxi Wen
- College of Engineering, Huaqiao University, Quanzhou 362021, China
- Binbin Tong
- College of Engineering, Huaqiao University, Quanzhou 362021, China
- Yu Liu
- College of Engineering, Huaqiao University, Quanzhou 362021, China
- Ting Pan
- College of Engineering, Huaqiao University, Quanzhou 362021, China
- Yu Du
- College of Engineering, Huaqiao University, Quanzhou 362021, China
- Yuping Chen
- College of Engineering, Huaqiao University, Quanzhou 362021, China
- Shanshan Zhang
- College of Engineering, Huaqiao University, Quanzhou 362021, China
|
46
|
Yang L, Martin JA, Brouillette MJ, Buckwalter JA, Goetz JE. Objective evaluation of chondrocyte density & cloning after joint injury using convolutional neural networks. J Orthop Res 2022; 40:2609-2619. [PMID: 35171527 PMCID: PMC9378771 DOI: 10.1002/jor.25295] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/24/2021] [Revised: 12/01/2021] [Accepted: 02/02/2022] [Indexed: 02/04/2023]
Abstract
Variations in chondrocyte density and organization in cartilage histology sections are associated with osteoarthritis progression. Rapid, accurate quantification of these two features can facilitate the evaluation of cartilage health and advance the understanding of their significance. The goal of this work was to adapt deep-learning-based methods to detect articular chondrocytes and chondrocyte clones in safranin-O-stained cartilage to evaluate chondrocyte cellularity and organization. The U-net and "you-only-look-once" (YOLO) models were trained and validated for identifying chondrocytes and chondrocyte clones, respectively. Validated models were then used to quantify chondrocyte and clone density in talar cartilage from Yucatan minipigs sacrificed 1 week, 3, 6, and 12 months after fixation of an intra-articular fracture of the hock joint. There was excellent/good agreement between expert researchers and the developed models in identifying chondrocytes/clones (U-net: R² = 0.93, y = 0.90x − 0.69, median F1 score: 0.87; YOLO: R² = 0.79, y = 0.95x, median F1 score: 0.67). Average chondrocyte density increased 1 week after fracture (from 774 to 856 cells/mm²), decreased substantially 3 months after fracture (610 cells/mm²), and slowly increased 6 and 12 months after fracture (638 and 683 cells/mm², respectively). Average detected clone density 3, 6, and 12 months after fracture (11, 11, and 9 clones/mm²) was higher than the 4-5 clones/mm² detected in normal tissue or 1 week after fracture, with local increases in clone density that varied across the joint surface with time. The accurate evaluation of cartilage cellularity and organization provided by this deep learning approach will increase the objectivity of cartilage injury and regeneration assessments.
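Detection agreement of the kind summarized by the F1 scores above is typically computed by matching predicted cell centres to ground-truth centres within a distance tolerance. A greedy-matching sketch; the tolerance and coordinates are illustrative, not the study's evaluation protocol:

```python
def match_detections(pred, gt, max_dist):
    """Greedily match predicted cell centres to ground-truth centres
    within max_dist pixels; returns (tp, fp, fn, f1)."""
    unmatched = list(range(len(gt)))
    tp = 0
    for px, py in pred:
        best, best_d = None, max_dist
        for k in unmatched:
            gx, gy = gt[k]
            d = ((px - gx) ** 2 + (py - gy) ** 2) ** 0.5
            if d <= best_d:
                best, best_d = k, d
        if best is not None:       # closest unclaimed ground-truth cell
            unmatched.remove(best)
            tp += 1
    fp, fn = len(pred) - tp, len(unmatched)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return tp, fp, fn, f1

# two true hits plus one spurious detection far from any ground-truth cell
tp, fp, fn, f1 = match_detections(
    pred=[(0, 0), (10, 10), (50, 50)],
    gt=[(1, 0), (10, 11)],
    max_dist=3)
```

Density figures such as cells/mm² then follow by dividing the matched count by the physical tissue area of the section.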
Affiliation(s)
- Linjun Yang
- Department of Orthopedics and Rehabilitation, University of Iowa, Iowa City, Iowa, USA
- Department of Biomedical Engineering, University of Iowa, Iowa City, Iowa, USA
- James A. Martin
- Department of Orthopedics and Rehabilitation, University of Iowa, Iowa City, Iowa, USA
- Department of Biomedical Engineering, University of Iowa, Iowa City, Iowa, USA
- Marc J. Brouillette
- Department of Orthopedics and Rehabilitation, University of Iowa, Iowa City, Iowa, USA
- Jessica E. Goetz
- Department of Orthopedics and Rehabilitation, University of Iowa, Iowa City, Iowa, USA
- Department of Biomedical Engineering, University of Iowa, Iowa City, Iowa, USA
|
47
|
Wu H, Souedet N, Jan C, Clouchoux C, Delzescaux T. A general deep learning framework for neuron instance segmentation based on Efficient UNet and morphological post-processing. Comput Biol Med 2022; 150:106180. [PMID: 36244305 DOI: 10.1016/j.compbiomed.2022.106180] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2022] [Revised: 09/21/2022] [Accepted: 10/01/2022] [Indexed: 11/03/2022]
Abstract
Recent studies have demonstrated the superiority of deep learning in medical image analysis, especially in cell instance segmentation, a fundamental step for many biological studies. However, the excellent performance of neural networks requires training on large, unbiased datasets and annotations, which are labor-intensive and expertise-demanding. This paper presents an end-to-end framework to automatically detect and segment NeuN-stained neuronal cells in histological images using only point annotations. Unlike traditional nuclei segmentation with point annotation, we propose using point annotation and binary segmentation to synthesize pixel-level annotations. The synthetic masks are used as the ground truth to train the neural network, a U-Net-like architecture with a state-of-the-art network, EfficientNet, as the encoder. Validation results show the superiority of our model compared to other recent methods. In addition, we investigated multiple post-processing schemes and propose an original strategy to convert the probability map into segmented instances using ultimate erosion and dynamic reconstruction. This approach is easy to configure and outperforms other classical post-processing techniques. This work aims to develop a robust and efficient framework for analyzing neurons using optical microscopy data, which can be used in preclinical biological studies and, more specifically, in the context of neurodegenerative diseases. Code is available at: https://github.com/MIRCen/NeuronInstanceSeg.
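The paper's full post-processing (ultimate erosion plus dynamic reconstruction) is not reproduced here, but the core idea — splitting touching instances by eroding a thresholded probability map into separate markers — can be sketched in plain NumPy with toy data:

```python
import numpy as np

def erode(mask):
    """One step of 4-connected binary erosion (borders erode away)."""
    m = mask.astype(bool)
    out = m.copy()
    out[1:, :] &= m[:-1, :]    # up neighbour
    out[:-1, :] &= m[1:, :]    # down neighbour
    out[:, 1:] &= m[:, :-1]    # left neighbour
    out[:, :-1] &= m[:, 1:]    # right neighbour
    out[0, :] = out[-1, :] = False
    out[:, 0] = out[:, -1] = False
    return out

def label(mask):
    """4-connected component labelling via flood fill; returns (labels, n)."""
    lab = np.zeros(mask.shape, dtype=int)
    n = 0
    h, w = mask.shape
    for i in range(h):
        for j in range(w):
            if mask[i, j] and lab[i, j] == 0:
                n += 1
                stack = [(i, j)]
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and mask[y, x] and lab[y, x] == 0:
                        lab[y, x] = n
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return lab, n

# two square "nuclei" joined by a one-pixel bridge in a toy probability map
prob = np.zeros((7, 13))
prob[1:6, 1:6] = prob[1:6, 7:12] = 0.9
prob[3, 6] = 0.9
fg = prob >= 0.5
_, n_merged = label(fg)           # touching nuclei collapse into one blob
_, n_markers = label(erode(fg))   # erosion separates them into two markers
```

In the paper, erosion is carried to its "ultimate" point per instance and the markers are then grown back (dynamic reconstruction) to recover full instance masks.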
Affiliation(s)
- Huaqian Wu
- CEA-CNRS-UMR 9199, MIRCen, Fontenay-aux-Roses, France
- Caroline Jan
- CEA-CNRS-UMR 9199, MIRCen, Fontenay-aux-Roses, France
|
48
|
Boutillon A, Borotikar B, Burdin V, Conze PH. Multi-structure bone segmentation in pediatric MR images with combined regularization from shape priors and adversarial network. Artif Intell Med 2022; 132:102364. [DOI: 10.1016/j.artmed.2022.102364] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2021] [Revised: 05/13/2022] [Accepted: 07/10/2022] [Indexed: 11/02/2022]
|
49
|
Syed AH, Khan T. Evolution of research trends in artificial intelligence for breast cancer diagnosis and prognosis over the past two decades: A bibliometric analysis. Front Oncol 2022; 12:854927. [PMID: 36267967 PMCID: PMC9578338 DOI: 10.3389/fonc.2022.854927] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2022] [Accepted: 08/30/2022] [Indexed: 01/27/2023] Open
Abstract
Objective: In recent years, among the available tools, the concurrent application of Artificial Intelligence (AI) has improved the diagnostic performance of breast cancer screening. In this context, the present study intends to provide a comprehensive overview of the evolution of AI for breast cancer diagnosis and prognosis research using bibliometric analysis. Methodology: Relevant peer-reviewed research articles published from 2000 to 2021 were downloaded from the Scopus and Web of Science (WOS) databases and then quantitatively analyzed and visualized using Bibliometrix (R package). Finally, open challenge areas were identified for future research work. Results: The present study revealed that the number of studies published on AI for breast cancer detection and survival prediction increased from 12 to 546 between 2000 and 2021. The United States of America (USA), China, and India are the most productive in terms of publications in this field. Furthermore, the USA leads in total citations, whereas Hungary and Holland take the lead positions in average citations per year. Wang J is the most productive author, and Zhan J is the most relevant author in this field. Stanford University in the USA is the most relevant affiliation by the number of published articles. The top 10 most relevant sources are Q1 journals, with PLOS ONE and Computers in Biology and Medicine being the leading journals in this field. The most trending topics related to our study, transfer learning and deep learning, were identified. Conclusion: The present findings provide insight and research directions for policymakers and academic researchers for future collaboration and research in AI for breast cancer patients.
Affiliation(s)
- Asif Hassan Syed
- Department of Computer Science, Faculty of Computing and Information Technology Rabigh (FCITR), King Abdulaziz University, Jeddah, Saudi Arabia
- Tabrej Khan
- Department of Information Systems, Faculty of Computing and Information Technology Rabigh (FCITR), King Abdulaziz University, Jeddah, Saudi Arabia
|
50
|
He W, Liu T, Han Y, Ming W, Du J, Liu Y, Yang Y, Wang L, Jiang Z, Wang Y, Yuan J, Cao C. A review: The detection of cancer cells in histopathology based on machine vision. Comput Biol Med 2022; 146:105636. [PMID: 35751182 DOI: 10.1016/j.compbiomed.2022.105636] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/28/2021] [Revised: 04/04/2022] [Accepted: 04/28/2022] [Indexed: 12/24/2022]
Abstract
Machine vision is employed in defect detection, size measurement, pattern recognition, image fusion, target tracking, and 3D reconstruction. Traditional cancer detection is dominated by manual inspection, which is time- and labor-intensive and relies heavily on the pathologist's skill and work experience. Such manual approaches make it difficult to pass on domain knowledge and cannot keep pace with the rapid development of medical care. Machine vision, by contrast, can iteratively learn and update the domain knowledge of cancer cell pathology detection to achieve automated, high-precision, and consistent detection. Consequently, this paper reviews the use of machine vision to detect cancer cells in histopathology images, together with the benefits and drawbacks of various detection approaches. First, we review the application of image preprocessing and image segmentation to the detection of cancer cells in histopathology, and compare the benefits and drawbacks of different algorithms. Second, we review research progress on shape, color, texture, and other features suited to the characteristics of histopathological cancer cell images. Furthermore, for the classification of histopathological cancer cell images, we compare and analyze the benefits and drawbacks of traditional machine vision approaches and deep learning methods. Finally, we discuss this body of research and forecast the expected development trends as a guide for future work.
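The classic pipeline this review surveys (preprocessing, segmentation, hand-crafted features, classification) can be sketched minimally. Everything below is an illustrative assumption rather than code from the review: the toy image, the Otsu-style thresholding, and the single foreground-area feature.

```python
# Minimal sketch of a classical machine-vision pipeline for nucleus detection:
# grayscale image -> threshold segmentation -> hand-crafted shape feature.
import numpy as np

def otsu_threshold(gray):
    """Return the Otsu threshold for a grayscale image with values in [0, 255]."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = gray.size
    cum = np.cumsum(hist)                       # pixels at or below each level
    cum_mean = np.cumsum(hist * np.arange(256)) # cumulative intensity sum
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0 = cum[t - 1]          # background pixel count
        w1 = total - w0          # foreground pixel count
        if w0 == 0 or w1 == 0:
            continue
        mu0 = cum_mean[t - 1] / w0
        mu1 = (cum_mean[-1] - cum_mean[t - 1]) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2        # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def segment_nuclei(gray):
    """Binary mask of dark (nucleus-like) regions via Otsu thresholding."""
    return gray < otsu_threshold(gray)

def shape_features(mask):
    """A single hand-crafted feature: foreground area fraction."""
    return mask.mean()

# Toy "image": one dark square (a nucleus stand-in) on a bright background.
img = np.full((64, 64), 200, dtype=np.uint8)
img[20:40, 20:40] = 50
mask = segment_nuclei(img)
print(round(shape_features(mask), 3))  # fraction of pixels segmented as nucleus
```

In practice the feature step would use many descriptors (shape, color, texture) feeding a classifier, which is the part of the pipeline the review contrasts with end-to-end deep learning.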
Affiliation(s)
- Wenbin He
- Henan Key Lab of Intelligent Manufacturing of Mechanical Equipment, Zhengzhou University of Light Industry, Zhengzhou, 450002, China
- Ting Liu
- Henan Key Lab of Intelligent Manufacturing of Mechanical Equipment, Zhengzhou University of Light Industry, Zhengzhou, 450002, China
- Yongjie Han
- Henan Key Lab of Intelligent Manufacturing of Mechanical Equipment, Zhengzhou University of Light Industry, Zhengzhou, 450002, China
- Wuyi Ming
- Henan Key Lab of Intelligent Manufacturing of Mechanical Equipment, Zhengzhou University of Light Industry, Zhengzhou, 450002, China; Guangdong HUST Industrial Technology Research Institute, Guangdong Provincial Key Laboratory of Digital Manufacturing Equipment, Dongguan, 523808, China
- Jinguang Du
- Henan Key Lab of Intelligent Manufacturing of Mechanical Equipment, Zhengzhou University of Light Industry, Zhengzhou, 450002, China
- Yinxia Liu
- Laboratory Medicine of Dongguan Kanghua Hospital, Dongguan, 523808, China
- Yuan Yang
- Guangdong Provincial Hospital of Chinese Medicine, Guangzhou, 510120, China
- Leijie Wang
- School of Mechanical Engineering, Dongguan University of Technology, Dongguan, 523808, China
- Zhiwen Jiang
- Henan Key Lab of Intelligent Manufacturing of Mechanical Equipment, Zhengzhou University of Light Industry, Zhengzhou, 450002, China
- Yongqiang Wang
- Zhengzhou Coal Mining Machinery Group Co., Ltd, Zhengzhou, 450016, China
- Jie Yuan
- Henan Key Lab of Intelligent Manufacturing of Mechanical Equipment, Zhengzhou University of Light Industry, Zhengzhou, 450002, China
- Chen Cao
- Henan Key Lab of Intelligent Manufacturing of Mechanical Equipment, Zhengzhou University of Light Industry, Zhengzhou, 450002, China; Guangdong HUST Industrial Technology Research Institute, Guangdong Provincial Key Laboratory of Digital Manufacturing Equipment, Dongguan, 523808, China
|