51
Paré A, Charbonnier B, Veziers J, Vignes C, Dutilleul M, De Pinieux G, Laure B, Bossard A, Saucet-Zerbib A, Touzot-Jourde G, Weiss P, Corre P, Gauthier O, Marchat D. Standardized and axially vascularized calcium phosphate-based implants for segmental mandibular defects: A promising proof of concept. Acta Biomater 2022; 154:626-640. [PMID: 36210043] [DOI: 10.1016/j.actbio.2022.09.071] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 04/20/2022] [Revised: 09/09/2022] [Accepted: 09/28/2022] [Indexed: 12/14/2022]
Abstract
The reconstruction of massive segmental mandibular bone defects (SMDs) remains challenging even today, the current gold standard in human clinics being vascularized bone transplantation (VBT). As an alternative to this onerous approach, bone tissue engineering strategies have been widely investigated. However, they have shown limited clinical success, particularly because they fail to address the essential problem of rapid vascularization of the implant. Although routinely used in clinics, the insertion of intrinsic vascularization into bioengineered constructs for the rapid formation of a feeding angiosome remains uncommon. In a clinically relevant model (sheep), a custom calcium phosphate-based bioceramic soaked with autologous bone marrow and perfused by an arteriovenous loop was tested for the regeneration of a massive SMD and compared to VBT (the clinical standard). The animals did not tolerate the VBT treatment well, and this treatment arm was discontinued 2 weeks after surgery for ethical and animal welfare reasons. SMD regeneration was successful with the custom vascularized bone construct. Implants were well osseointegrated and vascularized after only 3 months of implantation and were totally entrapped in lamellar bone after 12 months; a healthy yellow bone marrow filled the remaining space. STATEMENT OF SIGNIFICANCE: Regenerative medicine struggles with the generation of large functional bone volumes. Among these, segmental mandibular defects are particularly challenging to restore. The standard of care, based on free bone flaps, still has ethical and technical drawbacks (e.g., donor site morbidity). Modern engineering technologies (e.g., 3D printing, the digital chain) were combined with relevant surgical techniques to provide a preclinical proof of concept investigating the benefits of such a strategy in the bone regeneration field.
The results proved that a synthetic, biologics-free approach is able to regenerate a critical-size segmental mandibular defect of 15 cm3 in a relevant preclinical model mimicking real-life scenarios of segmental mandibular defects, with full physiological regeneration of the defect after 12 months.
Affiliation(s)
- Arnaud Paré
- INSERM, U 1229, Laboratory of Regenerative Medicine and Skeleton, RMeS, Nantes Université, 1 Place Alexis Ricordeau, Nantes 44042, France; Department of Maxillofacial and Plastic Surgery, Burn Unit, University Hospital of Tours, Trousseau Hospital, Avenue de la République, Chambray lès Tours 37170, France
- Baptiste Charbonnier
- INSERM, U 1229, Laboratory of Regenerative Medicine and Skeleton, RMeS, Nantes Université, 1 Place Alexis Ricordeau, Nantes 44042, France; Mines Saint-Étienne, Univ Jean Monnet, INSERM, U 1059 Sainbiose, 42023, Saint-Étienne, France
- Joëlle Veziers
- INSERM, U 1229, Laboratory of Regenerative Medicine and Skeleton, RMeS, Nantes Université, 1 Place Alexis Ricordeau, Nantes 44042, France
- Caroline Vignes
- INSERM, U 1229, Laboratory of Regenerative Medicine and Skeleton, RMeS, Nantes Université, 1 Place Alexis Ricordeau, Nantes 44042, France
- Maeva Dutilleul
- INSERM, U 1229, Laboratory of Regenerative Medicine and Skeleton, RMeS, Nantes Université, 1 Place Alexis Ricordeau, Nantes 44042, France
- Gonzague De Pinieux
- Department of Pathology, University Hospital of Tours, Trousseau Hospital, Avenue de la République, Chambray lès Tours 37170, France
- Boris Laure
- Department of Maxillofacial and Plastic Surgery, Burn Unit, University Hospital of Tours, Trousseau Hospital, Avenue de la République, Chambray lès Tours 37170, France
- Adeline Bossard
- ONIRIS Nantes-Atlantic College of Veterinary Medicine, Research Center of Preclinical Investigation (CRIP), Site de la Chantrerie, 101 route de Gachet, Nantes 44307, France
- Annaëlle Saucet-Zerbib
- ONIRIS Nantes-Atlantic College of Veterinary Medicine, Research Center of Preclinical Investigation (CRIP), Site de la Chantrerie, 101 route de Gachet, Nantes 44307, France
- Gwenola Touzot-Jourde
- INSERM, U 1229, Laboratory of Regenerative Medicine and Skeleton, RMeS, Nantes Université, 1 Place Alexis Ricordeau, Nantes 44042, France; ONIRIS Nantes-Atlantic College of Veterinary Medicine, Research Center of Preclinical Investigation (CRIP), Site de la Chantrerie, 101 route de Gachet, Nantes 44307, France
- Pierre Weiss
- INSERM, U 1229, Laboratory of Regenerative Medicine and Skeleton, RMeS, Nantes Université, 1 Place Alexis Ricordeau, Nantes 44042, France
- Pierre Corre
- INSERM, U 1229, Laboratory of Regenerative Medicine and Skeleton, RMeS, Nantes Université, 1 Place Alexis Ricordeau, Nantes 44042, France; Clinique de Stomatologie et Chirurgie Maxillo-Faciale, Nantes University Hospital, 1 Place Alexis Ricordeau, Nantes 44042, France
- Olivier Gauthier
- INSERM, U 1229, Laboratory of Regenerative Medicine and Skeleton, RMeS, Nantes Université, 1 Place Alexis Ricordeau, Nantes 44042, France; ONIRIS Nantes-Atlantic College of Veterinary Medicine, Research Center of Preclinical Investigation (CRIP), Site de la Chantrerie, 101 route de Gachet, Nantes 44307, France
- David Marchat
- Mines Saint-Étienne, Univ Jean Monnet, INSERM, U 1059 Sainbiose, 42023, Saint-Étienne, France
52
Deep learning model to predict Epstein-Barr virus associated gastric cancer in histology. Sci Rep 2022; 12:18466. [PMID: 36323712] [PMCID: PMC9630260] [DOI: 10.1038/s41598-022-22731-x] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Received: 03/30/2022] [Accepted: 10/18/2022] [Indexed: 11/20/2022] [Open Access]
Abstract
The detection of Epstein-Barr virus (EBV) in gastric cancer patients is crucial for clinical decision making, as it is related to specific treatment responses and prognoses. Despite its importance, limited medical resources preclude universal EBV testing. Herein, we propose a deep learning-based EBV prediction method using H&E-stained whole-slide images (WSIs). Our model was developed using 319 H&E-stained WSIs (26 EBV positive; TCGA dataset) from The Cancer Genome Atlas and 108 WSIs (8 EBV positive; ISH dataset) from an independent institution. Our deep learning model, EBVNet, consists of two sequential components: a tumor classifier and an EBV classifier. We visualized the representations learned by the classifiers using UMAP. We externally validated the model using 60 additional WSIs (7 EBV positive; HGH dataset) and compared its performance with that of four pathologists. EBVNet achieved an AUPRC of 0.65, whereas the four pathologists yielded a mean AUPRC of 0.41. Moreover, EBVNet achieved a negative predictive value, sensitivity, specificity, precision, and F1-score of 0.98, 0.86, 0.92, 0.60, and 0.71, respectively. The proposed model is expected to help prescreen patients for confirmatory testing, potentially saving test-related cost and labor.
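The two-stage cascade described in the abstract (a tumor classifier that filters patches, followed by an EBV classifier whose patch scores are aggregated to a slide-level call) can be sketched as follows. This is a hypothetical illustration, not the authors' code: both classifiers are stand-in stubs, and mean-pooling of patch scores is an assumed aggregation rule.

```python
# Hypothetical sketch of a two-stage slide-level cascade: stage one keeps
# only tumor patches, stage two scores them for EBV, and the slide-level
# status is the mean of the patch scores. Classifiers here are stubs.

def tumor_classifier(patch):
    # stub: stands in for a trained tumor-vs-non-tumor patch classifier
    return patch["tissue"] == "tumor"

def ebv_classifier(patch):
    # stub: stands in for a trained patch-level EBV probability model
    return patch["ebv_score"]

def predict_slide(patches, threshold=0.5):
    """Aggregate patch-level EBV scores into a slide-level call."""
    tumor_patches = [p for p in patches if tumor_classifier(p)]
    if not tumor_patches:
        return 0.0, False  # no tumor tissue detected on the slide
    slide_score = sum(ebv_classifier(p) for p in tumor_patches) / len(tumor_patches)
    return slide_score, slide_score >= threshold

slide = [
    {"tissue": "tumor",  "ebv_score": 0.9},
    {"tissue": "tumor",  "ebv_score": 0.7},
    {"tissue": "stroma", "ebv_score": 0.1},  # excluded by stage one
]
score, is_ebv_positive = predict_slide(slide)
print(round(score, 2), is_ebv_positive)
```

The point of the cascade is that the EBV classifier never sees non-tumor tissue, which keeps its decision focused on the regions where the signal can occur.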
53
Dabass M, Vashisth S, Vig R. MTU: A multi-tasking U-net with hybrid convolutional learning and attention modules for cancer classification and gland segmentation in colon histopathological images. Comput Biol Med 2022; 150:106095. [PMID: 36179516] [DOI: 10.1016/j.compbiomed.2022.106095] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 05/17/2022] [Revised: 08/31/2022] [Accepted: 09/10/2022] [Indexed: 11/17/2022]
Abstract
A multi-tasking deep U-Net-based model with clinically comparable performance is demonstrated in this paper. It is intended to provide clinical gland morphometric information and cancer grade classification as referential opinions for pathologists, in order to reduce human error. The model offers enhanced feature learning that aids the extraction of potent multi-scale features, effective semantic gap recovery during feature concatenation, and interception of resolution degradation and vanishing gradient problems, while performing moderate computation. It integrates three novel structural components into the traditional U-Net architecture: Hybrid Convolutional Learning Units in the encoder and decoder, Attention Learning Units in the skip connections, and a Multi-Scalar Dilated Transitional Unit as the transitional layer. These units combine multi-level convolutional learning, through conventional, atrous, residual, depth-wise, and point-wise convolutions, with target-specific attention learning and an enlarged effective receptive field. Pre-processing techniques such as patch sampling, augmentation (color and morphological), and stain normalization are employed to enhance generalizability. To build network invariance to digital variability, exhaustive experiments are conducted using three public datasets (Colorectal Adenocarcinoma Gland (CRAG), Gland Segmentation (GlaS) challenge, and Lung Colon-25000 (LC-25K)), and robustness is then verified using an in-house private dataset of Hospital Colon (HosC). For cancer classification, the proposed model achieved Accuracy (CRAG (95%), GlaS (97.5%), LC-25K (99.97%), HosC (99.45%)), Precision (CRAG (0.9678), GlaS (0.9768), LC-25K (1), HosC (1)), F1-score (CRAG (0.968), GlaS (0.977), LC-25K (0.9997), HosC (0.9965)), and Recall (CRAG (0.9677), GlaS (0.9767), LC-25K (0.9994), HosC (0.9931)).
For gland detection and segmentation, the proposed model achieved competitive results: F1-score (CRAG (0.924), GlaS (Test A (0.949), Test B (0.918)), LC-25K (0.916), HosC (0.959)); Object-Dice Index (CRAG (0.959), GlaS (Test A (0.956), Test B (0.909)), LC-25K (0.929), HosC (0.922)); and Object-Hausdorff Distance (CRAG (90.47), GlaS (Test A (23.17), Test B (71.53)), LC-25K (96.28), HosC (85.45)). In addition, activation mappings testing the interpretability of the classification decision-making process are reported using Local Interpretable Model-Agnostic Explanations, Occlusion Sensitivity, and Gradient-Weighted Class Activation Mappings. This provides further evidence of the model's ability to learn, without any prerequisite annotations, patterns comparable to those considered relevant by pathologists. These activation mapping visualizations were evaluated by proficient pathologists, who assigned them a class-path validation score of (CRAG (9.31), GlaS (9.25), LC-25K (9.05), HosC (9.85)). Furthermore, a seg-path validation score of (GlaS (Test A (9.40), Test B (9.25)), CRAG (9.27), LC-25K (9.01), HosC (9.19)) given by multiple pathologists is included for the final segmented outcomes to substantiate their clinical relevance and suitability at the clinical level. The proposed model will aid pathologists in formulating an accurate diagnosis by providing a referential opinion during morphology assessment of histopathology images, reducing unintentional human error in cancer diagnosis and consequently improving patient survival.
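The "enlarged effective receptive field" that atrous (dilated) convolutions provide, mentioned in the abstract above, follows from a standard receptive-field recurrence. A minimal sketch (not tied to the authors' exact architecture) that computes it:

```python
def receptive_field(layers):
    """Receptive field of a stack of convolutions.

    layers: list of (kernel, stride, dilation) tuples, input to output.
    Uses the standard recurrence: each layer adds (effective_kernel - 1)
    times the accumulated stride ("jump") to the receptive field.
    """
    rf, jump = 1, 1
    for kernel, stride, dilation in layers:
        effective_kernel = dilation * (kernel - 1) + 1
        rf += (effective_kernel - 1) * jump
        jump *= stride
    return rf

# three stride-1 3x3 convolutions: receptive field 7
print(receptive_field([(3, 1, 1)] * 3))                    # → 7
# same depth and parameter count, dilations 1, 2, 4: receptive field 15
print(receptive_field([(3, 1, 1), (3, 1, 2), (3, 1, 4)]))  # → 15
```

This is why dilated convolutions are attractive for gland segmentation: context grows roughly exponentially with depth at no extra parameter cost.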
Affiliation(s)
- Manju Dabass
- EECE Department, The NorthCap University, Gurugram, 122017, India
- Sharda Vashisth
- EECE Department, The NorthCap University, Gurugram, 122017, India
- Rekha Vig
- EECE Department, The NorthCap University, Gurugram, 122017, India
54
Lou J, Xu J, Zhang Y, Sun Y, Fang A, Liu J, Mur LAJ, Ji B. PPsNet: An improved deep learning model for microsatellite instability high prediction in colorectal cancer from whole slide images. Comput Methods Programs Biomed 2022; 225:107095. [PMID: 36057226] [DOI: 10.1016/j.cmpb.2022.107095] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Received: 02/26/2022] [Revised: 08/18/2022] [Accepted: 08/26/2022] [Indexed: 06/15/2023]
Abstract
BACKGROUND AND OBJECTIVE Recent studies have shown that colorectal cancer (CRC) patients with high microsatellite instability (MSI-H) are more likely to benefit from immunotherapy. However, current MSI testing is not available to all patients because of a lack of equipment and trained personnel, as well as the high cost of the assay. Here, we developed an improved deep learning model to predict MSI-H in CRC from whole slide images (WSIs). METHODS We established the MSI-H prediction model in two stages: tumor detection and MSI classification. Previous works applied a fine-tuning strategy directly for tumor detection, ignoring the challenge of vanishing gradients caused by the large number of convolutional layers. We added auxiliary classifiers to intermediate layers of pre-trained models to help propagate gradients backward effectively. To predict MSI status, we constructed a pair-wise learning model with a synergic network, named parameter partial sharing network (PPsNet), in which partial parameters are shared between two deep convolutional neural networks (DCNNs). The proposed PPsNet contains fewer parameters and reduces the problems of intra-class variation and inter-class similarity. We validated the proposed model on a holdout test set and two external test sets. RESULTS 144 H&E-stained WSIs from 144 CRC patients (81 cases with MSI-H and 63 cases with MSI-L/MSS) were collected retrospectively from three hospitals. The experimental results indicate that deep-supervision-based fine-tuning outperforms both training from scratch and direct fine-tuning in almost all cases. The proposed PPsNet consistently achieves better accuracy and area under the receiver operating characteristic curve (AUC) than other solutions across four different neural network architectures on validation. The proposed method achieves clear improvements over other state-of-the-art methods on the validation dataset, with an accuracy of 87.28% and an AUC of 94.29%.
CONCLUSIONS The proposed method clearly increases model performance and yields better results than other methods. This work also demonstrates the feasibility of MSI-H prediction from digital pathology images using deep learning in an Asian population. We hope this model can serve as an auxiliary tool to identify CRC patients with MSI-H in a more time-saving and efficient manner.
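The parameter-saving effect of partial sharing between two branches (the PPsNet idea described above) can be illustrated with toy fully-connected layers. This is an assumption-laden sketch, not the paper's architecture: layer sizes are invented, and dense layers stand in for convolutions.

```python
# Hypothetical illustration of partial parameter sharing between two
# branches of a pair-wise network: the feature extractor ("backbone") is
# shared, only the classification heads are private to each branch.

def n_params(layer_sizes):
    """Weights + biases of a dense stack, e.g. [in, hidden, out]."""
    return sum(a * b + b for a, b in zip(layer_sizes, layer_sizes[1:]))

backbone = [256, 128, 64]   # shared between the two branches
head = [64, 2]              # private, one copy per branch

two_independent = 2 * (n_params(backbone) + n_params(head))
partially_shared = n_params(backbone) + 2 * n_params(head)
print(two_independent, partially_shared)   # → 82564 41412
```

Sharing the backbone roughly halves the parameter count here, which matches the abstract's claim that PPsNet "contains fewer parameters" than two fully independent DCNNs.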
Affiliation(s)
- Jingjiao Lou
- School of Control Science and Engineering, Shandong University, 17923 Jingshi Road, Jinan, Shandong 250061, PR China
- Jiawen Xu
- Department of Pathology, Shandong Provincial Hospital affiliated to Shandong First Medical University, Jinan, Shandong 250021, PR China
- Yuyan Zhang
- School of Control Science and Engineering, Shandong University, 17923 Jingshi Road, Jinan, Shandong 250061, PR China
- Yuhong Sun
- Department of Pathology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, Shandong 250117, PR China
- Aiju Fang
- Department of Pathology, Shandong Provincial Third Hospital, Shandong University, Jinan, Shandong 250132, PR China
- Jixuan Liu
- Department of Pathology, Shandong Provincial Hospital affiliated to Shandong First Medical University, Jinan, Shandong 250021, PR China
- Luis A J Mur
- Institute of Biological, Environmental and Rural Sciences (IBERS), Aberystwyth University, Aberystwyth, Wales SY23 3DZ, UK
- Bing Ji
- School of Control Science and Engineering, Shandong University, 17923 Jingshi Road, Jinan, Shandong 250061, PR China
55
A Hybrid Fusion Method Combining Spatial Image Filtering with Parallel Channel Network for Retinal Vessel Segmentation. Arab J Sci Eng 2022. [DOI: 10.1007/s13369-022-07311-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/25/2022]
56
Hameed Z, Garcia-Zapirain B, Aguirre JJ, Isaza-Ruget MA. Multiclass classification of breast cancer histopathology images using multilevel features of deep convolutional neural network. Sci Rep 2022; 12:15600. [PMID: 36114214] [PMCID: PMC9649689] [DOI: 10.1038/s41598-022-19278-2] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Received: 12/24/2021] [Accepted: 08/26/2022] [Indexed: 12/03/2022] [Open Access]
Abstract
Breast cancer is a common malignancy and a leading cause of cancer-related death in women worldwide. Its early diagnosis can significantly reduce morbidity and mortality rates in women. To this end, histopathological diagnosis is usually followed as the gold standard approach. However, this process is tedious, labor-intensive, and subject to inter-reader variability. Accordingly, an automatic diagnostic system can help improve the quality of diagnosis. This paper presents a deep learning approach to automatically classify hematoxylin-eosin-stained breast cancer microscopy images into normal tissue, benign lesion, in situ carcinoma, and invasive carcinoma using our collected dataset. Our proposed model exploits six intermediate layers of the Xception (Extreme Inception) network to retrieve robust and abstract features from input images. First, we optimized the proposed model on the original (unnormalized) dataset using 5-fold cross-validation. Then, we investigated its performance on four normalized datasets resulting from Reinhard, Ruifrok, Macenko, and Vahadane stain normalization. For the original images, our proposed framework yielded an accuracy of 98% along with a kappa score of 0.969. It also achieved an average AUC-ROC score of 0.998 and a mean AUC-PR value of 0.995. Specifically, for in situ carcinoma and invasive carcinoma, it offered sensitivities of 96% and 99%, respectively. For the normalized images, the proposed architecture performed best with Macenko normalization compared to the other three techniques. In this case, the proposed model achieved an accuracy of 97.79% together with a kappa score of 0.965, an average AUC-ROC score of 0.997, and a mean AUC-PR value of 0.991; sensitivities for in situ carcinoma and invasive carcinoma were again 96% and 99%, respectively.
These results demonstrate that our proposed model outperformed the baseline AlexNet as well as the state-of-the-art VGG16, VGG19, Inception-v3, and Xception models with their default settings. Furthermore, although the stain normalization techniques offered competitive performance, they could not surpass the results obtained on the original dataset.
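Of the stain normalization techniques compared above, the Reinhard method is the simplest: it matches per-channel image statistics to a target image. A simplified numpy sketch of that idea follows; note this is a hedged approximation — true Reinhard normalization performs the matching in the Lab color space, whereas this toy version matches statistics directly per channel on random stand-in images.

```python
import numpy as np

def reinhard_like_normalize(source, target):
    """Match per-channel mean/std of `source` to those of `target`.

    Simplified sketch: the real Reinhard transfer converts to Lab space
    first; here statistics are matched channel-wise in the input space.
    """
    src = source.astype(float)
    tgt = target.astype(float)
    out = np.empty_like(src)
    for c in range(src.shape[-1]):
        s_mu, s_sd = src[..., c].mean(), src[..., c].std() + 1e-8
        t_mu, t_sd = tgt[..., c].mean(), tgt[..., c].std()
        out[..., c] = (src[..., c] - s_mu) / s_sd * t_sd + t_mu
    return out

rng = np.random.default_rng(0)
source = rng.uniform(0, 255, size=(32, 32, 3))     # stand-in "slide"
target = rng.uniform(100, 200, size=(32, 32, 3))   # stand-in reference stain
normalized = reinhard_like_normalize(source, target)
# the normalized image now carries the target's channel means
print(np.allclose(normalized.mean(axis=(0, 1)), target.mean(axis=(0, 1))))
```

Because every slide is pulled toward the same reference statistics, a downstream classifier sees less stain-related variation between laboratories.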
57
Khvostikov AV, Krylov AS, Mikhailov IA, Malkov PG. Visualization of Whole Slide Histological Images with Automatic Tissue Type Recognition. Pattern Recognit Image Anal 2022. [DOI: 10.1134/s1054661822030208] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/07/2022]
58
Wong ANN, He Z, Leung KL, To CCK, Wong CY, Wong SCC, Yoo JS, Chan CKR, Chan AZ, Lacambra MD, Yeung MHY. Current Developments of Artificial Intelligence in Digital Pathology and Its Future Clinical Applications in Gastrointestinal Cancers. Cancers (Basel) 2022; 14:3780. [PMID: 35954443] [PMCID: PMC9367360] [DOI: 10.3390/cancers14153780] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Received: 06/17/2022] [Revised: 07/27/2022] [Accepted: 08/01/2022] [Indexed: 02/05/2023] [Open Access]
Abstract
The implementation of digital pathology (DP) will revolutionize current practice by providing pathologists with additional tools and algorithms to improve workflow. Furthermore, DP will open up opportunities for the development of AI-based tools for more precise and reproducible diagnosis through computational pathology. One of the key features of AI is its capability to generate perceptions and recognize patterns beyond the human senses. Thus, the incorporation of AI into DP can reveal additional morphological features and information. At the current rate of AI development and DP adoption, interest in computational pathology is expected to rise in tandem. There have already been promising developments of AI-based solutions in prostate cancer detection; in the gastrointestinal (GI) tract, however, more sophisticated algorithms are required to facilitate histological assessment of GI specimens for early and accurate diagnosis. In this review, we aim to provide an overview of the current histological practices in anatomical pathology (AP) laboratories with respect to challenges faced in image preprocessing, present the existing AI-based algorithms, discuss their limitations, and offer clinical insight into the application of AI in the early detection and diagnosis of GI cancer.
Affiliation(s)
- Alex Ngai Nick Wong
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- Zebang He
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- Ka Long Leung
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- Curtis Chun Kit To
- Department of Anatomical and Cellular Pathology, The Chinese University of Hong Kong, Prince of Wales Hospital, Shatin, Hong Kong SAR, China
- Chun Yin Wong
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- Sze Chuen Cesar Wong
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- Jung Sun Yoo
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- Cheong Kin Ronald Chan
- Department of Anatomical and Cellular Pathology, The Chinese University of Hong Kong, Prince of Wales Hospital, Shatin, Hong Kong SAR, China
- Angela Zaneta Chan
- Department of Anatomical and Cellular Pathology, Prince of Wales Hospital, Shatin, Hong Kong SAR, China
- Maribel D. Lacambra
- Department of Anatomical and Cellular Pathology, The Chinese University of Hong Kong, Prince of Wales Hospital, Shatin, Hong Kong SAR, China
- Martin Ho Yin Yeung
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
59
Meirelles AL, Kurc T, Saltz J, Teodoro G. Effective active learning in digital pathology: A case study in tumor infiltrating lymphocytes. Comput Methods Programs Biomed 2022; 220:106828. [PMID: 35500506] [DOI: 10.1016/j.cmpb.2022.106828] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Received: 11/05/2021] [Revised: 04/09/2022] [Accepted: 04/19/2022] [Indexed: 06/14/2023]
Abstract
BACKGROUND AND OBJECTIVE Deep learning methods have demonstrated remarkable performance in pathology image analysis, but they require a large amount of annotated training data from expert pathologists. The aim of this study is to minimize the data annotation needed by these analyses. METHODS Active learning (AL) is an iterative approach to training deep learning models. It was used here with a tumor infiltrating lymphocyte (TIL) classification task to minimize annotation. State-of-the-art AL methods were evaluated on the TIL application, and we propose and evaluate a more efficient and effective AL acquisition method. The proposed method uses data grouping based on imaging features together with model prediction uncertainty to select meaningful training samples (image patches). RESULTS An experimental evaluation with a collection of cancer tissue images shows that (i) our approach reduces the number of patches required to attain a given AUC compared with other approaches, and (ii) our optimization (subpooling) speeds up AL execution by about 2.12×. CONCLUSIONS This strategy enabled TIL-based deep learning analyses with a smaller annotation demand. We expect this approach can be used to build other digital pathology analyses with fewer training samples.
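The acquisition idea described above (group unlabeled patches by an imaging feature, then take the most uncertain patch from each group, so the selected batch is both uncertain and diverse) can be sketched as follows. This is a hypothetical illustration, not the authors' algorithm: quantile binning on a scalar feature stands in for clustering, and the feature/probability values are toy data.

```python
import numpy as np

def entropy(p):
    """Predictive entropy of per-class probabilities (last axis)."""
    p = np.clip(p, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=-1)

def acquire(features, probs, n_groups=3):
    """Pick one patch per feature group: the most uncertain in that group."""
    # quantile-based binning stands in for clustering on imaging features
    edges = np.quantile(features, np.linspace(0, 1, n_groups + 1)[1:-1])
    groups = np.digitize(features, edges)
    unc = entropy(probs)
    picks = []
    for g in range(n_groups):
        idx = np.where(groups == g)[0]
        if idx.size:
            picks.append(int(idx[np.argmax(unc[idx])]))
    return picks

features = np.array([0.1, 0.15, 0.5, 0.55, 0.9, 0.95])   # toy imaging feature
probs = np.array([[0.9, 0.1], [0.5, 0.5],    # group 0: index 1 most uncertain
                  [0.6, 0.4], [0.8, 0.2],    # group 1: index 2 most uncertain
                  [0.55, 0.45], [0.99, 0.01]])  # group 2: index 4 most uncertain
print(acquire(features, probs))              # → [1, 2, 4]
```

Selecting per group rather than globally prevents the batch from collapsing onto one cluster of near-duplicate uncertain patches, which is the diversity benefit the abstract alludes to.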
Affiliation(s)
- André Ls Meirelles
- Department of Computer Science, University of Brasília, Brasília, 70910-900, Brazil
- Tahsin Kurc
- Biomedical Informatics Department, Stony Brook University, Stony Brook, 11794-8322, USA
- Joel Saltz
- Biomedical Informatics Department, Stony Brook University, Stony Brook, 11794-8322, USA
- George Teodoro
- Department of Computer Science, Universidade Federal de Minas Gerais, Belo Horizonte, 31270-901, Brazil
60
Yu H, Li X, Feng Y, Han S. Multiple attentional path aggregation network for marine object detection. Appl Intell 2022. [DOI: 10.1007/s10489-022-03622-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/24/2022]
61
Zhao Y, Fu C, Xu S, Cao L, Ma HF. LFANet: Lightweight feature attention network for abnormal cell segmentation in cervical cytology images. Comput Biol Med 2022; 145:105500. [PMID: 35421793] [DOI: 10.1016/j.compbiomed.2022.105500] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Received: 11/23/2021] [Revised: 03/16/2022] [Accepted: 04/04/2022] [Indexed: 11/19/2022]
Abstract
With computer-aided diagnosis techniques now widely applied in cervical cancer screening, cell segmentation has become a necessary step in determining the progression of cervical cancer. Traditional methods alleviate the dilemma caused by the shortage of medical resources to a certain extent, but their low segmentation accuracy for abnormal cells and complex processing pipelines preclude fully automatic diagnosis. Deep learning methods, by contrast, can automatically extract image features with high accuracy and small error, making artificial intelligence increasingly popular in computer-aided diagnosis. However, many such models are unsuitable for clinical practice because their complexity introduces redundant network parameters. To address these problems, this study proposes a lightweight feature attention network (LFANet) that extracts differentially abundant feature information from objects at various resolutions. The model can accurately segment both the nucleus and cytoplasm regions in cervical images. Specifically, a lightweight feature extraction module is designed as an encoder to extract abundant features from input images, combining depth-wise separable convolution, residual connections and an attention mechanism. In addition, a feature layer attention module is added to precisely recover pixel locations; it employs global high-level information as a guide for the low-level features, capturing dependencies among channel features. Finally, our LFANet model is evaluated on four independent datasets. The experimental results demonstrate that, compared with other advanced methods, our proposed network achieves state-of-the-art performance with low computational complexity.
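Channel-wise attention of the kind the abstract invokes ("capturing dependencies among channel features") is commonly realized as a squeeze-and-excitation-style gate. The numpy sketch below is a generic illustration of that mechanism, not the authors' LFANet module; all shapes and weights are invented.

```python
import numpy as np

def channel_attention(features, w1, w2):
    """Squeeze-and-excitation-style channel reweighting.

    features: (H, W, C) feature map; w1: (C, C//r); w2: (C//r, C),
    where r is the bottleneck reduction ratio.
    """
    squeeze = features.mean(axis=(0, 1))             # global average pool -> (C,)
    hidden = np.maximum(squeeze @ w1, 0.0)           # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(hidden @ w2)))      # sigmoid weights in (0, 1)
    return features * gate                           # broadcast over H and W

rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 8, 16))              # toy feature map
w1 = rng.standard_normal((16, 4)) * 0.1              # reduction ratio r = 4
w2 = rng.standard_normal((4, 16)) * 0.1
out = channel_attention(feats, w1, w2)
print(out.shape)   # → (8, 8, 16)
```

Because the gate lies in (0, 1), the module can only attenuate channels, letting the network emphasize informative channels cheaply: the extra cost is two tiny dense layers regardless of spatial resolution.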
Affiliation(s)
- Yanli Zhao
- School of Computer Science and Engineering, Northeastern University, Shenyang, 110819, China; School of Electrical Information Engineering, Ningxia Institute of Technology, Shizuishan, 753000, China
- Chong Fu
- School of Computer Science and Engineering, Northeastern University, Shenyang, 110819, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, 110819, China; Engineering Research Center of Security Technology of Complex Network System, Ministry of Education, China
- Sen Xu
- General Hospital of Northern Theatre Command, Shenyang, 110016, China
- Lin Cao
- School of Information and Communication Engineering, Beijing Information Science and Technology University, Beijing, 100101, China
- Hong-Feng Ma
- Dopamine Group Ltd., Auckland, 1542, New Zealand
62
Thakoor KA, Yao J, Bordbar D, Moussa O, Lin W, Sajda P, Chen RWS. A multimodal deep learning system to distinguish late stages of AMD and to compare expert vs. AI ocular biomarkers. Sci Rep 2022; 12:2585. [PMID: 35173191] [PMCID: PMC8850456] [DOI: 10.1038/s41598-022-06273-w] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Received: 06/25/2021] [Accepted: 01/24/2022] [Indexed: 01/08/2023] [Open Access]
Abstract
Within the next 1.5 decades, 1 in 7 U.S. adults is anticipated to suffer from age-related macular degeneration (AMD), a degenerative retinal disease that leads to blindness if untreated. Optical coherence tomography angiography (OCTA) has become a prime technique for AMD diagnosis, specifically for late-stage neovascular (NV) AMD. Such technologies generate massive amounts of data that are challenging for experts to parse alone, making artificial intelligence a valuable partner. We describe a deep learning (DL) approach that achieves multi-class detection of non-AMD vs. non-neovascular (NNV) AMD vs. NV AMD from a combination of OCTA, OCT structure, 2D b-scan flow images, and high-definition (HD) 5-line b-scan cubes; the DL system also detects ocular biomarkers indicative of AMD risk. Multimodal data were used as input to 2D-3D convolutional neural networks (CNNs). For both CNNs and experts, choroidal neovascularization and geographic atrophy were found to be important biomarkers for AMD. The CNNs predict biomarkers with accuracy up to 90.2% (positive predictive value up to 75.8%). Just as experts rely on multimodal data to diagnose AMD, the CNNs also performed best when trained on multiple inputs combined. Detection of AMD and its biomarkers from OCTA data via CNNs has tremendous potential to expedite screening of early and late-stage AMD patients.
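Multimodal fusion of the kind described above is often implemented as late fusion: each modality is embedded separately and the embeddings are concatenated before a final classifier. The sketch below is a hedged, generic illustration of that pattern, not the paper's network; modalities, dimensions, and weights are all invented toy values.

```python
import numpy as np

def embed(x, w):
    """Toy per-modality encoder: linear projection + ReLU."""
    return np.maximum(x @ w, 0.0)

rng = np.random.default_rng(1)
octa = rng.standard_normal(64)           # stand-in OCTA feature vector
oct_structure = rng.standard_normal(32)  # stand-in OCT-structure features
w_octa = rng.standard_normal((64, 8))
w_oct = rng.standard_normal((32, 8))
w_head = rng.standard_normal((16, 3))    # 3 classes: non-AMD, NNV AMD, NV AMD

# late fusion: concatenate modality embeddings, then classify jointly
fused = np.concatenate([embed(octa, w_octa), embed(oct_structure, w_oct)])
logits = fused @ w_head
prediction = int(np.argmax(logits))
print(fused.shape, logits.shape)   # → (16,) (3,)
```

The joint head sees both modalities at once, which is the structural reason multimodal training can beat any single input: correlations between modalities become learnable features.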
Affiliation(s)
- Kaveri A Thakoor: Department of Biomedical Engineering, Columbia University, New York, 10027, USA
- Jiaang Yao: Department of Electrical Engineering, Columbia University, New York, 10027, USA
- Darius Bordbar: Department of Ophthalmology, Edward S. Harkness Eye Institute, Columbia University Irving Medical Center, New York, 10032, USA
- Omar Moussa: Department of Ophthalmology, Edward S. Harkness Eye Institute, Columbia University Irving Medical Center, New York, 10032, USA
- Weijie Lin: Department of Ophthalmology, Edward S. Harkness Eye Institute, Columbia University Irving Medical Center, New York, 10032, USA
- Paul Sajda: Department of Biomedical Engineering; Department of Electrical Engineering; Department of Radiology (Physics), Columbia University, New York, 10027, USA
- Royce W S Chen: Department of Ophthalmology, Edward S. Harkness Eye Institute, Columbia University Irving Medical Center, New York, 10032, USA
63
Hoque MZ, Keskinarkaus A, Nyberg P, Mattila T, Seppänen T. Whole slide image registration via multi-stained feature matching. Comput Biol Med 2022; 144:105301. [DOI: 10.1016/j.compbiomed.2022.105301]
64
Zhang L, Li M, Wu Y, Hao F, Wang C, Han W, Niu D, Zheng W. Classification of renal biopsy direct immunofluorescence image using multiple attention convolutional neural network. Comput Methods Programs Biomed 2022; 214:106532. [PMID: 34852936] [DOI: 10.1016/j.cmpb.2021.106532]
Abstract
BACKGROUND AND OBJECTIVES Direct immunofluorescence (DIF) is an important medical evaluation tool for renal pathology. In DIF images, the deposition appearances and locations of immunoglobulin on glomeruli carry immunological characteristics of glomerulonephritis and can thus be used to aid in identifying glomerulonephritis disease. Manual classification of such deposition patterns is time-consuming and may lead to significant inter- and intra-operator variance. We aimed to automate the identification and fusion of deposition location and deposition appearance to assist physicians in immunofluorescence reporting. METHODS In this paper, we propose a framework consisting of a pre-segmentation module and a classification module for automatically segmenting the glomerulus and classifying the deposition pattern of immunoglobulin on it. In the pre-segmentation module, the glomerulus is segmented out from the acquired DIF images using a segmentation network, which excludes other tissues and lets the classification module focus on the glomerulus. In the classification module, two branches classifying deposition region and deposition appearance, respectively, are built on the segmented images using a multiple-attention convolutional neural network (MANet), and the results of the two pre-trained classification networks are fused into labels. RESULTS Experimental results show that the proposed framework achieves high classification performance, with accuracies of 98% and 95% for deposition region and appearance, respectively. The label fusion of deposition appearance and deposition region is achieved with high accuracy based on the well-trained classifiers. CONCLUSIONS The data show that automated and accurate patterned immunofluorescence report generation is achieved, which can effectively help improve the diagnosis of autoimmune kidney disease.
Affiliation(s)
- Liang Zhang: College of Data Science, Taiyuan University of Technology, Taiyuan, 030024, China
- Ming Li: College of Data Science, Taiyuan University of Technology, Taiyuan, 030024, China
- Yongfei Wu: College of Data Science, Taiyuan University of Technology, Taiyuan, 030024, China
- Fang Hao: College of Data Science, Taiyuan University of Technology, Taiyuan, 030024, China
- Chen Wang: Department of Pathology, Second Hospital of Shanxi Medical University, Taiyuan, Shanxi, China
- Weixia Han: Department of Pathology, Second Hospital of Shanxi Medical University, Taiyuan, Shanxi, China
- Dan Niu: Department of Pathology, Second Hospital of Shanxi Medical University, Taiyuan, Shanxi, China
- Wen Zheng: College of Data Science, Taiyuan University of Technology, Taiyuan, 030024, China
65
Amin J, Sharif M, Fernandes SL, Wang SH, Saba T, Khan AR. Breast microscopic cancer segmentation and classification using unique 4-qubit-quantum model. Microsc Res Tech 2022; 85:1926-1936. [PMID: 35043505] [DOI: 10.1002/jemt.24054]
Abstract
The visual inspection of histopathological samples is the benchmark for detecting breast cancer, but it is a strenuous and complicated process that consumes a great deal of a pathologist's time. Deep learning models have shown excellent outcomes in clinical diagnosis and image processing, with advances in various fields including drug development, frequency simulation, and optimization techniques. However, the resemblance among histopathologic images of breast cancer, and the presence of both stable and infected tissues in different areas, make detecting and classifying tumors on whole slide images more difficult. In breast cancer, a correct diagnosis is needed for complete care within a limited amount of time. Effective detection can relieve the pathologist's workload and mitigate diagnostic subjectivity. Therefore, this work investigates an improved semantic segmentation model based on pre-trained Xception and DeepLabv3+. The model was trained on input images with ground-truth masks, with tuned parameters that significantly improve the segmentation of ultrasound breast images into the respective classes, i.e., benign/malignant. The segmentation model delivered an accuracy greater than 99%, demonstrating its effectiveness. The segmented images and histopathological breast images are then passed to a 4-qubit quantum circuit with a six-layered architecture to detect breast malignancy. The proposed framework achieved remarkable performance compared with currently published methodologies. HIGHLIGHTS: This research proposes a hybrid semantic model using pre-trained Xception and DeepLabv3+ for classifying breast microscopic cancer into benign and malignant classes with 95% accuracy, and 99% accuracy for the detection of breast malignancy.
Affiliation(s)
- Javaria Amin: Department of Computer Science, University of Wah, Quaid Avenue, Wah Cantt 4740, Pakistan
- Muhammad Sharif: Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Steven Lawrence Fernandes: Department of Computer Science, Design and Journalism, Creighton University, Omaha, Nebraska, 68178, USA
- Shui-Hua Wang: School of Mathematics and Actuarial Science, University of Leicester, Leicester, UK
- Tanzila Saba: Artificial Intelligence & Data Lab (AIDA), CCIS, Prince Sultan University, Riyadh, 11586, Saudi Arabia
- Amjad Rehman Khan: Artificial Intelligence & Data Lab (AIDA), CCIS, Prince Sultan University, Riyadh, 11586, Saudi Arabia
66
McGenity C, Wright A, Treanor D. AIM in Surgical Pathology. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_278]
67
den Boer RB, de Jongh C, Huijbers WTE, Jaspers TJM, Pluim JPW, van Hillegersberg R, Van Eijnatten M, Ruurda JP. Computer-aided anatomy recognition in intrathoracic and -abdominal surgery: a systematic review. Surg Endosc 2022; 36:8737-8752. [PMID: 35927354] [PMCID: PMC9652273] [DOI: 10.1007/s00464-022-09421-5]
Abstract
BACKGROUND Minimally invasive surgery is complex and associated with substantial learning curves. Computer-aided anatomy recognition, such as artificial intelligence-based algorithms, may improve anatomical orientation, prevent tissue injury, and improve learning curves. The study objective was to provide a comprehensive overview of current literature on the accuracy of anatomy recognition algorithms in intrathoracic and -abdominal surgery. METHODS This systematic review is reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guideline. Pubmed, Embase, and IEEE Xplore were searched for original studies up until January 2022 on computer-aided anatomy recognition, without requiring intraoperative imaging or calibration equipment. Extracted features included surgical procedure, study population and design, algorithm type, pre-training methods, pre- and post-processing methods, data augmentation, anatomy annotation, training data, testing data, model validation strategy, goal of the algorithm, target anatomical structure, accuracy, and inference time. RESULTS After full-text screening, 23 out of 7124 articles were included. Included studies showed a wide diversity, with six possible recognition tasks in 15 different surgical procedures, and 14 different accuracy measures used. Risk of bias in the included studies was high, especially regarding patient selection and annotation of the reference standard. Dice and intersection over union (IoU) scores of the algorithms ranged from 0.50 to 0.98 and from 74 to 98%, respectively, for various anatomy recognition tasks. High-accuracy algorithms were typically trained using larger datasets annotated by expert surgeons and focused on less-complex anatomy. Some of the high-accuracy algorithms were developed using pre-training and data augmentation. CONCLUSIONS The accuracy of included anatomy recognition algorithms varied substantially, ranging from moderate to good. 
Solid comparison between algorithms was complicated by the wide variety of applied methodologies, target anatomical structures, and reported accuracy measures. Computer-aided intraoperative anatomy recognition is an emerging research discipline, but it is still in its infancy. Larger datasets and methodological guidelines are required to improve accuracy and clinical applicability in future research. TRIAL REGISTRATION PROSPERO registration number: CRD42021264226.
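The Dice and intersection-over-union (IoU) scores reported by the reviewed algorithms are standard overlap metrics between a predicted and a reference segmentation mask. A minimal sketch of how they are computed (generic, not taken from any included study):

```python
import numpy as np

def dice_and_iou(pred, target):
    """Dice coefficient and intersection-over-union for binary masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = 2.0 * inter / (pred.sum() + target.sum())  # 2|A∩B| / (|A|+|B|)
    iou = inter / union                               # |A∩B| / |A∪B|
    return float(dice), float(iou)

pred   = np.array([[1, 1, 0],
                   [0, 1, 0]])
target = np.array([[1, 0, 0],
                   [0, 1, 1]])
dice, iou = dice_and_iou(pred, target)  # intersection 2, union 4
```

The two metrics are related monotonically (Dice = 2·IoU / (1 + IoU)), which is why studies often report either one.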
Affiliation(s)
- R. B. den Boer: Department of Surgery, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX Utrecht, The Netherlands
- C. de Jongh: Department of Surgery, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX Utrecht, The Netherlands
- W. T. E. Huijbers: Department of Surgery, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX Utrecht, The Netherlands
- T. J. M. Jaspers: Department of Biomedical Engineering, Eindhoven University of Technology, Groene Loper 3, 5612 AE Eindhoven, The Netherlands
- J. P. W. Pluim: Department of Biomedical Engineering, Eindhoven University of Technology, Groene Loper 3, 5612 AE Eindhoven, The Netherlands
- R. van Hillegersberg: Department of Surgery, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX Utrecht, The Netherlands
- M. Van Eijnatten: Department of Biomedical Engineering, Eindhoven University of Technology, Groene Loper 3, 5612 AE Eindhoven, The Netherlands
- J. P. Ruurda: Department of Surgery, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX Utrecht, The Netherlands
68
Stain normalization in digital pathology: Clinical multi-center evaluation of image quality. J Pathol Inform 2022; 13:100145. [PMID: 36268060] [PMCID: PMC9577129] [DOI: 10.1016/j.jpi.2022.100145]
Abstract
In digital pathology, the final appearance of digitized images is affected by several factors, resulting in stain color and intensity variation. Stain normalization is an innovative solution to overcome stain variability. However, the validation of color normalization tools has so far been assessed only from a quantitative perspective, through the computation of similarity metrics between the original and normalized images. To the best of our knowledge, no work has investigated the impact of normalization on the pathologist’s evaluation. The objective of this paper is to propose a multi-tissue (i.e., breast, colon, liver, lung, and prostate) and multi-center qualitative analysis of a stain normalization tool involving pathologists with different years of experience. Two qualitative studies were carried out for this purpose: (i) a first study focused on the perceived image quality and the absence of significant image artifacts after the normalization process; (ii) a second study focused on the clinical score of the normalized image with respect to the original one. The results of the first study prove the high quality of the normalized images, with low artifact generation, while the second study demonstrates the superiority of the normalized image over the original one in clinical practice. The normalization process can both help reduce variability due to tissue staining procedures and facilitate the pathologist in the histological examination. The experimental results obtained in this work are encouraging and can justify the use of a stain normalization tool in clinical routine.
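The quantitative validations mentioned above compare original and normalized images with similarity metrics. One common choice for such a metric is the peak signal-to-noise ratio (PSNR); the sketch below is a generic illustration of it, not the specific metric used in this study.

```python
import numpy as np

def psnr(original, processed, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two images of equal shape."""
    original = np.asarray(original, dtype=np.float64)
    processed = np.asarray(processed, dtype=np.float64)
    mse = np.mean((original - processed) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy example: a constant error of 16 grey levels over a 4x4 image
a = np.zeros((4, 4))
b = np.full((4, 4), 16.0)
value = psnr(a, b)  # 20 * log10(255 / 16) dB
```

Higher PSNR means the normalized image stays closer to the original pixel values; identical images give infinite PSNR.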
69
Digital workflows for pathological assessment of rat estrous cycle stage using images of uterine horn and vaginal tissue. J Pathol Inform 2022; 13:100120. [PMID: 36268108] [PMCID: PMC9577039] [DOI: 10.1016/j.jpi.2022.100120]
Abstract
Assessment of the estrous cycle of mature female mammals is an important component of verifying the efficacy and safety of drug candidates. The common pathological approach of relying on expert observation has several drawbacks, including laborious work and inter-viewer variability. The recent advent of image recognition technologies using deep learning is expected to bring substantial benefits to such pathological assessments. We herein propose two distinct deep learning-based workflows to classify the estrous cycle stage from tissue images of the uterine horn and vagina, respectively. The constructed models were able to classify the estrous cycle stages with accuracy comparable to that of expert pathologists. Our digital workflows allow efficient pathological assessment of the estrous cycle stage in rats and are thus expected to accelerate drug research and development.
70
Zhang C, Gu J, Zhu Y, Meng Z, Tong T, Li D, Liu Z, Du Y, Wang K, Tian J. AI in spotting high-risk characteristics of medical imaging and molecular pathology. Precis Clin Med 2021; 4:271-286. [PMID: 35692858] [PMCID: PMC8982528] [DOI: 10.1093/pcmedi/pbab026]
Abstract
Medical imaging provides a comprehensive perspective and rich information for disease diagnosis. Combined with artificial intelligence technology, medical imaging can be further mined for detailed pathological information. Many studies have shown that the macroscopic imaging characteristics of tumors are closely related to microscopic gene, protein, and molecular changes. To explore the role of artificial intelligence algorithms in the in-depth analysis of medical imaging information, this paper reviews articles published in recent years from three perspectives: medical imaging analysis methods, clinical applications, and the development of medical imaging toward pathological and molecular prediction. We believe that AI-aided medical imaging analysis will contribute extensively to precise and efficient clinical decision-making.
Affiliation(s)
- Chong Zhang: Department of Big Data Management and Application, School of International Economics and Management, Beijing Technology and Business University, Beijing 100048, China; CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Jionghui Gu: CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Yangyang Zhu: CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Zheling Meng: CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Tong Tong: CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Dongyang Li: CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Zhenyu Liu: CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Yang Du: CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Kun Wang: CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Jie Tian: CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China; Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Medicine and Engineering, Beihang University, Beijing 100191, China
71
Chen H, Strickland AL, Castrillon DH. Histopathologic diagnosis of endometrial precancers: Updates and future directions. Semin Diagn Pathol 2021; 39:137-147. [PMID: 34920905] [PMCID: PMC9035046] [DOI: 10.1053/j.semdp.2021.12.001]
Abstract
Early detection of endometrial cancer, especially its precancers, remains a critical and evolving issue in patient management and in the quest to decrease mortality due to endometrial cancer. Due to many factors, such as specimen fragmentation, the confounding influence of endogenous or exogenous hormones, and variable or overlapping histologic features, identification of bona fide endometrial precancers and their reliable discrimination from benign mimics remain among the most challenging areas in diagnostic pathology. At the same time, the diagnosis of endometrial precancer, or the presence of suspicious but subdiagnostic features in an endometrial biopsy, can lead to long clinical follow-up with multiple patient visits and serial endometrial sampling, emphasizing the need for accurate diagnosis. Our understanding of endometrial precancers and their diagnosis has improved due to systematic investigations into morphologic criteria, the molecular genetics of endometrial cancers and their precursors, the validation of novel biomarkers and their use in panels, and more recent methods such as digital image analysis. Although precancers for both endometrioid and non-endometrioid carcinomas will be reviewed, emphasis will be placed on the former. We review these advances and their relevance to the histopathologic diagnosis of endometrial precancers, and the recently updated 2020 World Health Organization (WHO) Classification of Female Genital Tumors.
72
Jang HJ, Lee A, Kang J, Song IH, Lee SH. Prediction of genetic alterations from gastric cancer histopathology images using a fully automated deep learning approach. World J Gastroenterol 2021; 27:7687-7704. [PMID: 34908807] [PMCID: PMC8641056] [DOI: 10.3748/wjg.v27.i44.7687]
Abstract
BACKGROUND Studies correlating specific genetic mutations with treatment response are ongoing to establish an effective treatment strategy for gastric cancer (GC). To facilitate this research, a cost- and time-effective method to analyze mutational status is necessary. Deep learning (DL) has been successfully applied to analyze hematoxylin and eosin (H and E)-stained tissue slide images. AIM To test the feasibility of DL-based classifiers for frequently occurring mutations from H and E-stained GC tissue whole slide images (WSIs). METHODS From the GC dataset of The Cancer Genome Atlas (TCGA-STAD), wild-type/mutation classifiers for the CDH1, ERBB2, KRAS, PIK3CA, and TP53 genes were trained on 360 × 360-pixel patches of tissue images. RESULTS The area under the curve (AUC) of the receiver operating characteristic (ROC) curves ranged from 0.727 to 0.862 for the TCGA frozen WSIs and from 0.661 to 0.858 for the TCGA formalin-fixed paraffin-embedded (FFPE) WSIs. The performance of the classifiers could be improved by adding a new FFPE WSI training dataset from our institute. Classifiers trained for mutation prediction in colorectal cancer completely failed to predict mutational status in GC, indicating that DL-based mutation classifiers do not transfer between different cancers. CONCLUSION This study concluded that DL can predict genetic mutations from H and E-stained tissue slides when trained with appropriate tissue data.
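The AUC values reported above summarize ROC curves; AUC equals the probability that a randomly chosen positive (mutated) case receives a higher classifier score than a randomly chosen negative (wild-type) one, with ties counted as one half. A generic sketch of that computation (not the authors' pipeline):

```python
import numpy as np

def roc_auc(labels, scores):
    """AUC via the pairwise (rank-sum) formulation; O(n_pos * n_neg),
    fine for small sample sizes."""
    labels = np.asarray(labels)
    scores = np.asarray(scores, dtype=np.float64)
    pos, neg = scores[labels == 1], scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum() \
        + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return float(wins) / (len(pos) * len(neg))

# Toy example: 1 = mutated, 0 = wild-type, scores from a hypothetical classifier
labels = [0, 0, 1, 1, 0, 1]
scores = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7]
auc = roc_auc(labels, scores)  # 8 of 9 positive/negative pairs ranked correctly
```

In practice patch-level scores are usually aggregated to a slide-level score before the AUC is computed.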
Affiliation(s)
- Hyun-Jong Jang: Catholic Big Data Integration Center, Department of Physiology, College of Medicine, The Catholic University of Korea, Seoul 06591, South Korea
- Ahwon Lee: Department of Hospital Pathology, Seoul St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 06591, South Korea
- Jun Kang: Department of Hospital Pathology, Seoul St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 06591, South Korea
- In Hye Song: Department of Hospital Pathology, Seoul St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 06591, South Korea
- Sung Hak Lee: Department of Hospital Pathology, Seoul St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 06591, South Korea
73
Pérez-Bueno F, Vega M, Sales MA, Aneiros-Fernández J, Naranjo V, Molina R, Katsaggelos AK. Blind color deconvolution, normalization, and classification of histological images using general super Gaussian priors and Bayesian inference. Comput Methods Programs Biomed 2021; 211:106453. [PMID: 34649072] [DOI: 10.1016/j.cmpb.2021.106453]
Abstract
BACKGROUND AND OBJECTIVE Color variations in digital histopathology severely impact the performance of computer-aided diagnosis systems. They are due to differences in the staining process and acquisition system, among other reasons. Blind color deconvolution techniques separate multi-stained images into single-stain bands which, once normalized, can be used to eliminate these negative color variations and improve the performance of machine learning tasks. METHODS In this work, we decompose the observed RGB image into its hematoxylin and eosin components. We apply Bayesian modeling and inference based on Super Gaussian sparse priors for each stain, together with a prior enforcing closeness to a given reference color-vector matrix. The hematoxylin and eosin components are then used for image normalization and classification of histological images. The proposed framework is tested on stain separation, image normalization, and cancer classification problems. The results are measured using the peak signal-to-noise ratio (PSNR), normalized median intensity, and the area under the ROC curve on five different databases. RESULTS The obtained results show the superiority of our approach over current state-of-the-art blind color deconvolution techniques. In particular, fidelity to the tissue improves by 1.27 dB in mean PSNR. The normalized median intensity shows good normalization quality of the proposed approach on the tested datasets. Finally, in cancer classification experiments the area under the ROC curve improves from 0.9491 to 0.9656 and from 0.9279 to 0.9541 on Camelyon-16 and Camelyon-17, respectively, when the processed images are used instead of the original ones. Furthermore, these figures of merit are better than those obtained by the compared methods. CONCLUSIONS The proposed framework for blind color deconvolution, normalization, and classification of images guarantees fidelity to the tissue structure and can be used both for normalization and classification.
In addition, color deconvolution enables the use of the optical density space for classification, which improves the classification performance.
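For context, the deterministic color deconvolution that this Bayesian approach generalizes works in optical-density space (Beer-Lambert law) with a reference stain matrix, as formulated by Ruifrok and Johnston. The sketch below uses commonly cited approximate H&E reference vectors and solves for stain concentrations by least squares; it is an illustration of the classic baseline, not the paper's Super Gaussian prior method.

```python
import numpy as np

# Approximate reference RGB optical-density directions for hematoxylin and
# eosin, commonly used in Ruifrok-Johnston-style deconvolution (assumed values).
H_REF = np.array([0.650, 0.704, 0.286])
E_REF = np.array([0.072, 0.990, 0.105])

def color_deconvolve(rgb, stains=(H_REF, E_REF)):
    """Separate an 8-bit RGB image into per-stain concentration maps.

    Converts to optical density od = -log10(I / 255), then solves
    od = M^T c per pixel by least squares for the concentrations c.
    """
    rgb = np.asarray(rgb, dtype=np.float64)
    od = -np.log10(np.clip(rgb, 1, 255) / 255.0)           # (H, W, 3)
    m = np.stack([s / np.linalg.norm(s) for s in stains])   # (n_stains, 3)
    conc, *_ = np.linalg.lstsq(m.T, od.reshape(-1, 3).T, rcond=None)
    return conc.T.reshape(rgb.shape[:2] + (len(stains),))

# A pixel of pure hematoxylin-like absorbance (concentration 0.5) should load
# on channel 0 only.
pixel = 255.0 * 10.0 ** (-0.5 * H_REF / np.linalg.norm(H_REF))
conc = color_deconvolve(pixel.reshape(1, 1, 3))
```

The Bayesian formulation in the paper can be read as replacing this fixed reference matrix with a prior around it and estimating both stains and concentrations from the data.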
Affiliation(s)
- Fernando Pérez-Bueno: Dpto. Ciencias de la Computación e Inteligencia Artificial, Universidad de Granada, Spain
- Miguel Vega: Dpto. de Lenguajes y Sistemas Informáticos, Universidad de Granada, Spain
- María A Sales: Anatomical Pathology Service, University Clinical Hospital of Valencia, Valencia, Spain
- José Aneiros-Fernández: Intercenter Unit of Pathological Anatomy, San Cecilio University Hospital, Granada, Spain
- Valery Naranjo: Dpto. de Comunicaciones, Universidad Politécnica de Valencia, Spain
- Rafael Molina: Dpto. Ciencias de la Computación e Inteligencia Artificial, Universidad de Granada, Spain
- Aggelos K Katsaggelos: Dept. of Electrical Engineering and Computer Science, Northwestern University, Evanston, IL, USA
74
Jiao Y, Yuan J, Qiang Y, Fei S. Deep embeddings and logistic regression for rapid active learning in histopathological images. Comput Methods Programs Biomed 2021; 212:106464. [PMID: 34736166] [DOI: 10.1016/j.cmpb.2021.106464]
Abstract
BACKGROUND AND OBJECTIVE Recognizing different tissue components is one of the most fundamental and essential tasks in digital pathology. Current methods are often based on convolutional neural networks (CNNs), which need numerous annotated samples for training. Creating large-scale histopathological datasets is labor-intensive, and interactive data annotation is a potential solution. METHODS We propose DELR (Deep Embedding-based Logistic Regression) to enable rapid model training and inference for histopathological image analysis. DELR utilizes a pretrained CNN to encode images as compact embeddings with low computational cost. The embeddings are then used to train a logistic regression model efficiently. We implemented DELR in an active learning framework and validated it on three histopathological problems (binary, 4-category, and 8-category classification challenges for lung, breast, and colorectal cancer, respectively). We also investigated the influence of the active learning strategy and the type of encoder. RESULTS On all three datasets, DELR achieved an area under the curve (AUC) higher than 0.95 with only 100 image patches per class. Although its AUC is slightly lower than that of a fine-tuned CNN counterpart, DELR can be 536, 316, and 1481 times faster after pre-encoding. Moreover, DELR proved compatible with a variety of active learning strategies and encoders. CONCLUSIONS DELR achieves accuracy comparable to a CNN with rapid running speed. These advantages make it a potential solution for real-time interactive data annotation.
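The DELR idea, a frozen encoder producing compact embeddings plus a lightweight logistic regression whose queries are chosen by active learning, can be sketched as follows. Everything here is illustrative, not the authors' implementation: the random-projection "encoder" stands in for the pretrained CNN, and uncertainty sampling is just one of the strategies the paper evaluates.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(images, proj):
    """Stand-in for a frozen pretrained encoder: a fixed nonlinear random
    projection to compact embeddings (DELR uses CNN features instead)."""
    return np.tanh(images @ proj)

def train_logreg(x, y, lr=0.5, epochs=300):
    """Plain binary logistic regression trained by gradient descent."""
    w, b = np.zeros(x.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
        grad = p - y
        w -= lr * (x.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b

def most_uncertain(x, w, b, k):
    """Active-learning query: the k samples whose predicted probability is
    closest to 0.5 (maximum uncertainty)."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    return np.argsort(np.abs(p - 0.5))[:k]

# Synthetic 'image' vectors from two separable classes (toy data)
n, d, e = 200, 64, 16
proj = rng.normal(size=(d, e)) / np.sqrt(d)
y = (rng.random(n) < 0.5).astype(float)
x_raw = rng.normal(size=(n, d)) + 2.0 * y[:, None]
emb = encode(x_raw, proj)

w, b = train_logreg(emb[:100], y[:100])          # fit on the labeled pool
p_test = 1.0 / (1.0 + np.exp(-(emb[100:] @ w + b)))
acc = float(np.mean((p_test > 0.5) == (y[100:] > 0.5)))
queries = most_uncertain(emb[100:], w, b, k=10)  # next patches to annotate
```

The speedup DELR reports comes from pre-encoding: the expensive CNN pass runs once per image, and each active-learning round only refits the cheap linear model.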
Affiliation(s)
- Yiping Jiao: School of Automation, Southeast University, 2nd Sipailou Road, Nanjing, China
- Jie Yuan: School of Automation, Southeast University, 2nd Sipailou Road, Nanjing, China
- Yong Qiang: School of Automation, Southeast University, 2nd Sipailou Road, Nanjing, China
- Shumin Fei: School of Automation, Southeast University, 2nd Sipailou Road, Nanjing, China
75
A comprehensive review of image analysis methods for microorganism counting: from classical image processing to deep learning approaches. Artif Intell Rev 2021; 55:2875-2944. [PMID: 34602697] [PMCID: PMC8478609] [DOI: 10.1007/s10462-021-10082-4]
Abstract
Microorganisms such as bacteria and fungi play essential roles in many application fields, such as biotechnology, medicine, and industry. Microorganism counting techniques are crucial in microorganism analysis, helping biologists and related researchers quantitatively analyze microorganisms and calculate their characteristics, such as biomass concentration and biological activity. However, traditional manual counting methods, such as the plate counting method, hemocytometry, and turbidimetry, are time-consuming, subjective, and require complex operations, which makes them difficult to apply at large scale. To improve this situation, image analysis has been applied to microorganism counting since the 1980s, encompassing digital image processing, image segmentation, image classification, and the like. Image analysis-based microorganism counting methods are efficient compared with traditional plate counting methods. In this article, we study the development of microorganism counting methods using digital image analysis. First, the microorganisms are grouped as bacteria and other microorganisms. Then, the related articles are summarized based on image segmentation methods, and each part of the article is reviewed by methodology. Moreover, commonly used image processing methods for microorganism counting are summarized and analyzed to find common technological points. More than 144 papers are covered in this article. In conclusion, this paper provides new ideas for the future development of microorganism counting and systematic suggestions for implementing integrated microorganism counting systems. Researchers in other fields can also refer to the techniques analyzed in this paper.
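A classical image-analysis counting pipeline of the kind surveyed here, thresholding followed by connected-component labeling and a size filter, can be sketched in a few lines. This is illustrative only; the threshold and minimum-object-size values are assumed parameters, not taken from any reviewed method.

```python
import numpy as np
from scipy import ndimage

def count_colonies(image, threshold=0.5, min_size=2):
    """Classical counting: threshold, label connected components
    (4-connectivity by default), then discard specks below min_size pixels."""
    binary = np.asarray(image) > threshold
    labels, n = ndimage.label(binary)
    sizes = ndimage.sum(binary, labels, index=np.arange(1, n + 1))
    return int(np.sum(np.asarray(sizes) >= min_size))

img = np.array([
    [0, 1, 1, 0, 0, 0],
    [0, 1, 1, 0, 0, 1],   # the lone pixel at the right is noise
    [0, 0, 0, 0, 0, 0],
    [1, 1, 0, 0, 1, 1],
    [0, 0, 0, 0, 1, 1],
], dtype=float)
count = count_colonies(img)  # three blobs survive; the lone pixel is filtered
```

The deep learning approaches reviewed later in the article typically replace the fixed threshold with a learned segmentation network, but the labeling-and-counting step is often similar.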
|
76
|
Bussola N, Papa B, Melaiu O, Castellano A, Fruci D, Jurman G. Quantification of the Immune Content in Neuroblastoma: Deep Learning and Topological Data Analysis in Digital Pathology. Int J Mol Sci 2021; 22:8804. [PMID: 34445517 PMCID: PMC8396341 DOI: 10.3390/ijms22168804] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2021] [Revised: 08/10/2021] [Accepted: 08/11/2021] [Indexed: 02/06/2023] Open
Abstract
We introduce here a novel machine learning (ML) framework to address the issue of the quantitative assessment of the immune content in neuroblastoma (NB) specimens. First, the EUNet, a U-Net with an EfficientNet encoder, is trained to detect lymphocytes on tissue digital slides stained with the CD3 T-cell marker. The training set consists of 3782 images extracted from an original collection of 54 whole slide images (WSIs), manually annotated for a total of 73,751 lymphocytes. Resampling strategies, data augmentation, and transfer learning approaches are adopted to warrant reproducibility and to reduce the risk of overfitting and selection bias. Topological data analysis (TDA) is then used to define activation maps from different layers of the neural network at different stages of the training process, described by persistence diagrams (PD) and Betti curves. TDA is further integrated with the uniform manifold approximation and projection (UMAP) dimensionality reduction and the hierarchical density-based spatial clustering of applications with noise (HDBSCAN) algorithm to cluster, by deep features, the relevant subgroups and structures across different levels of the neural network. Finally, the recent TwoNN approach is leveraged to study the variation of the intrinsic dimensionality of the U-Net model. As the main task, the proposed pipeline is employed to evaluate the density of lymphocytes over the whole tissue area of the WSIs. The model achieves good results, with a mean absolute error of 3.1 on the test set, showing significant agreement between densities estimated by our EUNet model and by trained pathologists, thus indicating the potential of this promising new strategy for quantifying the immune content in NB specimens. Moreover, the UMAP algorithm unveiled interesting patterns compatible with pathological characteristics, also highlighting novel insights into the dynamics of the intrinsic dataset dimensionality at different stages of the training process. All the experiments were run on the Microsoft Azure cloud platform.
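The headline metric above, mean absolute error between model-estimated and pathologist-estimated lymphocyte densities, is simple to state precisely. A minimal sketch (function names are ours, not from the paper):

```python
def density_per_mm2(cell_count, tissue_area_mm2):
    """Lymphocyte density as detections per unit tissue area."""
    return cell_count / tissue_area_mm2

def mean_absolute_error(predicted, reference):
    """Average absolute disagreement between two density estimates."""
    return sum(abs(p - r) for p, r in zip(predicted, reference)) / len(predicted)

# Toy densities for three slides: model vs. pathologist.
model = [12.0, 30.5, 7.0]
pathologist = [10.0, 33.5, 8.0]
print(mean_absolute_error(model, pathologist))  # → 2.0
```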
Affiliation(s)
- Nicole Bussola
- Data Science for Health, Fondazione Bruno Kessler, 38123 Trento, Italy; (N.B.); (B.P.)
- CIBIO Department, University of Trento, 38123 Trento, Italy
- Bruno Papa
- Data Science for Health, Fondazione Bruno Kessler, 38123 Trento, Italy; (N.B.); (B.P.)
- Ombretta Melaiu
- Department of Paediatric Haematology/Oncology and of Cell and Gene Therapy, Ospedale Pediatrico Bambino Gesù IRCCS, 00146 Rome, Italy; (O.M.); (A.C.); (D.F.)
- Aurora Castellano
- Department of Paediatric Haematology/Oncology and of Cell and Gene Therapy, Ospedale Pediatrico Bambino Gesù IRCCS, 00146 Rome, Italy; (O.M.); (A.C.); (D.F.)
- Doriana Fruci
- Department of Paediatric Haematology/Oncology and of Cell and Gene Therapy, Ospedale Pediatrico Bambino Gesù IRCCS, 00146 Rome, Italy; (O.M.); (A.C.); (D.F.)
- Giuseppe Jurman
- Data Science for Health, Fondazione Bruno Kessler, 38123 Trento, Italy; (N.B.); (B.P.)
|
77
|
Qu H, Minacapelli CD, Tait C, Gupta K, Bhurwal A, Catalano C, Dafalla R, Metaxas D, Rustgi VK. Training of computational algorithms to predict NAFLD activity score and fibrosis stage from liver histopathology slides. Comput Methods Programs Biomed 2021; 207:106153. [PMID: 34020377 DOI: 10.1016/j.cmpb.2021.106153] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/12/2021] [Accepted: 04/30/2021] [Indexed: 06/12/2023]
Abstract
BACKGROUND The incidence of non-alcoholic fatty liver disease (NAFLD) and its progressive form, non-alcoholic steatohepatitis (NASH), has been increasing for decades. Since the mainstay of management in this mostly asymptomatic condition is lifestyle modification, there is a need for accurate diagnostic methods. OBJECTIVES This study proposes a computer-aided diagnosis (CAD) system to predict the NAFLD Activity Score (NAS subscores: steatosis, lobular inflammation, and ballooning) and fibrosis stage from histopathology slides. METHODS A total of 87 pathology slide pairs (H&E- and trichrome-stained) were used for the study. Ground-truth NAS scores and fibrosis stages were previously assigned by a pathologist. Each slide was split into 224 × 224 patches and fed into a feature extraction network to generate local features. These local features were processed and aggregated to obtain a global feature used to predict the slide's scores. The effects of different training strategies, as well as of training data with different stainings and magnifications, were explored. Four-fold cross-validation was performed owing to the small data size. The area under the receiver operating characteristic curve (AUROC) was used to evaluate the prediction performance of the machine-learning algorithm. RESULTS Predictive accuracy for all subscores was high in comparison with pathologist assessment. There was no difference among the three magnifications (5x, 10x, 20x) for the NAS-steatosis and fibrosis stage tasks; the largest magnification (20x) achieved better performance for NAS-lobular scores, while middle-level magnification was best for the NAS-ballooning task. Trichrome slides were better for fibrosis stage prediction and NAS-ballooning score prediction. NAS-steatosis prediction had the best performance (AUC 90.48%), and good performance was also observed for fibrosis stage prediction (AUC 83.85%) and NAS-ballooning prediction (AUC 81.06%). These results were robust. CONCLUSIONS The proposed method proved effective in predicting the NAFLD Activity Score and fibrosis stage from histopathology slides. The algorithms can aid in an accurate and systematic diagnosis of a condition that affects hundreds of millions of patients globally.
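The patch-and-aggregate design described above (tile the slide into 224 × 224 patches, extract a local feature per patch, pool into a global slide feature) can be illustrated with a simplified sketch; the tiling and mean-pooling below are stand-ins for the paper's actual feature networks:

```python
def tile_coordinates(width, height, patch=224):
    """Top-left corners of the non-overlapping patch×patch tiles
    that fit entirely inside a width×height slide."""
    return [(x, y)
            for y in range(0, height - patch + 1, patch)
            for x in range(0, width - patch + 1, patch)]

def mean_pool(features):
    """Aggregate per-patch feature vectors into one global slide vector."""
    n = len(features)
    return [sum(f[i] for f in features) / n for i in range(len(features[0]))]

coords = tile_coordinates(1000, 600)      # a 4 × 2 grid of full 224px tiles
print(len(coords))                        # → 8
print(mean_pool([[1.0, 2.0], [3.0, 4.0]]))  # → [2.0, 3.0]
```

In the paper each tile would pass through a trained CNN before pooling; mean-pooling here is just the simplest aggregation choice.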
Affiliation(s)
- Hui Qu
- Computational Biomedicine Imaging and Modeling Center, Department of Computer Science, Rutgers University, Piscataway, New Jersey, USA.
- Carlos D Minacapelli
- Rutgers Robert Wood Johnson Medical School, Division of Gastroenterology and Hepatology, New Brunswick, New Jersey, USA; Center for Liver Diseases and Masses, Rutgers Robert Wood Johnson Medical School, New Brunswick, New Jersey, USA.
- Christopher Tait
- Rutgers Robert Wood Johnson Medical School, Division of Gastroenterology and Hepatology, New Brunswick, New Jersey, USA; Center for Liver Diseases and Masses, Rutgers Robert Wood Johnson Medical School, New Brunswick, New Jersey, USA.
- Kapil Gupta
- Rutgers Robert Wood Johnson Medical School, Division of Gastroenterology and Hepatology, New Brunswick, New Jersey, USA; Center for Liver Diseases and Masses, Rutgers Robert Wood Johnson Medical School, New Brunswick, New Jersey, USA.
- Abhishek Bhurwal
- Rutgers Robert Wood Johnson Medical School, Division of Gastroenterology and Hepatology, New Brunswick, New Jersey, USA; Center for Liver Diseases and Masses, Rutgers Robert Wood Johnson Medical School, New Brunswick, New Jersey, USA.
- Carolyn Catalano
- Rutgers Robert Wood Johnson Medical School, Division of Gastroenterology and Hepatology, New Brunswick, New Jersey, USA; Center for Liver Diseases and Masses, Rutgers Robert Wood Johnson Medical School, New Brunswick, New Jersey, USA.
- Randa Dafalla
- Department of Pathology and Laboratory Medicine, Rutgers Robert Wood Johnson Medical School, New Brunswick, New Jersey, USA.
- Dimitris Metaxas
- Computational Biomedicine Imaging and Modeling Center, Department of Computer Science, Rutgers University, Piscataway, New Jersey, USA.
- Vinod K Rustgi
- Rutgers Robert Wood Johnson Medical School, Division of Gastroenterology and Hepatology, New Brunswick, New Jersey, USA; Center for Liver Diseases and Masses, Rutgers Robert Wood Johnson Medical School, New Brunswick, New Jersey, USA.
|
78
|
Jiao Y, Li J, Qian C, Fei S. Deep learning-based tumor microenvironment analysis in colon adenocarcinoma histopathological whole-slide images. Comput Methods Programs Biomed 2021; 204:106047. [PMID: 33789213 DOI: 10.1016/j.cmpb.2021.106047] [Citation(s) in RCA: 24] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/30/2020] [Accepted: 03/06/2021] [Indexed: 06/12/2023]
Abstract
BACKGROUND AND OBJECTIVE Colon cancer is a fatal disease, and a comprehensive understanding of the tumor microenvironment (TME) could lead to better risk stratification, prognosis prediction, and therapy management. In this paper, we focus on the automatic evaluation of the TME in giga-pixel digital histopathology whole-slide images. METHODS A convolutional neural network is used to recognize nine different types of content present in colon cancer whole-slide images. Several implementation details, including foreground filtering and stain normalization, are discussed. Based on the whole-slide segmentation, several TME descriptors are quantified and correlated with clinical outcome by Kaplan-Meier analysis and Cox regression; the stroma, tumor, necrosis, and lymphocyte components are discussed in particular. RESULTS We validated the method on colon adenocarcinoma cases from The Cancer Genome Atlas project. The results show that stroma is an independent predictor of progression-free interval (PFI) after correction for age and pathological stage, with a hazard ratio of 1.665 (95% CI: 1.110~2.495, p = 0.014). High necrosis and lymphocyte components tend to be correlated with poor PFI, with hazard ratios of 1.552 (95% CI: 0.943~2.554, p = 0.084) and 1.512 (95% CI: 0.979~2.336, p = 0.062), respectively. CONCLUSIONS The results reveal the complex role of the tumor microenvironment in colon adenocarcinoma, and the quantified descriptors are potential predictors of disease progression. The method could be considered for risk stratification and targeted therapy, and extended to other types of cancer, leading to a better understanding of the tumor microenvironment.
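The survival statistics used here are standard: a Kaplan-Meier curve estimates survival as a running product over event times, S(t) = Π (1 − d_i/n_i). A self-contained product-limit estimator on toy follow-up data (not the study's data):

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate S(t) at each distinct event time.
    events[i] is 1 for an observed event, 0 for a censored follow-up."""
    s, curve = 1.0, []
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    i = 0
    while i < len(order):
        t = times[order[i]]
        deaths = censored = 0
        while i < len(order) and times[order[i]] == t:  # group ties at time t
            if events[order[i]]:
                deaths += 1
            else:
                censored += 1
            i += 1
        if deaths:
            s *= 1 - deaths / at_risk   # multiply in this time point's factor
            curve.append((t, s))
        at_risk -= deaths + censored    # both events and censoring leave the risk set
    return curve

# Toy follow-up: events at t=2 and t=5, one case censored at t=3.
print(kaplan_meier([2, 3, 5], [1, 0, 1]))
```

Cox regression, used for the hazard ratios above, would instead fit log-hazard coefficients; libraries such as lifelines implement both.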
Affiliation(s)
- Yiping Jiao
- School of Automation, Southeast University, 2nd Sipailou Road, Nanjing, China.
- Junhong Li
- Luoyang Central Hospital affiliated to Zhengzhou University, Luoyang, China
- Chenqi Qian
- Jiangsu Chunyu Education Group CO., 88th Zhongshan North Road, Nanjing, China
- Shumin Fei
- School of Automation, Southeast University, 2nd Sipailou Road, Nanjing, China.
|
79
|
Fast and accurate automated recognition of the dominant cells from fecal images based on Faster R-CNN. Sci Rep 2021; 11:10361. [PMID: 33990662 PMCID: PMC8121882 DOI: 10.1038/s41598-021-89863-4] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2020] [Accepted: 04/30/2021] [Indexed: 01/04/2023] Open
Abstract
Fecal samples can easily be collected and are representative of a person's current health state; therefore, the demand for routine fecal examination has increased sharply. However, manual operation may contaminate the samples, and its low efficiency limits overall examination speed, so automatic analysis is needed. Nevertheless, recognition time and accuracy remain major challenges in automatic testing. Here, we introduce a fast and efficient cell-detection algorithm based on Faster R-CNN with the ResNet-152 convolutional neural network architecture. Additionally, a region proposal network and a network combined with principal component analysis are proposed for cell location and recognition in microscopic images. Our algorithm achieved a mean average precision of 84% and a detection time of 723 ms per sample on 40,560 fecal images. Thus, this approach may provide a solid theoretical basis for real-time detection in routine clinical examinations while accelerating the process to satisfy increasing demand.
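Detection metrics such as the reported mean average precision rest on intersection-over-union (IoU) matching between predicted and ground-truth boxes. A minimal IoU sketch (illustrative, not the paper's evaluation code):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])   # intersection corners
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 50/150 ≈ 0.333
```

A detection typically counts as a true positive when its IoU with a ground-truth box exceeds a threshold (often 0.5); average precision then summarizes the precision-recall curve over all detections.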
|
80
|
Koh JEW, De Michele S, Sudarshan VK, Jahmunah V, Ciaccio EJ, Ooi CP, Gururajan R, Gururajan R, Oh SL, Lewis SK, Green PH, Bhagat G, Acharya UR. Automated interpretation of biopsy images for the detection of celiac disease using a machine learning approach. Comput Methods Programs Biomed 2021; 203:106010. [PMID: 33831693 DOI: 10.1016/j.cmpb.2021.106010] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/12/2021] [Accepted: 02/15/2021] [Indexed: 06/12/2023]
Abstract
BACKGROUND AND OBJECTIVES Celiac disease is an autoimmune disease occurring in about 1 in 100 people worldwide. Early diagnosis and efficient treatment are crucial in mitigating the complications associated with untreated celiac disease, such as intestinal lymphoma and malignancy, and the subsequent high morbidity. The current diagnostic methods using small intestinal biopsy histopathology, endoscopy, and video capsule endoscopy (VCE) involve manual interpretation of photomicrographs or images, which can be time-consuming and difficult, with inter-observer variability. In this paper, a machine learning technique was developed to automate biopsy image analysis for detecting and classifying villous atrophy based on modified Marsh scores. This is one of the first studies to employ conventional machine learning to automate the use of biopsy images for celiac disease detection and classification. METHODS The Steerable Pyramid Transform (SPT) method was used to obtain sub-bands from which various types of entropy and nonlinear features were computed. All extracted features were automatically classified into two-class and multi-class categories, using six classifiers. RESULTS An accuracy of 88.89% was achieved for the two-class classification of villous abnormalities based on analysis of Hematoxylin and Eosin (H&E)-stained biopsy images. Similarly, an accuracy of 82.92% was achieved for the two-class classification of red-green-blue (RGB) biopsy images, and an accuracy of 72% was achieved in the multi-class classification of biopsy images. CONCLUSION The results obtained are promising and demonstrate the possibility of automating biopsy image interpretation using machine learning. This can assist pathologists in accelerating the diagnostic process without bias, resulting in greater accuracy and, ultimately, earlier access to treatment.
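Entropy features of the kind extracted from SPT sub-bands reduce, in the simplest case, to the Shannon entropy of an intensity histogram. A toy sketch (illustrative; the paper computes several entropy variants, not necessarily this one):

```python
from math import log2
from collections import Counter

def shannon_entropy(pixels):
    """Shannon entropy (bits) of a pixel-intensity histogram — a basic
    texture feature of the kind computed from filter-bank sub-bands."""
    counts = Counter(pixels)
    n = len(pixels)
    return -sum((c / n) * log2(c / n) for c in counts.values())

print(shannon_entropy([0, 0, 255, 255]))  # → 1.0 bit: two equally likely levels
```

A feature vector of such statistics per sub-band would then be fed to the six classifiers mentioned above.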
Affiliation(s)
- Joel En Wei Koh
- Department of Electronics and Computer Engineering, Ngee Ann Polytechnic, Singapore
- Simona De Michele
- Department of Pathology and Cell Biology, Columbia University Irving Medical Center, USA
- Vidya K Sudarshan
- School of Science and Technology, Singapore University of Social Sciences, Singapore
- V Jahmunah
- Department of Electronics and Computer Engineering, Ngee Ann Polytechnic, Singapore
- Edward J Ciaccio
- Department of Medicine, Celiac Disease Center, Columbia University Irving Medical Center, USA
- Chui Ping Ooi
- School of Science and Technology, Singapore University of Social Sciences, Singapore
- Raj Gururajan
- School of Business, University of Southern Queensland Springfield, Australia
- Shu Lih Oh
- Department of Electronics and Computer Engineering, Ngee Ann Polytechnic, Singapore
- Suzanne K Lewis
- Department of Medicine, Celiac Disease Center, Columbia University Irving Medical Center, USA
- Peter H Green
- Department of Medicine, Celiac Disease Center, Columbia University Irving Medical Center, USA
- Govind Bhagat
- Department of Medicine, Celiac Disease Center, Columbia University Irving Medical Center, USA; Department of Pathology and Cell Biology, Columbia University Irving Medical Center, USA
- U Rajendra Acharya
- Department of Electronics and Computer Engineering, Ngee Ann Polytechnic, Singapore; School of Science and Technology, Singapore University of Social Sciences, Singapore; School of Business, University of Southern Queensland Springfield, Australia; Department of Bioinformatics and Medical Engineering, Asia University, Taiwan; International Research Organization for Advanced Science and Technology (IROAST), Kumamoto University, Kumamoto, Japan.
|
81
|
You Z, Jiang M, Shi Z, Ning X, Shi C, Du S, Hérard AS, Jan C, Souedet N, Delzescaux T. Evaluation of automated segmentation algorithms for neurons in macaque cerebral microscopic images. Microsc Res Tech 2021; 84:2311-2324. [PMID: 33908123 DOI: 10.1002/jemt.23786] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/23/2021] [Revised: 03/21/2021] [Accepted: 04/07/2021] [Indexed: 11/12/2022]
Abstract
Accurate cerebral neuron segmentation is required before neuron counting and morphological analysis. Numerous algorithms for neuron segmentation have been published, but they are mainly evaluated on limited subsets from a specific anatomical region, targeting neurons of clear contrast and/or similar staining intensity. It is thus unclear how these algorithms perform on cerebral neurons across diverse anatomical regions. In this article, we introduce and reliably evaluate existing machine learning algorithms using a data set of microscopy images of the macaque brain. This data set covers various anatomical regions (e.g., cortex, caudate, thalamus, claustrum, putamen, hippocampus, subiculum, lateral geniculate, globus pallidus), poor contrast, and staining intensity differences between neurons. The evaluation was performed using 10 architectures of six classic machine learning algorithms in terms of Recall, Precision, F-score and the aggregated Jaccard index (AJI), together with a performance ranking of the algorithms. The F-score of most algorithms exceeds 0.7, with deep learning algorithms generally yielding higher F-scores; a U-Net of suitable layer depth proved an excellent classifier, with F-scores of 0.846 and 0.837 under cross-validation. The evaluation and analysis indicate the performance gap among algorithms in various anatomical regions and the strengths and limitations of each algorithm. The comparative results highlight both the importance and the difficulty of neuron segmentation and provide clues for future improvement. To the best of our knowledge, this work is the first comprehensive study of neuron segmentation across such large-scale anatomical regions.
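The evaluation metrics named above are worth pinning down: given object-level counts of true positives, false positives and false negatives, precision, recall and the F-score follow directly. A minimal sketch:

```python
def f_score(tp, fp, fn):
    """Precision, recall and F1 from object-level counts of true positives,
    false positives and false negatives."""
    precision = tp / (tp + fp)          # fraction of detections that are correct
    recall = tp / (tp + fn)             # fraction of real objects that were found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

p, r, f1 = f_score(80, 20, 20)
print(p, r, f1)  # precision, recall and F1 all ≈ 0.8
```

The aggregated Jaccard index (AJI) extends this idea to pixel level, penalizing both missed objects and poorly overlapping masks, which is why it is stricter than the F-score.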
Affiliation(s)
- Zhenzhen You
- Shaanxi Key Laboratory for Network Computing and Security Technology, School of Computer Science and Engineering, Xi'an University of Technology, Xi'an, China; CEA-CNRS-UMR 9199, Laboratoire des Maladies Neurodégénératives, MIRCen, Fontenay-aux-Roses, Université Paris-Saclay, Gif-sur-Yvette, France
- Ming Jiang
- National Laboratory of Radar Signal Processing, Xidian University, Xi'an, China
- Zhenghao Shi
- Shaanxi Key Laboratory for Network Computing and Security Technology, School of Computer Science and Engineering, Xi'an University of Technology, Xi'an, China
- Xiaojuan Ning
- Shaanxi Key Laboratory for Network Computing and Security Technology, School of Computer Science and Engineering, Xi'an University of Technology, Xi'an, China
- Cheng Shi
- Shaanxi Key Laboratory for Network Computing and Security Technology, School of Computer Science and Engineering, Xi'an University of Technology, Xi'an, China
- Shuangli Du
- Shaanxi Key Laboratory for Network Computing and Security Technology, School of Computer Science and Engineering, Xi'an University of Technology, Xi'an, China
- Anne-Sophie Hérard
- CEA-CNRS-UMR 9199, Laboratoire des Maladies Neurodégénératives, MIRCen, Fontenay-aux-Roses, Université Paris-Saclay, Gif-sur-Yvette, France
- Caroline Jan
- CEA-CNRS-UMR 9199, Laboratoire des Maladies Neurodégénératives, MIRCen, Fontenay-aux-Roses, Université Paris-Saclay, Gif-sur-Yvette, France
- Nicolas Souedet
- CEA-CNRS-UMR 9199, Laboratoire des Maladies Neurodégénératives, MIRCen, Fontenay-aux-Roses, Université Paris-Saclay, Gif-sur-Yvette, France
- Thierry Delzescaux
- CEA-CNRS-UMR 9199, Laboratoire des Maladies Neurodégénératives, MIRCen, Fontenay-aux-Roses, Université Paris-Saclay, Gif-sur-Yvette, France
|
82
|
Salvi M, Molinari F, Iussich S, Muscatello LV, Pazzini L, Benali S, Banco B, Abramo F, De Maria R, Aresu L. Histopathological Classification of Canine Cutaneous Round Cell Tumors Using Deep Learning: A Multi-Center Study. Front Vet Sci 2021; 8:640944. [PMID: 33869320 PMCID: PMC8044886 DOI: 10.3389/fvets.2021.640944] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2020] [Accepted: 03/08/2021] [Indexed: 01/12/2023] Open
Abstract
Canine cutaneous round cell tumors (RCT) represent one of the routine diagnostic challenges for veterinary pathologists. Computer-aided approaches have been developed to overcome such challenges and to increase the accuracy and consistency of diagnosis; these systems are also highly beneficial in reducing errors when a large number of cases are screened daily. In this study we describe ARCTA (Automated Round Cell Tumors Assessment), a fully automated algorithm for cutaneous RCT classification and mast cell tumor grading in canine histopathological images. ARCTA employs a deep learning strategy and was developed on 416 RCT images and 213 mast cell tumor images. In the test set, our algorithm exhibited excellent performance in both RCT classification (accuracy: 91.66%) and mast cell tumor grading (accuracy: 100%). Misdiagnoses were encountered for histiocytomas in the training set and for melanomas in the test set. For mast cell tumors, underestimation by one grade was observed in the training set, but not in the test set. To the best of our knowledge, the proposed model is the first fully automated algorithm for histological images specifically developed for veterinary medicine. Being very fast (average computational time 2.63 s), this algorithm paves the way for an automated and effective evaluation of canine tumors.
Affiliation(s)
- Massimo Salvi
- PoliToBIOMed Lab, Biolab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- Filippo Molinari
- PoliToBIOMed Lab, Biolab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- Selina Iussich
- Department of Veterinary Sciences, University of Turin, Turin, Italy
- Luisa Vera Muscatello
- Department of Veterinary Medical Sciences, University of Bologna, Bologna, Italy; MyLav-Laboratorio La Vallonea, Milan, Italy
- Francesca Abramo
- Department of Veterinary Sciences, University of Pisa, Pisa, Italy
- Luca Aresu
- Department of Veterinary Sciences, University of Turin, Turin, Italy
|
83
|
Dabass M, Vashisth S, Vig R. Attention-Guided deep atrous-residual U-Net architecture for automated gland segmentation in colon histopathology images. Inform Med Unlocked 2021. [DOI: 10.1016/j.imu.2021.100784] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/31/2022] Open
|
84
|
AIM in Surgical Pathology. Artif Intell Med 2021. [DOI: 10.1007/978-3-030-58080-3_278-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
|