1
White BS, Woo XY, Koc S, Sheridan T, Neuhauser SB, Wang S, Evrard YA, Chen L, Foroughi pour A, Landua JD, Mashl RJ, Davies SR, Fang B, Rosa MG, Evans KW, Bailey MH, Chen Y, Xiao M, Rubinstein JC, Sanderson BJ, Lloyd MW, Domanskyi S, Dobrolecki LE, Fujita M, Fujimoto J, Xiao G, Fields RC, Mudd JL, Xu X, Hollingshead MG, Jiwani S, Acevedo S, Davis-Dusenbery BN, Robinson PN, Moscow JA, Doroshow JH, Mitsiades N, Kaochar S, Pan CX, Carvajal-Carmona LG, Welm AL, Welm BE, Govindan R, Li S, Davies MA, Roth JA, Meric-Bernstam F, Xie Y, Herlyn M, Ding L, Lewis MT, Bult CJ, Dean DA, Chuang JH. A Pan-Cancer Patient-Derived Xenograft Histology Image Repository with Genomic and Pathologic Annotations Enables Deep Learning Analysis. Cancer Res 2024;84:2060-2072. PMID: 39082680; PMCID: PMC11217732; DOI: 10.1158/0008-5472.can-23-1349.
Abstract
Patient-derived xenografts (PDX) model human intra- and intertumoral heterogeneity in the context of the intact tissue of immunocompromised mice. Histologic imaging via hematoxylin and eosin (H&E) staining is routinely performed on PDX samples, which could be harnessed for computational analysis. Prior studies of large clinical H&E image repositories have shown that deep learning analysis can identify intercellular and morphologic signals correlated with disease phenotype and therapeutic response. In this study, we developed an extensive, pan-cancer repository of >1,000 PDX and paired parental tumor H&E images. These images, curated from the PDX Development and Trial Centers Research Network Consortium, had a range of associated genomic and transcriptomic data, clinical metadata, pathologic assessments of cell composition, and, in several cases, detailed pathologic annotations of neoplastic, stromal, and necrotic regions. The amenability of these images to deep learning was highlighted through three applications: (i) development of a classifier for neoplastic, stromal, and necrotic regions; (ii) development of a predictor of xenograft-transplant lymphoproliferative disorder; and (iii) application of a published predictor of microsatellite instability. Together, this PDX Development and Trial Centers Research Network image repository provides a valuable resource for controlled digital pathology analysis, both for the evaluation of technical issues and for the development of computational image-based methods that make clinical predictions based on PDX treatment studies. Significance: A pan-cancer repository of >1,000 patient-derived xenograft hematoxylin and eosin-stained images will facilitate cancer biology investigations through histopathologic analysis and contributes important model system data that expand existing human histology repositories.
Affiliation(s)
- Brian S. White
- The Jackson Laboratory for Genomic Medicine, Farmington, Connecticut.
- Xing Yi Woo
- The Jackson Laboratory for Genomic Medicine, Farmington, Connecticut.
- Bioinformatics Institute (BII), Agency for Science, Technology and Research (A*STAR), Singapore, Singapore.
- Soner Koc
- Velsera, Charlestown, Massachusetts.
- Todd Sheridan
- The Jackson Laboratory for Genomic Medicine, Farmington, Connecticut.
- Shidan Wang
- University of Texas Southwestern Medical Center, Dallas, Texas.
- Yvonne A. Evrard
- Leidos Biomedical Research Inc., Frederick National Laboratory for Cancer Research, Frederick, Maryland.
- Li Chen
- Leidos Biomedical Research Inc., Frederick National Laboratory for Cancer Research, Frederick, Maryland.
- Ali Foroughi pour
- The Jackson Laboratory for Genomic Medicine, Farmington, Connecticut.
- R. Jay Mashl
- Washington University School of Medicine, St. Louis, Missouri.
- Bingliang Fang
- University of Texas MD Anderson Cancer Center, Houston, Texas.
- Kurt W. Evans
- University of Texas MD Anderson Cancer Center, Houston, Texas.
- Matthew H. Bailey
- Simmons Center for Cancer Research, Brigham Young University, Provo, Utah.
- Yeqing Chen
- The Wistar Institute, Philadelphia, Pennsylvania.
- Min Xiao
- The Wistar Institute, Philadelphia, Pennsylvania.
- Sergii Domanskyi
- The Jackson Laboratory for Genomic Medicine, Farmington, Connecticut.
- Maihi Fujita
- Huntsman Cancer Institute, University of Utah, Salt Lake City, Utah.
- Junya Fujimoto
- University of Texas MD Anderson Cancer Center, Houston, Texas.
- Guanghua Xiao
- University of Texas Southwestern Medical Center, Dallas, Texas.
- Ryan C. Fields
- Washington University School of Medicine, St. Louis, Missouri.
- Xiaowei Xu
- The Wistar Institute, Philadelphia, Pennsylvania.
- Shahanawaz Jiwani
- Leidos Biomedical Research Inc., Frederick National Laboratory for Cancer Research, Frederick, Maryland.
- Peter N. Robinson
- The Jackson Laboratory for Genomic Medicine, Farmington, Connecticut.
- Alana L. Welm
- Huntsman Cancer Institute, University of Utah, Salt Lake City, Utah.
- Bryan E. Welm
- Huntsman Cancer Institute, University of Utah, Salt Lake City, Utah.
- Shunqiang Li
- Washington University School of Medicine, St. Louis, Missouri.
- Jack A. Roth
- University of Texas MD Anderson Cancer Center, Houston, Texas.
- Yang Xie
- University of Texas Southwestern Medical Center, Dallas, Texas.
- Li Ding
- Washington University School of Medicine, St. Louis, Missouri.
- Jeffrey H. Chuang
- The Jackson Laboratory for Genomic Medicine, Farmington, Connecticut.
2
Ruiz-Casado JL, Molina-Cabello MA, Luque-Baena RM. Enhancing Histopathological Image Classification Performance through Synthetic Data Generation with Generative Adversarial Networks. Sensors (Basel) 2024;24:3777. PMID: 38931561; PMCID: PMC11207853; DOI: 10.3390/s24123777.
Abstract
Breast cancer is the second most common cancer worldwide, primarily affecting women, and histopathological image analysis is one of the possible methods used to determine tumor malignancy. In image analysis, the application of deep learning has become increasingly prevalent in recent years. However, a significant issue is the unbalanced nature of available datasets, with some classes having more images than others, which may impact model performance due to poorer generalizability. A possible strategy to avoid this problem is downsampling the class with the most images to create a balanced dataset. Nevertheless, this approach is not recommended for small datasets, as it can lead to poor model performance. Instead, techniques such as data augmentation are traditionally used to address this issue. These techniques apply simple transformations such as translation or rotation to the images to increase variability in the dataset. Another possibility is using generative adversarial networks (GANs), which can generate images from a relatively small training set. This work aims to enhance model performance in classifying histopathological images by applying data augmentation using GANs instead of traditional techniques.
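The balancing loop this abstract describes, topping up minority classes with generator output, can be sketched as follows; the trained GAN is stubbed with a noise-image placeholder (`fake_generator` is purely illustrative, not the paper's model):

```python
import numpy as np

def balance_with_generator(images_by_class, generator, rng):
    """Top up every minority class to the majority-class size with
    synthetic samples drawn from a (pre-trained) class-conditional generator."""
    target = max(len(v) for v in images_by_class.values())
    balanced = {}
    for label, images in images_by_class.items():
        deficit = target - len(images)
        synthetic = [generator(label, rng) for _ in range(deficit)]
        balanced[label] = list(images) + synthetic
    return balanced

# Stand-in generator: random noise images; a trained GAN would go here.
def fake_generator(label, rng):
    return rng.random((64, 64))

rng = np.random.default_rng(0)
data = {"benign": [rng.random((64, 64)) for _ in range(40)],
        "malignant": [rng.random((64, 64)) for _ in range(10)]}
balanced = balance_with_generator(data, fake_generator, rng)
print({k: len(v) for k, v in balanced.items()})  # both classes now have 40
```

The same loop works unchanged whether the synthetic images come from simple transformations or a GAN; only the `generator` callable differs.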
Affiliation(s)
- Jose L. Ruiz-Casado
- ITIS Software, University of Málaga, C/ Arquitecto Francisco Peñalosa, 18, 29010 Malaga, Spain
- Miguel A. Molina-Cabello
- ITIS Software, University of Málaga, C/ Arquitecto Francisco Peñalosa, 18, 29010 Malaga, Spain
- Instituto de Investigación Biomédica de Málaga y Plataforma en Nanomedicina-IBIMA Plataforma BIONAND, Avenida Severo Ochoa, 35, 29590 Malaga, Spain
- Rafael M. Luque-Baena
- ITIS Software, University of Málaga, C/ Arquitecto Francisco Peñalosa, 18, 29010 Malaga, Spain
- Instituto de Investigación Biomédica de Málaga y Plataforma en Nanomedicina-IBIMA Plataforma BIONAND, Avenida Severo Ochoa, 35, 29590 Malaga, Spain
3
Agbley BLY, Li JP, Haq AU, Bankas EK, Mawuli CB, Ahmad S, Khan S, Khan AR. Federated Fusion of Magnified Histopathological Images for Breast Tumor Classification in the Internet of Medical Things. IEEE J Biomed Health Inform 2024;28:3389-3400. PMID: 37028353; DOI: 10.1109/jbhi.2023.3256974.
Abstract
Breast tumor detection and classification on the Internet of Medical Things (IoMT) can be automated with the potential of Artificial Intelligence (AI). Deep learning models rely on large datasets; however, challenges arise when dealing with sensitive medical data. Restrictions on sharing these data result in limited publicly available datasets, thereby impacting the performance of deep learning models. To address this issue, we propose an approach that combines different magnification factors of histopathological images using a residual network and information fusion in Federated Learning (FL). FL is employed to preserve the privacy of patient data while enabling the creation of a global model. Using the BreakHis dataset, we compare the performance of FL with centralized learning (CL). We also performed visualizations for explainable AI. The final models become available for deployment on internal IoMT systems in healthcare institutions for timely diagnosis and treatment. Our results demonstrate that the proposed approach outperforms existing works in the literature on multiple metrics.
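The privacy-preserving aggregation at the heart of this FL setup can be illustrated with a minimal FedAvg-style sketch in NumPy; the paper's residual network and magnification fusion are not reproduced, and the layer lists below merely stand in for real model parameters:

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """FedAvg: aggregate per-client model parameters into a global model,
    weighting each client by the size of its local dataset. Raw patient
    data never leaves the client; only parameters are shared."""
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    global_model = []
    for layer in range(n_layers):
        acc = sum(w[layer] * (n / total)
                  for w, n in zip(client_weights, client_sizes))
        global_model.append(acc)
    return global_model

# Two hospitals with different amounts of local data.
client_a = [np.array([1.0, 1.0])]   # one "layer" of parameters
client_b = [np.array([3.0, 3.0])]
global_model = fed_avg([client_a, client_b], client_sizes=[100, 300])
print(global_model[0])  # [2.5 2.5]
```

In a full round, the server would broadcast `global_model` back to the clients, which train locally and return updated parameters for the next aggregation.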
4
Choukali MA, Amirani MC, Valizadeh M, Abbasi A, Komeili M. Pseudo-class part prototype networks for interpretable breast cancer classification. Sci Rep 2024;14:10341. PMID: 38710757; PMCID: PMC11074258; DOI: 10.1038/s41598-024-60743-x.
Abstract
Interpretability in machine learning has become increasingly important as machine learning is used in more and more applications, including those with high-stakes consequences such as healthcare, where interpretability has been regarded as a key to the successful adoption of machine learning models. However, the use of confounding or irrelevant information by deep learning models, even interpretable ones, poses critical challenges to their clinical acceptance. This has recently drawn researchers' attention to issues beyond the mere interpretation of deep learning models. In this paper, we first investigate the application of an inherently interpretable prototype-based architecture, known as ProtoPNet, to breast cancer classification in digital pathology and highlight its shortcomings in this application. We then propose a new method that uses more medically relevant information and makes more accurate and interpretable predictions. Our method leverages the clustering concept and implicitly increases the number of classes in the training dataset. The proposed method learns more relevant prototypes without any pixel-level annotated data. For a more holistic assessment, in addition to classification accuracy, we define a new metric for assessing the degree of interpretability based on the comments of a group of skilled pathologists. Experimental results on the BreakHis dataset show that the proposed method improves classification accuracy and interpretability by 8% and 18%, respectively. The proposed method can therefore be seen as a step toward implementing interpretable deep learning models for the detection of breast cancer using histopathology images.
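The core relabeling idea, implicitly enlarging the label set by clustering within each class, might be sketched like this (a toy k-means; the function names are ours, not the paper's):

```python
import numpy as np

def kmeans(x, k, iters=20, seed=0):
    """Tiny k-means, enough to split a class into sub-clusters."""
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((x[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([x[labels == j].mean(axis=0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return labels

def pseudo_class_labels(features, labels, clusters_per_class=2):
    """Split each class into sub-clusters; the (class, cluster) pairs act as
    pseudo-classes, implicitly increasing the number of training classes."""
    pseudo = np.empty(len(labels), dtype=object)
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        sub = kmeans(features[idx], clusters_per_class)
        for i, s in zip(idx, sub):
            pseudo[i] = (int(c), int(s))
    return pseudo

# Two classes, each containing two visually distinct sub-groups.
features = np.array([[0.0], [0.1], [10.0], [10.1], [20.0], [20.1], [21.0], [21.1]])
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])
pseudo = pseudo_class_labels(features, labels)
print(len(set(pseudo)))  # 4 pseudo-classes from 2 original classes
```

Training prototypes against the pseudo-classes, then collapsing predictions back to the original labels, is what lets the network learn one prototype per visual sub-group rather than one blurry prototype per class.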
Affiliation(s)
- Mehdi Chehel Amirani
- Department of Electrical and Computer Engineering, Urmia University, Urmia, Iran.
- Morteza Valizadeh
- Department of Electrical and Computer Engineering, Urmia University, Urmia, Iran.
- Ata Abbasi
- Cellular and Molecular Research Center, Cellular and Molecular Medicine Research Institute, Urmia University of Medical Sciences, Urmia, Iran.
- Department of Pathology, Faculty of Medicine, Urmia University of Medical Sciences, Urmia, Iran.
- Majid Komeili
- School of Computer Science, Carleton University, Ottawa, Canada.
5
McCaffrey C, Jahangir C, Murphy C, Burke C, Gallagher WM, Rahman A. Artificial intelligence in digital histopathology for predicting patient prognosis and treatment efficacy in breast cancer. Expert Rev Mol Diagn 2024;24:363-377. PMID: 38655907; DOI: 10.1080/14737159.2024.2346545.
Abstract
INTRODUCTION: Histological images contain phenotypic information predictive of patient outcomes. Due to the heavy workload of pathologists, the time-consuming nature of quantitatively assessing histological features, and the human eye's limitations in recognizing spatial patterns, manually extracting prognostic information in routine pathological workflows remains challenging. Digital pathology has facilitated the mining and quantification of these features utilizing whole-slide image (WSI) scanners and artificial intelligence (AI) algorithms. AI algorithms that identify image-based biomarkers from the tumor microenvironment (TME) have the potential to revolutionize the field of oncology, reducing delays between diagnosis and prognosis determination and allowing rapid stratification of patients and prescription of optimal treatment regimens, thereby improving patient outcomes.
AREAS COVERED: In this review, the authors discuss how AI algorithms and digital pathology can predict breast cancer patient prognosis and treatment outcomes using image-based biomarkers, along with the challenges of adopting this technology in clinical settings.
EXPERT OPINION: The integration of AI and digital pathology presents significant potential for analyzing the TME and its diagnostic, prognostic, and predictive value in breast cancer patients. Widespread clinical adoption of AI faces ethical, regulatory, and technical challenges, although prospective trials may offer reassurance and promote uptake, ultimately improving patient outcomes by reducing diagnosis-to-prognosis delivery delays.
Affiliation(s)
- Christine McCaffrey
- UCD School of Biomolecular and Biomedical Science, UCD Conway Institute, University College Dublin, Dublin, Ireland
- Chowdhury Jahangir
- UCD School of Biomolecular and Biomedical Science, UCD Conway Institute, University College Dublin, Dublin, Ireland
- Clodagh Murphy
- UCD School of Biomolecular and Biomedical Science, UCD Conway Institute, University College Dublin, Dublin, Ireland
- Caoimbhe Burke
- UCD School of Biomolecular and Biomedical Science, UCD Conway Institute, University College Dublin, Dublin, Ireland
- William M Gallagher
- UCD School of Biomolecular and Biomedical Science, UCD Conway Institute, University College Dublin, Dublin, Ireland
- Arman Rahman
- UCD School of Medicine, UCD Conway Institute, University College Dublin, Dublin, Ireland
6
Asif S, Zhao M, Li Y, Tang F, Zhu Y. CGO-ensemble: Chaos game optimization algorithm-based fusion of deep neural networks for accurate Mpox detection. Neural Netw 2024;173:106183. PMID: 38382397; DOI: 10.1016/j.neunet.2024.106183.
Abstract
The rising global incidence of human Mpox cases necessitates prompt and accurate identification for effective disease control. Whereas previous studies have predominantly relied on traditional ensemble methods for detection, we introduce a novel approach by leveraging a metaheuristic-based ensemble framework. In this research, we present an innovative CGO-Ensemble framework designed to elevate the accuracy of detecting Mpox infection in patients. Initially, we employ five transfer learning base models that integrate feature integration layers and residual blocks. These components play a crucial role in capturing significant features from the skin images, thereby enhancing the models' efficacy. In the next step, we employ a weighted averaging scheme to consolidate predictions generated by the distinct models. To achieve the optimal allocation of weights for each base model in the ensemble process, we leverage the Chaos Game Optimization (CGO) algorithm. This strategic weight assignment enhances classification outcomes considerably, surpassing the performance of randomly assigned weights. Implementing this approach yields notably enhanced prediction accuracy compared to using individual models. We evaluate the effectiveness of our proposed approach through comprehensive experiments conducted on two widely recognized benchmark datasets: the Mpox Skin Lesion Dataset (MSLD) and the Mpox Skin Image Dataset (MSID). To gain insights into the decision-making process of the base models, we performed Gradient Class Activation Mapping (Grad-CAM) analysis. The experimental results showcase the outstanding performance of the CGO-Ensemble, achieving an impressive accuracy of 100% on MSLD and 94.16% on MSID. Our approach significantly outperforms other state-of-the-art optimization algorithms, traditional ensemble methods, and existing techniques in the context of Mpox detection on these datasets. These findings underscore the effectiveness and superiority of the CGO-Ensemble in accurately identifying Mpox cases, highlighting its potential in disease detection and classification.
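The weighted-averaging fusion step can be sketched as below; for brevity, a seeded random search over the weight simplex stands in for Chaos Game Optimization, so this illustrates the objective being optimized rather than CGO itself:

```python
import numpy as np

def ensemble_accuracy(weights, probs, y):
    """Accuracy of the weighted average of per-model class probabilities."""
    fused = np.tensordot(weights, probs, axes=1)  # (n_samples, n_classes)
    return (fused.argmax(axis=1) == y).mean()

def search_weights(probs, y, trials=500, seed=0):
    """Random search over the simplex as a stand-in for CGO; the uniform
    weighting is evaluated first so the result is never worse than it."""
    rng = np.random.default_rng(seed)
    best_w = np.full(probs.shape[0], 1.0 / probs.shape[0])
    best_acc = ensemble_accuracy(best_w, probs, y)
    for _ in range(trials):
        w = rng.dirichlet(np.ones(probs.shape[0]))
        acc = ensemble_accuracy(w, probs, y)
        if acc > best_acc:
            best_w, best_acc = w, acc
    return best_w, best_acc

# Toy validation set: model 0 is weakly right, model 1 is confidently wrong.
y = np.array([0, 1, 0, 1, 1, 0])
m0 = np.eye(2)[y] * 0.6 + 0.2        # correct class gets 0.8
m1 = np.eye(2)[1 - y] * 0.8 + 0.1    # wrong class gets 0.9
probs = np.stack([m0, m1])           # (n_models, n_samples, n_classes)
w, acc = search_weights(probs, y)
```

Here uniform weights let the confidently wrong model dominate (accuracy 0), while the search learns to down-weight it; CGO plays the same role over real validation predictions.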
Affiliation(s)
- Sohaib Asif
- School of Computer Science and Engineering, Central South University, Changsha, China.
- Ming Zhao
- School of Computer Science and Engineering, Central South University, Changsha, China.
- Yangfan Li
- School of Computer Science and Engineering, Central South University, Changsha, China.
- Fengxiao Tang
- School of Computer Science and Engineering, Central South University, Changsha, China.
- Yusen Zhu
- School of Mathematics, Hunan University, Changsha, China.
7
Li J, Cheng J, Meng L, Yan H, He Y, Shi H, Guan T, Han A. DeepTree: Pathological Image Classification Through Imitating Tree-Like Strategies of Pathologists. IEEE Trans Med Imaging 2024;43:1501-1512. PMID: 38090840; DOI: 10.1109/tmi.2023.3341846.
Abstract
Digitization of pathological slides has promoted research in computer-aided diagnosis, in which artificial intelligence analysis of pathological images deserves attention. Deep learning techniques proven on natural images have been extended to computational pathology. Still, they seldom take into account prior knowledge in pathology, especially pathologists' process of analyzing lesion morphology. Inspired by the diagnostic decisions of pathologists, we design a novel deep learning architecture based on tree-like strategies called DeepTree. It imitates pathological diagnosis methods, designed as a binary tree structure, to conditionally learn correlations among tissue morphologies, and optimizes branches to further finetune performance. To validate and benchmark DeepTree, we build a dataset of frozen lung cancer tissues and design experiments on a public dataset of breast tumor subtypes and our dataset. Results show that the deep learning architecture based on tree-like strategies makes pathological image classification more accurate, transparent, and convincing. Simultaneously, prior knowledge based on diagnostic strategies yields superior representation ability compared to alternative methods. Our proposed methodology helps improve pathologists' trust in artificial intelligence analysis and promotes the practical clinical application of pathology-assisted diagnosis.
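The tree-like, coarse-to-fine decision strategy can be mimicked with a small binary tree of predicates; this is a hypothetical structure for intuition only, since in the paper the nodes are learned networks rather than hand-written rules:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Node:
    """Internal nodes apply a binary predicate; leaves carry a final label."""
    label: Optional[str] = None
    predicate: Optional[Callable] = None   # sample -> bool
    if_true: Optional["Node"] = None
    if_false: Optional["Node"] = None

    def classify(self, sample):
        if self.label is not None:
            return self.label
        branch = self.if_true if self.predicate(sample) else self.if_false
        return branch.classify(sample)

# Toy hierarchy imitating a pathologist's coarse-to-fine decisions.
tree = Node(
    predicate=lambda s: s["is_neoplastic"],
    if_true=Node(
        predicate=lambda s: s["is_invasive"],
        if_true=Node(label="invasive carcinoma"),
        if_false=Node(label="carcinoma in situ"),
    ),
    if_false=Node(label="benign tissue"),
)
print(tree.classify({"is_neoplastic": True, "is_invasive": False}))
```

Because every sample's path through the tree is explicit, the prediction comes with a readable chain of intermediate decisions, which is the transparency argument the abstract makes.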
8
Yang F, Xu Z, Wang H, Sun L, Zhai M, Zhang J. A hybrid feature selection algorithm combining information gain and grouping particle swarm optimization for cancer diagnosis. PLoS One 2024;19:e0290332. PMID: 38466662; PMCID: PMC10927139; DOI: 10.1371/journal.pone.0290332.
Abstract
BACKGROUND: Cancer diagnosis based on machine learning has become a popular application direction. Support vector machine (SVM), as a classical machine learning algorithm, has been widely used in cancer diagnosis because of its advantages on high-dimensional, small-sample data. However, due to the high-dimensional feature space and high feature redundancy of gene expression data, SVM suffers from poor classification performance on such data.
METHODS: This paper proposes a hybrid feature selection algorithm combining information gain and grouping particle swarm optimization (IG-GPSO). The algorithm first calculates the information gain of each feature and ranks the features in descending order by that value. The ranked features are then grouped according to the information index, so that features within a group are close and features across groups are sparse. Finally, the grouped features are searched using grouping PSO and evaluated according to in-group and out-group criteria.
RESULTS: Experimental results show that the average accuracy (ACC) of the SVM on the feature subset selected by IG-GPSO is 98.50%, significantly better than traditional feature selection algorithms. Compared with KNN, the classification performance of the feature subset selected by IG-GPSO remains optimal. In addition, multiple comparison tests show that the feature selection effect of IG-GPSO is significantly better than that of traditional feature selection algorithms.
CONCLUSION: The feature subset selected by IG-GPSO not only has the best classification performance but also the smallest feature scale (FS). More importantly, IG-GPSO significantly improves the ACC of SVM in cancer diagnosis.
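The first stage of IG-GPSO, ranking features by information gain, is straightforward to sketch for discrete features (the grouping and PSO stages are omitted here):

```python
import math
from collections import Counter

def entropy(values):
    """Shannon entropy H(Y) in bits of a list of discrete values."""
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in Counter(values).values())

def information_gain(feature, labels):
    """IG(Y; X) = H(Y) - sum_v P(X=v) * H(Y | X=v)."""
    n = len(labels)
    cond = 0.0
    for v in set(feature):
        subset = [y for x, y in zip(feature, labels) if x == v]
        cond += (len(subset) / n) * entropy(subset)
    return entropy(labels) - cond

def rank_features(features, labels):
    """Feature indices sorted by descending information gain."""
    gains = [information_gain(col, labels) for col in features]
    return sorted(range(len(features)), key=lambda i: gains[i], reverse=True)

labels      = [0, 0, 1, 1]
informative = [0, 0, 1, 1]   # predicts the label exactly: IG = H(Y) = 1 bit
irrelevant  = [0, 1, 0, 1]   # independent of the label: IG = 0
print(rank_features([irrelevant, informative], labels))  # [1, 0]
```

Continuous gene-expression values would first be discretized (or handled with a differential-entropy estimator); the descending ranking is what the subsequent grouping stage consumes.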
Affiliation(s)
- Fangyuan Yang
- Department of Gynecologic Oncology, The First Affiliated Hospital of Henan Polytechnic University, Jiaozuo, Henan, China
- Zhaozhao Xu
- School of Computer Science and Technology, Henan Polytechnic University, Jiaozuo, Henan, China
- Hong Wang
- Department of Gynecologic Oncology, The First Affiliated Hospital of Henan Polytechnic University, Jiaozuo, Henan, China
- Lisha Sun
- Department of Gynecologic Oncology, The First Affiliated Hospital of Henan Polytechnic University, Jiaozuo, Henan, China
- Mengjiao Zhai
- Department of Gynecologic Oncology, The First Affiliated Hospital of Henan Polytechnic University, Jiaozuo, Henan, China
- Juan Zhang
- Department of Gynecologic Oncology, The First Affiliated Hospital of Henan Polytechnic University, Jiaozuo, Henan, China
9
Yao J, Han L, Guo G, Zheng Z, Cong R, Huang X, Ding J, Yang K, Zhang D, Han J. Position-based anchor optimization for point supervised dense nuclei detection. Neural Netw 2024;171:159-170. PMID: 38091760; DOI: 10.1016/j.neunet.2023.12.006.
Abstract
Nuclei detection is one of the most fundamental and challenging problems in histopathological image analysis; it localizes nuclei to support effective computer-aided cancer diagnosis, treatment decisions, and prognosis. A fully-supervised nuclei detector requires a large number of nuclei annotations on high-resolution digital images, which is time-consuming and needs human annotators with professional knowledge. In recent years, weakly-supervised learning has attracted significant attention for reducing the labeling burden. However, detecting dense nuclei with complex crowded distributions and diverse appearances remains a challenge. To solve this problem, we propose a novel point-supervised dense nuclei detection framework that introduces position-based anchor optimization to complement morphology-based pseudo-label supervision. Specifically, we first generate cellular-level pseudo labels (CPL) for the detection head via a morphology-based mechanism, which helps to build a baseline point-supervised detection network. Then, considering the crowded distribution of dense nuclei, we propose a mechanism called Position-based Anchor-quality Estimation (PAE), which utilizes the positional deviation between an anchor and its corresponding point label to suppress low-quality detections far from each nucleus. Finally, to better handle the diverse appearances of nuclei, an Adaptive Anchor Selector (AAS) operation is proposed to automatically select positive and negative anchors according to the morphological and positional statistical characteristics of nuclei. We conduct comprehensive experiments on two widely used benchmarks, MO and Lizard, using ResNet50 and PVTv2 as backbones. The results demonstrate that the proposed approach has superior capacity compared with other state-of-the-art methods. In particular, in dense nuclei scenarios, our method can achieve 95.1% of the performance of the fully-supervised approach. The code is available at https://github.com/NucleiDet/DenseNucleiDet.
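The positional idea behind PAE, suppressing anchors that deviate far from their nearest point label, can be sketched with a simple linear-decay quality score; this scoring form is our illustration, not the paper's exact estimator:

```python
import numpy as np

def anchor_quality(anchors, points, radius):
    """Score each anchor by its deviation from the nearest point label:
    1.0 at the point itself, decaying linearly to 0 at `radius` and beyond,
    so detections far from every nucleus are suppressed."""
    d = np.linalg.norm(anchors[:, None, :] - points[None, :, :], axis=-1)
    nearest = d.min(axis=1)
    return np.clip(1.0 - nearest / radius, 0.0, 1.0)

points = np.array([[10.0, 10.0], [30.0, 30.0]])            # nuclei point labels
anchors = np.array([[10.0, 11.0], [30.0, 30.0], [50.0, 50.0]])
q = anchor_quality(anchors, points, radius=8.0)
keep = q > 0.5   # drop low-quality detections far from any nucleus
```

In the full framework this positional score is combined with morphological cues (the AAS stage) rather than used as a hard threshold.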
Affiliation(s)
- Jieru Yao
- Brain and Artificial Intelligence Lab, School of Automation, Northwestern Polytechnical University, Xi'an, Shaanxi, 710072, China
- Longfei Han
- School of Computer Science, Beijing Technology and Business University, Beijing, 100048, China; Hefei Comprehensive National Science Center, Hefei, Anhui, 230088, China
- Guangyu Guo
- Brain and Artificial Intelligence Lab, School of Automation, Northwestern Polytechnical University, Xi'an, Shaanxi, 710072, China
- Zhaohui Zheng
- Department of Clinical Immunology, Xijing Hospital, The Fourth Military Medical University, Xi'an, Shaanxi, 710032, China
- Runmin Cong
- School of Control Science and Engineering, Shandong University, Jinan, Shandong, 250100, China
- Xiankai Huang
- Beijing Technology and Business University, Beijing, 100048, China
- Jin Ding
- Department of Clinical Immunology, Xijing Hospital, The Fourth Military Medical University, Xi'an, Shaanxi, 710032, China
- Kaihui Yang
- School of Software, Nanchang University, Nanchang, Jiangxi, 330031, China
- Dingwen Zhang
- Brain and Artificial Intelligence Lab, School of Automation, Northwestern Polytechnical University, Xi'an, Shaanxi, 710072, China; Hefei Comprehensive National Science Center, Hefei, Anhui, 230088, China; Department of Clinical Immunology, Xijing Hospital, The Fourth Military Medical University, Xi'an, Shaanxi, 710032, China
- Junwei Han
- Hefei Comprehensive National Science Center, Hefei, Anhui, 230088, China
10
Fernández-Aranzamendi EG, Castillo-Araníbar PR, San Román Castillo EG, Oller BS, Ventura-Zaa L, Eguiluz-Rodriguez G, González-Posadas V, Segovia-Vargas D. Dielectric Characterization of Ex-Vivo Breast Tissues: Differentiation of Tumor Types through Permittivity Measurements. Cancers (Basel) 2024;16:793. PMID: 38398184; PMCID: PMC10886458; DOI: 10.3390/cancers16040793.
Abstract
Early analysis and diagnosis of breast tumors is essential both for quickly launching treatment and for monitoring the evolution of patients who, for instance, have already undergone chemotherapy. Once tissues are excised, histological analysis is the most frequent tool used to characterize benign or malignant tumors. Dielectric microwave spectroscopy makes use of an open-ended coaxial probe in the 1-8 GHz frequency range to quickly identify the type of tumor (ductal carcinoma, lobular carcinoma, mucinous carcinoma, and fibroadenoma). The experiment was undertaken with data from 70 patients who had already undergone chemotherapy, which made it possible to map the histological tissue types to their electric permittivity. The variations in the permittivity of the different tumor types reveal distinctive patterns: benign tumors have permittivity values lower than 35, while malignant ones range between 40 and 60. For example, at a frequency of 2 GHz, the measured permittivity was 45.6 for ductal carcinoma, 33.1 for lobular carcinoma, 59.5 for mucinous carcinoma, and 27.6 for benign tumors. This differentiation remains consistent over the 1 to 4.5 GHz frequency range. These results highlight the effectiveness of these measurements in the classification of breast tumors, providing a valuable tool for quick and accurate diagnosis and effective treatment.
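The reported permittivity bands translate directly into a rule of thumb; the function below is an illustrative reading of the stated ranges, not the authors' classifier:

```python
def classify_permittivity(eps_r):
    """Rule of thumb from the reported 2 GHz relative-permittivity ranges:
    benign < 35, malignant 40-60; anything in between or above is left
    indeterminate rather than guessed."""
    if eps_r < 35:
        return "benign"
    if 40 <= eps_r <= 60:
        return "malignant"
    return "indeterminate"

# Values measured at 2 GHz in the study:
for name, eps in [("ductal carcinoma", 45.6),
                  ("mucinous carcinoma", 59.5),
                  ("benign tumor", 27.6)]:
    print(name, "->", classify_permittivity(eps))
```

Note that the study's lobular carcinoma value (33.1 at 2 GHz) falls below the stated malignant band, so a single-frequency threshold alone would misread it; the consistency of the separation across 1 to 4.5 GHz is what makes the measurement useful in practice.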
Affiliation(s)
- Elizabeth G. Fernández-Aranzamendi
- Department of Signal Theory and Communications, University Carlos III of Madrid, 28911 Madrid, Spain
- Departamento de Ingeniería Eléctrica y Electrónica, Universidad Católica San Pablo, Arequipa 04001, Peru
- Ebert G. San Román Castillo
- Department of Signal Theory and Communications, University Carlos III of Madrid, 28911 Madrid, Spain
- Belén S. Oller
- Department of Signal Theory and Communications, University Carlos III of Madrid, 28911 Madrid, Spain
- Luz Ventura-Zaa
- Department of Oncology Medicine, Regional Institute of Neoplastic Diseases, Arequipa 04002, Peru
- Gelber Eguiluz-Rodriguez
- Department of Oncology Medicine, Regional Institute of Neoplastic Diseases, Arequipa 04002, Peru
- Vicente González-Posadas
- Department of Signal Theory and Communications, University Carlos III of Madrid, 28911 Madrid, Spain
- Daniel Segovia-Vargas
- Department of Signal Theory and Communications, University Carlos III of Madrid, 28911 Madrid, Spain
11
Alzoubi I, Zhang L, Zheng Y, Loh C, Wang X, Graeber MB. PathoGraph: An Attention-Based Graph Neural Network Capable of Prognostication Based on CD276 Labelling of Malignant Glioma Cells. Cancers (Basel) 2024;16:750. PMID: 38398141; PMCID: PMC10886785; DOI: 10.3390/cancers16040750.
Abstract
Computerized methods have been developed that allow quantitative morphological analyses of whole slide images (WSIs), e.g., of immunohistochemical stains. The latter are attractive because they can provide high-resolution data on the distribution of proteins in tissue. However, many immunohistochemical results are complex because the protein of interest occurs in multiple locations (in different cells and also extracellularly). We have recently established an artificial intelligence framework, PathoFusion, which utilises a bifocal convolutional neural network (BCNN) model for detecting and counting arbitrarily definable morphological structures. We have now complemented this model with an attention-based graph neural network (abGCN) for the advanced analysis and automated interpretation of such data. Classical convolutional neural network (CNN) models suffer from limitations when handling global information. In contrast, our abGCN is capable of creating a graph representation of cellular detail from entire WSIs. This abGCN method combines attention learning with visualisation techniques that pinpoint the location of informative cells and highlight cell-cell interactions. We have analysed cellular labelling for CD276, a protein of great interest in cancer immunology and a potential marker of malignant glioma cells/putative glioma stem cells (GSCs). We are especially interested in the relationship between CD276 expression and prognosis. The graphs permit predicting individual patient survival on the basis of GSC community features. Our experiments lay a foundation for the use of the BCNN-abGCN tool chain in automated diagnostic prognostication using immunohistochemically labelled histological slides, but the method is essentially generic and potentially a widely usable tool in medical research and AI-based healthcare applications.
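The core operation behind attention learning on a cell graph, scoring each node (cell) and aggregating node features with softmax-normalized weights, can be sketched generically. This is an illustration of the general technique, not the authors' PathoGraph implementation; all names are hypothetical.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_pool(node_feats, attn_weights):
    """Score each node by a learned weight vector, softmax the scores,
    and return the attention-weighted sum of node feature vectors.
    Nodes with high attention dominate the graph-level embedding."""
    scores = [sum(w * f for w, f in zip(attn_weights, feats))
              for feats in node_feats]
    alphas = softmax(scores)
    dim = len(node_feats[0])
    return [sum(a * feats[d] for a, feats in zip(alphas, node_feats))
            for d in range(dim)]
```

The softmax weights are also what makes such models interpretable: they can be read back to highlight which cells the prediction relied on.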
Affiliation(s)
- Islam Alzoubi
- School of Computer Science, The University of Sydney, J12/1 Cleveland St, Darlington, Sydney, NSW 2008, Australia
- Lin Zhang
- School of Computer Science, The University of Sydney, J12/1 Cleveland St, Darlington, Sydney, NSW 2008, Australia
- Yuqi Zheng
- Ken Parker Brain Tumour Research Laboratories, Brain and Mind Centre, Faculty of Medicine and Health, University of Sydney, Camperdown, NSW 2050, Australia
- Christina Loh
- Ken Parker Brain Tumour Research Laboratories, Brain and Mind Centre, Faculty of Medicine and Health, University of Sydney, Camperdown, NSW 2050, Australia
- Xiuying Wang
- School of Computer Science, The University of Sydney, J12/1 Cleveland St, Darlington, Sydney, NSW 2008, Australia
- Manuel B. Graeber
- Ken Parker Brain Tumour Research Laboratories, Brain and Mind Centre, Faculty of Medicine and Health, University of Sydney, Camperdown, NSW 2050, Australia
- University of Sydney Association of Professors (USAP), University of Sydney, Sydney, NSW 2006, Australia
12
Ortiz S, Rojas-Valenzuela I, Rojas F, Valenzuela O, Herrera LJ, Rojas I. Novel methodology for detecting and localizing cancer area in histopathological images based on overlapping patches. Comput Biol Med 2024; 168:107713. [PMID: 38000243 DOI: 10.1016/j.compbiomed.2023.107713] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/25/2022] [Revised: 11/07/2023] [Accepted: 11/15/2023] [Indexed: 11/26/2023]
Abstract
Cancer is one of the most important pathologies in the world, causing millions of deaths, and in most cases a cure remains out of reach. Rapid spread is one of its most important features, so many efforts focus on early-stage detection and localization. Medicine has made numerous advances in recent decades with the help of artificial intelligence (AI), reducing costs and saving time. In this paper, deep learning (DL) models are used to present a novel method for detecting and localizing cancerous zones in whole slide images (WSIs), using overlapping tissue patches to improve performance. A novel overlapping methodology is proposed and discussed, together with different alternatives for evaluating the labels of patches that overlap the same zone, to improve detection performance. The goal is to strengthen the labeling of different areas of an image through multiple overlapping patch tests. The results show that the proposed method improves on the traditional framework and provides a different approach to cancer detection. The proposed method, based on applying 3×3, stride-2 average pooling filters to overlapping patch labels, corrects 12.9% of misclassified patches on the HUP dataset and 15.8% on the CINIJ dataset. In addition, a filter is implemented to correct isolated patches that were also misclassified. Finally, a CNN decision threshold study is performed to analyze the impact of the threshold value on the accuracy of the model. Altering the decision threshold, together with the isolated-patch filter and the proposed overlapping-patch method, corrects about 20% of the patches mislabeled by the traditional method. As a whole, the proposed method achieves an accuracy of 94.6%. The code is available at https://github.com/sergioortiz26/Cancer_overlapping_filter_WSI_images.
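The correction scheme described above, average pooling over overlapping patch labels, can be sketched as follows. This is a minimal illustration assuming a 2D grid of patch-level tumor probabilities; it is not the authors' released code (their GitHub repository holds that), and the function name is hypothetical.

```python
def avg_pool_labels(grid, k=3, stride=2):
    """Apply k×k average pooling with the given stride to a 2D grid of
    patch-level probabilities. Because windows overlap, the predictions of
    neighbouring patches can outvote an isolated misclassified patch."""
    h, w = len(grid), len(grid[0])
    out = []
    for i in range(0, h - k + 1, stride):
        row = []
        for j in range(0, w - k + 1, stride):
            window = [grid[i + di][j + dj]
                      for di in range(k) for dj in range(k)]
            row.append(sum(window) / (k * k))
        out.append(row)
    return out

# A lone 0.0 (misclassified patch) inside a tumor region of 1.0s:
grid = [[1.0] * 5 for _ in range(5)]
grid[2][2] = 0.0
smoothed = avg_pool_labels(grid)  # every pooled value is 8/9, i.e. > 0.5
```

Thresholding the smoothed values at 0.5 recovers a consistent tumor label for the whole region, which is the intuition behind the reported correction percentages.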
Affiliation(s)
- Sergio Ortiz
- Department of Computer Architecture and Technology, University of Granada, E.T.S. de Ingenierías Informática y de Telecomunicación, C/ Periodista Daniel Saucedo Aranda S/N, 18071 Granada, Spain
- Ignacio Rojas-Valenzuela
- Department of Computer Architecture and Technology, University of Granada, E.T.S. de Ingenierías Informática y de Telecomunicación, C/ Periodista Daniel Saucedo Aranda S/N, 18071 Granada, Spain
- Fernando Rojas
- Department of Computer Architecture and Technology, University of Granada, E.T.S. de Ingenierías Informática y de Telecomunicación, C/ Periodista Daniel Saucedo Aranda S/N, 18071 Granada, Spain
- Olga Valenzuela
- Department of Applied Mathematics, University of Granada, Facultad de Ciencias, Avenida de la Fuente Nueva S/N, 18071 Granada, Spain
- Luis Javier Herrera
- Department of Computer Architecture and Technology, University of Granada, E.T.S. de Ingenierías Informática y de Telecomunicación, C/ Periodista Daniel Saucedo Aranda S/N, 18071 Granada, Spain
- Ignacio Rojas
- Department of Computer Architecture and Technology, University of Granada, E.T.S. de Ingenierías Informática y de Telecomunicación, C/ Periodista Daniel Saucedo Aranda S/N, 18071 Granada, Spain
13
Priya C V L, V G B, B R V, Ramachandran S. Deep learning approaches for breast cancer detection in histopathology images: A review. Cancer Biomark 2024; 40:1-25. [PMID: 38517775 PMCID: PMC11191493 DOI: 10.3233/cbm-230251] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/24/2024]
Abstract
BACKGROUND Breast cancer is one of the leading causes of death in women worldwide. Histopathology analysis of breast tissue is an essential tool for diagnosing and staging breast cancer. In recent years, there has been a significant increase in research exploring the use of deep-learning approaches for breast cancer detection from histopathology images. OBJECTIVE To provide an overview of the current state-of-the-art technologies in automated breast cancer detection in histopathology images using deep learning techniques. METHODS This review focuses on the use of deep learning algorithms for the detection and classification of breast cancer from histopathology images. We provide an overview of publicly available histopathology image datasets for breast cancer detection. We also highlight the strengths and weaknesses of these architectures and their performance on different histopathology image datasets. Finally, we discuss the challenges associated with using deep learning techniques for breast cancer detection, including the need for large and diverse datasets and the interpretability of deep learning models. RESULTS Deep learning techniques have shown great promise in accurately detecting and classifying breast cancer from histopathology images. Although the accuracy levels vary depending on the specific dataset, image pre-processing techniques, and deep learning architecture used, these results highlight the potential of deep learning algorithms in improving the accuracy and efficiency of breast cancer detection from histopathology images. CONCLUSION This review has presented a thorough account of the current state-of-the-art techniques for detecting breast cancer using histopathology images. The integration of machine learning and deep learning algorithms has demonstrated promising results in accurately identifying breast cancer from histopathology images. The insights gathered from this review can act as a valuable reference for researchers in this field who are developing diagnostic strategies using histopathology images. Overall, the objective of this review is to spark interest among scholars in this complex field and acquaint them with cutting-edge technologies in breast cancer detection using histopathology images.
Affiliation(s)
- Lakshmi Priya C V
- Department of Electronics and Communication Engineering, College of Engineering Trivandrum, Kerala, India
- Biju V G
- Department of Electronics and Communication Engineering, College of Engineering Munnar, Kerala, India
- Vinod B R
- Department of Electronics and Communication Engineering, College of Engineering Trivandrum, Kerala, India
- Sivakumar Ramachandran
- Department of Electronics and Communication Engineering, Government Engineering College Wayanad, Kerala, India
14
Xing F, Yang X, Cornish TC, Ghosh D. Learning with limited target data to detect cells in cross-modality images. Med Image Anal 2023; 90:102969. [PMID: 37802010 DOI: 10.1016/j.media.2023.102969] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2023] [Revised: 08/16/2023] [Accepted: 09/11/2023] [Indexed: 10/08/2023]
Abstract
Deep neural networks have achieved excellent cell or nucleus quantification performance in microscopy images, but they often suffer from performance degradation when applied to cross-modality imaging data. Unsupervised domain adaptation (UDA) based on generative adversarial networks (GANs) has recently improved the performance of cross-modality medical image quantification. However, current GAN-based UDA methods typically require abundant target data for model training, which is often very expensive or even impossible to obtain for real applications. In this paper, we study a more realistic yet challenging UDA situation in which (unlabeled) target training data are limited, a setting that previous work on cell identification has seldom explored. We first enhance a dual GAN with task-specific modeling, which provides additional supervision signals to assist with generator learning. We explore both single-directional and bidirectional task-augmented GANs for domain adaptation. Then, we further improve the GAN by introducing a differentiable, stochastic data augmentation module to explicitly reduce discriminator overfitting. We examine source-, target-, and dual-domain data augmentation for GAN enhancement, as well as joint task and data augmentation in a unified GAN-based UDA framework. We evaluate the framework for cell detection on multiple public and in-house microscopy image datasets, which are acquired with different imaging modalities, staining protocols and/or tissue preparations. The experiments demonstrate that our method significantly boosts performance when compared with the reference baseline, and it is superior to or on par with fully supervised models that are trained with real target annotations. In addition, our method outperforms recent state-of-the-art UDA approaches by a large margin on different datasets.
Affiliation(s)
- Fuyong Xing
- Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, CO 80045, USA
- Xinyi Yang
- Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, CO 80045, USA
- Toby C Cornish
- Department of Pathology, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, CO 80045, USA
- Debashis Ghosh
- Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, CO 80045, USA
15
Voon W, Hum YC, Tee YK, Yap WS, Nisar H, Mokayed H, Gupta N, Lai KW. Evaluating the effectiveness of stain normalization techniques in automated grading of invasive ductal carcinoma histopathological images. Sci Rep 2023; 13:20518. [PMID: 37993544 PMCID: PMC10665422 DOI: 10.1038/s41598-023-46619-6] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2023] [Accepted: 11/02/2023] [Indexed: 11/24/2023] Open
Abstract
Debates persist regarding the impact of Stain Normalization (SN) on recent breast cancer histopathological studies. While some studies propose no influence on classification outcomes, others argue for improvement. This study aims to assess the efficacy of SN in breast cancer histopathological classification, specifically focusing on Invasive Ductal Carcinoma (IDC) grading using Convolutional Neural Networks (CNNs). The null hypothesis asserts that SN has no effect on the accuracy of CNN-based IDC grading, while the alternative hypothesis suggests the contrary. We evaluated six SN techniques, with five templates selected as target images for the conventional SN techniques. We also utilized seven ImageNet pre-trained CNNs for IDC grading. The performance of models trained with and without SN was compared to discern the influence of SN on classification outcomes. The analysis unveiled a p-value of 0.11, indicating no statistically significant difference in Balanced Accuracy Scores between models trained with StainGAN-normalized images, achieving a score of 0.9196 (the best-performing SN technique), and models trained with non-normalized images, which scored 0.9308. As a result, we did not reject the null hypothesis, indicating that we found no evidence to support a significant discrepancy in effectiveness between stain-normalized and non-normalized datasets for IDC grading tasks. This study demonstrates that SN has a limited impact on IDC grading, challenging the assumption of performance enhancement through SN.
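The metric the study compares, the Balanced Accuracy Score, is the unweighted mean of per-class recall, which is why it is preferred over plain accuracy for imbalanced grading datasets. A minimal sketch (helper name hypothetical):

```python
from collections import defaultdict

def balanced_accuracy(y_true, y_pred):
    """Balanced accuracy = unweighted mean of per-class recall.
    Each class contributes equally regardless of its frequency."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        if t == p:
            correct[t] += 1
    recalls = [correct[c] / total[c] for c in total]
    return sum(recalls) / len(recalls)

# A classifier that always predicts the majority class looks good on plain
# accuracy (0.75 here) but only scores 0.5 on balanced accuracy:
score = balanced_accuracy([0, 0, 0, 1], [0, 0, 0, 0])
```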
Affiliation(s)
- Wingates Voon
- Department of Mechatronics and Biomedical Engineering, Lee Kong Chian Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Kampar, Malaysia
- Yan Chai Hum
- Department of Mechatronics and Biomedical Engineering, Lee Kong Chian Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Kampar, Malaysia
- Yee Kai Tee
- Department of Mechatronics and Biomedical Engineering, Lee Kong Chian Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Kampar, Malaysia
- Wun-She Yap
- Department of Electrical and Electronic Engineering, Lee Kong Chian Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Kampar, Malaysia
- Humaira Nisar
- Department of Electronic Engineering, Faculty of Engineering and Green Technology, Universiti Tunku Abdul Rahman, 31900 Kampar, Malaysia
- Hamam Mokayed
- Department of Computer Science, Electrical and Space Engineering, Lulea University of Technology, Lulea, Sweden
- Neha Gupta
- School of Electronics Engineering, Vellore Institute of Technology, Amaravati, AP, India
- Khin Wee Lai
- Department of Biomedical Engineering, Universiti Malaya, 50603 Kuala Lumpur, Malaysia
16
Labrada A, Barkana BD. A Comprehensive Review of Computer-Aided Models for Breast Cancer Diagnosis Using Histopathology Images. Bioengineering (Basel) 2023; 10:1289. [PMID: 38002413 PMCID: PMC10669627 DOI: 10.3390/bioengineering10111289] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2023] [Revised: 10/20/2023] [Accepted: 10/25/2023] [Indexed: 11/26/2023] Open
Abstract
Breast cancer is the second most common cancer in women, mainly affecting those who are middle-aged and older. The American Cancer Society reported that a woman's average lifetime risk of developing breast cancer is about 13%, and this incidence rate has increased by 0.5% per year in recent years. A biopsy is performed when screening tests and imaging results show suspicious breast changes. Advancements in computer-aided system capabilities and performance have fueled research using histopathology images in cancer diagnosis. Advances in machine learning and deep neural networks have tremendously increased the number of studies developing computerized detection and classification models. The dataset-dependent performance and trial-and-error design of deep networks have produced varying results in the literature. This work comprehensively reviews the studies published between 2010 and 2022 regarding commonly used public-domain datasets and the methodologies used in preprocessing, segmentation, feature engineering, machine-learning approaches, classifiers, and performance metrics.
Affiliation(s)
- Alberto Labrada
- Department of Electrical Engineering, The University of Bridgeport, Bridgeport, CT 06604, USA
- Buket D. Barkana
- Department of Biomedical Engineering, The University of Akron, Akron, OH 44325, USA
17
Romero-Arias JR, González-Castro CA, Ramírez-Santiago G. A multiscale model of the role of microenvironmental factors in cell segregation and heterogeneity in breast cancer development. PLoS Comput Biol 2023; 19:e1011673. [PMID: 37992135 DOI: 10.1371/journal.pcbi.1011673] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2023] [Revised: 12/06/2023] [Accepted: 11/08/2023] [Indexed: 11/24/2023] Open
Abstract
We analyzed a quantitative multiscale model that describes the epigenetic dynamics during the growth and evolution of an avascular tumor. A gene regulatory network (GRN) formed by a set of ten genes that are believed to play an important role in breast cancer development was kinetically coupled to the microenvironmental agents: glucose, estrogens, and oxygen. The dynamics of spontaneous mutations was described by a Yule-Furry master equation whose solution represents the probability that a given cell in the tissue undergoes a certain number of mutations at a given time. We assumed that the mutation rate is modified by a spatial gradient of nutrients. The tumor mass was simulated by means of cellular automata supplemented with a set of reaction-diffusion equations that described the transport of microenvironmental agents. By analyzing the epigenetic state space described by the GRN dynamics, we found three attractors that were identified with cellular epigenetic states: normal, precancer and cancer. For two-dimensional (2D) and three-dimensional (3D) tumors we calculated the spatial distribution of the following quantities: (i) number of mutations, (ii) mutation of each gene and, (iii) phenotypes. Using estrogen as the principal microenvironmental agent that regulates cell proliferation, we obtained tumor shapes for different values of estrogen consumption and supply rates. It was found that the majority of mutations occurred in cells that were located close to the 2D tumor perimeter or close to the 3D tumor surface. Also, it was found that the occurrence of different phenotypes in the tumor is controlled by estrogen concentration levels since they can change the individual cell threshold and gene expression levels. All results were consistently observed for 2D and 3D tumors.
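For reference, a Yule-Furry pure-birth process with rate λ started from a single cell has the standard geometric solution P(N(t) = n) = e^{-λt}(1 - e^{-λt})^{n-1}. The sketch below encodes that textbook solution for a constant rate; the paper's spatially modulated mutation rate is not modeled here, and the function name is hypothetical.

```python
import math

def yule_pmf(n: int, rate: float, t: float) -> float:
    """P(N(t) = n) for a Yule-Furry pure-birth process started from one
    cell: geometric with success probability p = exp(-rate * t), so the
    mean population size grows as exp(rate * t)."""
    p = math.exp(-rate * t)
    return p * (1.0 - p) ** (n - 1)
```

Summing n * yule_pmf(n, rate, t) recovers the exponential mean e^{λt}, a quick sanity check on the solution.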
Affiliation(s)
- J Roberto Romero-Arias
- Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas, Universidad Nacional Autónoma de México, Ciudad de México, Mexico
18
Pati P, Jaume G, Ayadi Z, Thandiackal K, Bozorgtabar B, Gabrani M, Goksel O. Weakly supervised joint whole-slide segmentation and classification in prostate cancer. Med Image Anal 2023; 89:102915. [PMID: 37633177 DOI: 10.1016/j.media.2023.102915] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2022] [Revised: 05/17/2023] [Accepted: 07/25/2023] [Indexed: 08/28/2023]
Abstract
The identification and segmentation of histological regions of interest can provide significant support to pathologists in their diagnostic tasks. However, segmentation methods are constrained by the difficulty in obtaining pixel-level annotations, which are tedious and expensive to collect for whole-slide images (WSI). Though several methods have been developed to exploit image-level weak supervision for WSI classification, the task of segmentation using WSI-level labels has received very little attention. Research in this direction typically requires additional supervision beyond image labels, which is difficult to obtain in real-world practice. In this study, we propose WholeSIGHT, a weakly-supervised method that can simultaneously segment and classify WSIs of arbitrary shapes and sizes. Formally, WholeSIGHT first constructs a tissue-graph representation of the WSI, where the nodes and edges depict tissue regions and their interactions, respectively. During training, a graph classification head classifies the WSI and produces node-level pseudo-labels via post-hoc feature attribution. These pseudo-labels are then used to train a node classification head for WSI segmentation. During testing, both heads simultaneously render segmentation and class prediction for an input WSI. We evaluate the performance of WholeSIGHT on three public prostate cancer WSI datasets. Our method achieves state-of-the-art weakly-supervised segmentation performance on all datasets while resulting in better or comparable classification with respect to state-of-the-art weakly-supervised WSI classification methods. Additionally, we assess the generalization capability of our method in terms of segmentation and classification performance, uncertainty estimation, and model calibration. Our code is available at: https://github.com/histocartography/wholesight.
Affiliation(s)
- Guillaume Jaume
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA; Data Science Program, Dana-Farber/Harvard Cancer Center, Boston, MA, USA
- Zeineb Ayadi
- IBM Research Europe, Zurich, Switzerland; EPFL, Lausanne, Switzerland
- Kevin Thandiackal
- IBM Research Europe, Zurich, Switzerland; Computer-Assisted Applications in Medicine, ETH Zurich, Zurich, Switzerland
- Orcun Goksel
- Computer-Assisted Applications in Medicine, ETH Zurich, Zurich, Switzerland; Department of Information Technology, Uppsala University, Sweden
19
Thomas L, Sheeja MK. Fourier ptychographic and deep learning using breast cancer histopathological image classification. JOURNAL OF BIOPHOTONICS 2023; 16:e202300194. [PMID: 37296518 DOI: 10.1002/jbio.202300194] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/27/2023] [Revised: 06/07/2023] [Accepted: 06/08/2023] [Indexed: 06/12/2023]
Abstract
Automated and accurate classification of breast cancer histological images is crucial for medical applications, because malignant tumors are detected via histopathological images. This work develops a Fourier ptychographic (FP) and deep learning approach to breast cancer histopathological image classification. The FP method begins with a random guess of a high-resolution complex hologram and then uses iterative retrieval with FP constraints to stitch together the low-resolution multi-view images obtained from the hologram's elemental images captured via integral imaging. Next, the feature extraction process computes entropy, geometrical features, and textural features, and entropy-based normalization is used to optimize the features. Finally, the proposed ENDNN classifies the breast cancer images as normal or abnormal. The experimental outcomes demonstrate that the presented technique outperforms traditional techniques.
Affiliation(s)
- Leena Thomas
- Department of Electronics & Communication Engineering, Sree Chitra Thirunal College of Engineering, Thiruvananthapuram, Kerala, India
- APJ Abdul Kalam Technological University, Kerala, India
- College of Engineering Kallooppara, Pathanamthitta, Kerala, India
- M K Sheeja
- Department of Electronics & Communication Engineering, Sree Chitra Thirunal College of Engineering, Thiruvananthapuram, Kerala, India
- APJ Abdul Kalam Technological University, Kerala, India
20
Flont M, Jastrzębska E. A Multi-Layer Breast Cancer Model to Study the Synergistic Effect of Photochemotherapy. MICROMACHINES 2023; 14:1806. [PMID: 37763969 PMCID: PMC10535669 DOI: 10.3390/mi14091806] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/24/2023] [Revised: 09/15/2023] [Accepted: 09/18/2023] [Indexed: 09/29/2023]
Abstract
Breast cancer is one of the most common cancers among women. The development of new and effective therapeutic approaches for treating breast cancer is an important challenge in modern oncology. Two-dimensional (2D) cell cultures are most often used in the study of compounds with potential anti-tumor activity. However, it is necessary to develop advanced three-dimensional (3D) cell models that can, to some extent, reflect physiological conditions. The use of miniature cancer-on-a-chip microfluidic systems can help to mimic the complex cancer microenvironment. In this report, we developed a 3D breast cancer model in the form of a cell multilayer, composed of stromal cells (HMF) and breast cancer parenchyma (MCF-7). The developed cell model was successfully used to analyze the effectiveness of combined sequential photochemotherapy, based on doxorubicin and meso-tetraphenylporphyrin. We proved that the key factors that allow achieving a synergistic effect of combination therapy are the order of drug administration to the cells and the sequence of therapeutic procedures. To the best of our knowledge, studies on the effectiveness of combination photochemotherapy depending on the sequence of the component drugs were performed for the first time under microfluidic conditions on a 3D multilayered model of breast cancer tissue.
Affiliation(s)
- Magdalena Flont
- Faculty of Chemistry, Warsaw University of Technology, Noakowskiego 3, 00-664 Warsaw, Poland
- Center for Advanced Materials and Technologies CEZAMAT, Warsaw University of Technology, Poleczki 19, 02-822 Warsaw, Poland
- Elżbieta Jastrzębska
- Faculty of Chemistry, Warsaw University of Technology, Noakowskiego 3, 00-664 Warsaw, Poland
- Center for Advanced Materials and Technologies CEZAMAT, Warsaw University of Technology, Poleczki 19, 02-822 Warsaw, Poland
21
Hou Y, Zhang W, Cheng R, Zhang G, Guo Y, Hao Y, Xue H, Wang Z, Wang L, Bai Y. Meta-adaptive-weighting-based bilateral multi-dimensional refined space feature attention network for imbalanced breast cancer histopathological image classification. Comput Biol Med 2023; 164:107300. [PMID: 37557055 DOI: 10.1016/j.compbiomed.2023.107300] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2023] [Revised: 07/06/2023] [Accepted: 07/28/2023] [Indexed: 08/11/2023]
Abstract
Automatic classification of breast cancer histopathological images can reduce pathologists' workload and provide accurate diagnosis. However, one challenge is that empirical datasets are usually imbalanced, resulting in poorer classification quality than conventional methods achieve on balanced datasets. The recently proposed bilateral branch network (BBN) tackles this problem by considering both representation and classifier learning to improve classification performance. We first apply a bilateral sampling strategy to imbalanced breast cancer histopathological image classification and propose a meta-adaptive-weighting-based bilateral multi-dimensional refined space feature attention network (MAW-BMRSFAN). The model is composed of BMRSFAN and MAWN. Specifically, the refined space feature attention module (RSFAM) is based on convolutional long short-term memories (ConvLSTMs). It is designed to extract refined spatial features of different dimensions for image classification and is inserted into different layers of the classification model. Meanwhile, the MAWN is proposed to model the mapping from a balanced meta-dataset to the imbalanced dataset. It flexibly finds suitable weighting parameters for BMRSFAN by adaptively learning directly from a small balanced dataset. The experiments show that MAW-BMRSFAN performs better than previous methods. The recognition accuracy of MAW-BMRSFAN under four different magnifications remains higher than 80% even when the imbalance factor is 16, indicating that MAW-BMRSFAN performs well under extremely imbalanced conditions.
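The bilateral sampling strategy mentioned above pairs a conventional uniform sampler with a re-balancing one; in BBN-style designs the re-balancing branch draws class j with probability proportional to the reciprocal of its sample count, so rare classes are seen as often in that branch as common classes are in the uniform branch. A hedged sketch of that reversed sampler (details illustrative, not taken from this paper):

```python
def reversed_class_probs(class_counts):
    """Reversed sampler of a bilateral-branch setup: class j is drawn with
    probability proportional to 1 / n_j, inverting the empirical class
    frequencies so the minority class dominates this branch."""
    inv = [1.0 / c for c in class_counts]
    s = sum(inv)
    return [w / s for w in inv]

# Imbalance factor 10 (100 vs 10 samples): the minority class is drawn
# ten times as often as the majority class in the re-balancing branch.
probs = reversed_class_probs([100, 10])
```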
Affiliation(s)
- Yuchao Hou
- Department of Mathematics and Computer Science, Shanxi Normal University, Taiyuan 030031, China; State Key Laboratory of Dynamic Testing Technology, North University of China, Taiyuan 030051, China
- Wendong Zhang
- State Key Laboratory of Dynamic Testing Technology, North University of China, Taiyuan 030051, China
- Rong Cheng
- School of Mathematics, North University of China, Taiyuan 030051, China
- Guojun Zhang
- State Key Laboratory of Dynamic Testing Technology, North University of China, Taiyuan 030051, China
- Yanjie Guo
- School of Mathematics and Statistics, Ningbo University, Ningbo 315211, China
- Yan Hao
- School of Mathematics and Statistics, Taiyuan Normal University, Taiyuan 030002, China
- Hongxin Xue
- Data Science and Technology, North University of China, Taiyuan 030051, China
- Zhihao Wang
- State Key Laboratory of Dynamic Testing Technology, North University of China, Taiyuan 030051, China
- Long Wang
- Healthcare Big Data Research Center, Shanxi Intelligence Institute of Big Data Technology and Innovation, Taiyuan 030000, China
- Yanping Bai
- School of Mathematics, North University of China, Taiyuan 030051, China
| |
22
Chappel JR, King ME, Fleming J, Eberlin LS, Reif DM, Baker ES. Aggregated Molecular Phenotype Scores: Enhancing Assessment and Visualization of Mass Spectrometry Imaging Data for Tissue-Based Diagnostics. Anal Chem 2023; 95:12913-12922. PMID: 37579019; PMCID: PMC10561690; DOI: 10.1021/acs.analchem.3c02389.
Abstract
Mass spectrometry imaging (MSI) has gained increasing popularity for tissue-based diagnostics due to its ability to identify and visualize molecular characteristics unique to different phenotypes within heterogeneous samples. Data from MSI experiments are often assessed and visualized using various supervised and unsupervised statistical approaches. However, these approaches tend to fall short in identifying and concisely visualizing subtle, phenotype-relevant molecular changes. To address these shortcomings, we developed aggregated molecular phenotype (AMP) scores. AMP scores are generated using an ensemble machine learning approach that first selects features differentiating phenotypes, weights the features using logistic regression, and combines the weights and feature abundances. AMP scores are then scaled between 0 and 1, with lower values generally corresponding to class 1 phenotypes (typically control) and higher scores relating to class 2 phenotypes. AMP scores therefore allow the evaluation of multiple features simultaneously and showcase the degree to which these features correlate with various phenotypes. Due to the ensemble approach, AMP scores overcome limitations associated with individual models, leading to high diagnostic accuracy and interpretability. Here, AMP score performance was evaluated using metabolomic data collected from desorption electrospray ionization MSI. Initial comparisons of cancerous human tissues to their normal or benign counterparts illustrated that AMP scores distinguished phenotypes with high accuracy, sensitivity, and specificity. Furthermore, when combined with spatial coordinates, AMP scores allow visualization of tissue sections in one map with distinguished phenotypic borders, highlighting their diagnostic utility.
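The score construction described above (logistic-regression weights combined with feature abundances, then scaled to [0, 1]) can be reduced to a few lines. This is an illustrative sketch assuming the selected features and fitted weights are already available, not the authors' exact pipeline:

```python
import math

def amp_scores(X, weights, bias=0.0):
    """Illustrative AMP-style scores: weighted combination of feature
    abundances per sample, squashed, then min-max scaled across the
    cohort so scores span exactly [0, 1]."""
    # Weighted combination per sample (the logistic-regression logit).
    logits = [sum(w * x for w, x in zip(weights, row)) + bias for row in X]
    # Squash to probabilities, then rescale across all samples.
    probs = [1.0 / (1.0 + math.exp(-z)) for z in logits]
    lo, hi = min(probs), max(probs)
    if hi == lo:
        return [0.5 for _ in probs]
    return [(p - lo) / (hi - lo) for p in probs]
```

Low scores then fall near the class 1 (control-like) samples and high scores near class 2; pairing each score with its pixel's spatial coordinates yields the single-map visualization the abstract describes.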
Affiliation(s)
- Jessie R Chappel
- Bioinformatics Research Center, Department of Biological Sciences, North Carolina State University, Raleigh, North Carolina 27606, United States
- Mary E King
- Department of Surgery, Baylor College of Medicine, Houston, Texas 77030, United States
- Jonathon Fleming
- Bioinformatics Research Center, Department of Biological Sciences, North Carolina State University, Raleigh, North Carolina 27606, United States
- Livia S Eberlin
- Department of Surgery, Baylor College of Medicine, Houston, Texas 77030, United States
- David M Reif
- Predictive Toxicology Branch, Division of Translational Toxicology, National Institute of Environmental Health Sciences, Durham, North Carolina 27709, United States
- Erin S Baker
- Department of Chemistry, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27514, United States
23
Yin P, Zhou Z, Liu J, Jiang N, Zhang J, Liu S, Wang F, Wang L. A generalized AI method for pathology cancer diagnosis and prognosis prediction based on transfer learning and hierarchical split. Phys Med Biol 2023; 68:175039. PMID: 37536319; DOI: 10.1088/1361-6560/aced34.
Abstract
Objective. This study aims to propose a generalized AI method for pathology cancer diagnosis and prognosis prediction based on transfer learning and hierarchical split. Approach. We present a neural network framework for cancer diagnosis and prognosis prediction in pathological images. To enhance the network's depth and width, we employ a hierarchical split block (HS-Block) to create an AI-aided diagnosis system suitable for semi-supervised clinical settings with limited labeled samples and cross-domain tasks. By incorporating a lightweight convolution unit based on the HS-Block, we improve the feature-extraction capabilities of a regular network (RegNet). Additionally, we integrate a Convolutional Block Attention Module into the first and last convolutions to optimize the extraction of global features and local details. To address limited sample labels, we employ a dual-transfer learning (DTL) mechanism, named DTL-HS-Regnet, enabling semi-supervised learning in clinical settings. Main results. Our proposed DTL-HS-Regnet model outperforms other advanced deep-learning models in three different cancer diagnosis tasks, demonstrating superior feature-extraction ability with an average sensitivity, specificity, accuracy, and F1 score of 0.9987, 1.0000, 1.0000, and 0.9992, respectively. Furthermore, we evaluated the model's capability to extract prognosis-prediction information directly from pathological images by constructing patient cohorts; the correlation between DTL-HS-Regnet predictions and the presence of cancer-associated fibroblasts is comparable to that of pathologists. Significance. Our proposed AI method offers a generalized approach for cancer diagnosis and prognosis prediction in pathology. The outstanding performance of the DTL-HS-Regnet model demonstrates its potential for improving current practice in digital pathology, expanding the boundaries of cancer treatment in two critical areas.
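The Convolutional Block Attention Module mentioned above combines channel and spatial attention. A dependency-free sketch of its channel-attention half is shown below: average- and max-pooled channel descriptors pass through a shared MLP, are summed, and gate each channel through a sigmoid. The list-based feature-map representation and the weight layout are assumptions for illustration only:

```python
import math

def channel_attention(fmap, mlp_w1, mlp_w2):
    """fmap: list of C channels, each a flat list of spatial activations.
    mlp_w1 (hidden x C) and mlp_w2 (C x hidden) form the shared MLP.
    Returns the channel-reweighted map, CBAM-style."""
    avg = [sum(ch) / len(ch) for ch in fmap]   # average-pooled descriptor
    mx = [max(ch) for ch in fmap]              # max-pooled descriptor

    def mlp(vec):
        # One hidden layer with ReLU, shared between both descriptors.
        hidden = [max(0.0, sum(w * v for w, v in zip(row, vec))) for row in mlp_w1]
        return [sum(w * h for w, h in zip(row, hidden)) for row in mlp_w2]

    # Sum the two MLP outputs, sigmoid to get a per-channel gate in (0, 1).
    gate = [1.0 / (1.0 + math.exp(-(a + m))) for a, m in zip(mlp(avg), mlp(mx))]
    return [[g * v for v in ch] for g, ch in zip(gate, fmap)]
```

In the paper's architecture this gating sits inside the first and last convolutions; in practice it would be expressed in a deep-learning framework with learned MLP weights.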
Affiliation(s)
- Pengzhi Yin
- School of Automation, Central South University, 410083, People's Republic of China
- Zehao Zhou
- School of Software, Xinjiang University, 830001, People's Republic of China
- Jingze Liu
- School of Software, Xinjiang University, 830001, People's Republic of China
- Nan Jiang
- XiangYa School of Medicine, Central South University, 410083, People's Republic of China
- Junchao Zhang
- School of Automation, Central South University, 410083, People's Republic of China
- Shiyu Liu
- XiangYa School of Medicine, Central South University, 410083, People's Republic of China
- Feiyang Wang
- XiangYa School of Medicine, Central South University, 410083, People's Republic of China
- Li Wang
- College of Computer Science and Technology, Tsinghua University, 100084, People's Republic of China
24
Wajeed MA, Tiwari S, Gupta R, Ahmad AJ, Agarwal S, Jamal SS, Hinga SK. A Breast Cancer Image Classification Algorithm with 2c Multiclass Support Vector Machine. Journal of Healthcare Engineering 2023; 2023:3875525. PMID: 37457494; PMCID: PMC10349674; DOI: 10.1155/2023/3875525.
Abstract
Breast cancer is the most frequent type of cancer in women; however, early identification has reduced the mortality rate associated with the condition. Studies have demonstrated that the earlier the disease is detected by mammography, the lower the death rate. Mammography is a critical technique for early identification of breast cancer because it can detect abnormalities in the breast months or years before a patient becomes aware of them. In medical imaging, mammography uses X-rays to produce high-resolution digital pictures of the breasts. Once the digital images are captured and transmitted to high-tech digital mammography equipment, radiologists evaluate them to establish the specific position and extent of disease in the breast. Compared with the many classifiers typically used in the literature, the proposed Multiclass Support Vector Machine (MSVM) approach produces promising results. This method may pave the way for developing more advanced statistical characteristics for cancer prognostic models in the near future. This paper demonstrates that the proposed 2C algorithm with MSVM outperforms a decision tree model in terms of accuracy, consistent with prior findings. According to our findings, new screening mammography technologies can increase the accuracy and accessibility of screening mammography around the world.
Affiliation(s)
- Mohammed Abdul Wajeed
- Department of Computer Science and Engineering, Swami Vivekananda Institute of Technology, Secunderabad, Telangana, India
- Shivam Tiwari
- Department of Computer Science and Engineering, G L Bajaj Institute of Technology and Management, Greater Noida, Uttar Pradesh, India
- Rajat Gupta
- Engineering and Technology, Career Point University, Kota, Rajasthan, India
- Aamir Junaid Ahmad
- Department of Computer Science and Engineering, Maulana Azad College of Engineering and Technology, Patna, India
- Seema Agarwal
- SRM Institute of Science and Technology, Delhi-NCR Campus, Ghaziabad, India
- Sajjad Shaukat Jamal
- Department of Mathematics, College of Science, King Khalid University, Abha, Saudi Arabia
- Simon Karanja Hinga
- Department of Electrical and Electronic Engineering, Technical University of Mombasa, Mombasa, Kenya
25
Chappel JR, King ME, Fleming J, Eberlin LS, Reif DM, Baker ES. Utilizing Aggregated Molecular Phenotype (AMP) Scores to Visualize Simultaneous Molecular Changes in Mass Spectrometry Imaging Data. bioRxiv 2023:2023.06.01.543306. PMID: 37333214; PMCID: PMC10274704; DOI: 10.1101/2023.06.01.543306.
Abstract
Mass spectrometry imaging (MSI) has gained increasing popularity for tissue-based diagnostics due to its ability to identify and visualize molecular characteristics unique to different phenotypes within heterogeneous samples. Data from MSI experiments are often visualized using single ion images and further analyzed using machine learning and multivariate statistics to identify m/z features of interest and create predictive models for phenotypic classification. However, often only a single molecule or m/z feature is visualized per ion image, and mainly categorical classifications are provided from the predictive models. As an alternative approach, we developed an aggregated molecular phenotype (AMP) scoring system. AMP scores are generated using an ensemble machine learning approach to first select features differentiating phenotypes, weight the features using logistic regression, and combine the weights and feature abundances. AMP scores are then scaled between 0 and 1, with lower values generally corresponding to class 1 phenotypes (typically control) and higher scores relating to class 2 phenotypes. AMP scores therefore allow the evaluation of multiple features simultaneously and showcase the degree to which these features correlate with various phenotypes, leading to high diagnostic accuracy and interpretability of predictive models. Here, AMP score performance was evaluated using metabolomic data collected from desorption electrospray ionization (DESI) MSI. Initial comparisons of cancerous human tissues to normal or benign counterparts illustrated that AMP scores distinguished phenotypes with high accuracy, sensitivity, and specificity. Furthermore, when combined with spatial coordinates, AMP scores allow visualization of tissue sections in one map with distinguished phenotypic borders, highlighting their diagnostic utility.
Affiliation(s)
- Jessie R. Chappel
- Bioinformatics Research Center, Department of Biological Sciences, North Carolina State University, Raleigh, NC, USA
- Mary E. King
- Department of Surgery, Baylor College of Medicine, Houston, TX, USA
- Jonathon Fleming
- Bioinformatics Research Center, Department of Biological Sciences, North Carolina State University, Raleigh, NC, USA
- Livia S. Eberlin
- Department of Surgery, Baylor College of Medicine, Houston, TX, USA
- David M. Reif
- Predictive Toxicology Branch, Division of Translational Toxicology, National Institute of Environmental Health Sciences, Durham, NC, USA
- Erin S. Baker
- Department of Chemistry, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
26
Ram S, Tang W, Bell AJ, Pal R, Spencer C, Buschhaus A, Hatt CR, diMagliano MP, Rehemtulla A, Rodríguez JJ, Galban S, Galban CJ. Lung cancer lesion detection in histopathology images using graph-based sparse PCA network. Neoplasia 2023; 42:100911. PMID: 37269818; DOI: 10.1016/j.neo.2023.100911.
Abstract
Early detection of lung cancer is critical for improvement of patient survival. To address the clinical need for efficacious treatments, genetically engineered mouse models (GEMM) have become integral in identifying and evaluating the molecular underpinnings of this complex disease that may be exploited as therapeutic targets. Assessment of GEMM tumor burden on histopathological sections performed by manual inspection is both time consuming and prone to subjective bias. Therefore, an interplay of needs and challenges exists for computer-aided diagnostic tools, for accurate and efficient analysis of these histopathology images. In this paper, we propose a simple machine learning approach called the graph-based sparse principal component analysis (GS-PCA) network, for automated detection of cancerous lesions on histological lung slides stained by hematoxylin and eosin (H&E). Our method comprises four steps: 1) cascaded graph-based sparse PCA, 2) PCA binary hashing, 3) block-wise histograms, and 4) support vector machine (SVM) classification. In our proposed architecture, graph-based sparse PCA is employed to learn the filter banks of the multiple stages of a convolutional network. This is followed by PCA hashing and block histograms for indexing and pooling. The meaningful features extracted from this GS-PCA are then fed to an SVM classifier. We evaluate the performance of the proposed algorithm on H&E slides obtained from an inducible K-rasG12D lung cancer mouse model using precision/recall rates, Fβ-score, Tanimoto coefficient, and area under the curve (AUC) of the receiver operator characteristic (ROC) and show that our algorithm is efficient and provides improved detection accuracy compared to existing algorithms.
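Steps 1)–3) of the pipeline above can be illustrated compactly: patch responses to pre-learned PCA filters are binarized by sign and packed into hash codes, which are then pooled into block-wise histograms that feed the SVM. A hedged sketch, where the helper names are hypothetical and real filters would come from the cascaded graph-based sparse PCA stage:

```python
def pca_binary_hash(patches, filters):
    """Project each (flattened) patch onto the pre-learned PCA filters and
    binarize the responses by sign, packing bits into one integer code."""
    codes = []
    for patch in patches:
        code = 0
        for f in filters:
            response = sum(w * x for w, x in zip(f, patch))
            code = (code << 1) | (1 if response > 0 else 0)
        codes.append(code)
    return codes

def block_histogram(codes, n_filters):
    """Histogram of hash codes within one block: the pooled feature vector
    that an SVM classifier would consume."""
    hist = [0] * (2 ** n_filters)
    for c in codes:
        hist[c] += 1
    return hist
```

With L filters per stage each patch yields an L-bit code, so each block contributes a 2^L-dimensional histogram, mirroring the indexing-and-pooling role hashing plays in the GS-PCA network.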
Affiliation(s)
- Sundaresh Ram
- Departments of Radiology, and Biomedical Engineering, University of Michigan, Ann Arbor, MI 48109, USA
- Wenfei Tang
- Department of Computer Science and Engineering, University of Michigan, Ann Arbor, MI 48109, USA
- Alexander J Bell
- Departments of Radiology, and Biomedical Engineering, University of Michigan, Ann Arbor, MI 48109, USA
- Ravi Pal
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
- Cara Spencer
- Department of Computational Medicine and Bioinformatics, University of Michigan, Ann Arbor, MI 48109, USA
- Charles R Hatt
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA; Imbio LLC, Minneapolis, MN 55405, USA
- Marina Pasca diMagliano
- Departments of Surgery, and Cell and Developmental Biology, University of Michigan, Ann Arbor, MI 48109, USA
- Alnawaz Rehemtulla
- Departments of Radiology, and Radiation Oncology, University of Michigan, Ann Arbor, MI 48109, USA
- Jeffrey J Rodríguez
- Departments of Electrical and Computer Engineering, and Biomedical Engineering, The University of Arizona, Tucson, AZ 85721, USA
- Stefanie Galban
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
- Craig J Galban
- Departments of Radiology, and Biomedical Engineering, University of Michigan, Ann Arbor, MI 48109, USA
27
Burrai GP, Gabrieli A, Polinas M, Murgia C, Becchere MP, Demontis P, Antuofermo E. Canine Mammary Tumor Histopathological Image Classification via Computer-Aided Pathology: An Available Dataset for Imaging Analysis. Animals (Basel) 2023; 13:ani13091563. PMID: 37174600; PMCID: PMC10177203; DOI: 10.3390/ani13091563.
Abstract
Histopathology, the gold-standard technique for classifying canine mammary tumors (CMTs), is a time-consuming process affected by high inter-observer variability. Digital pathology (DP) and computer-aided pathology (CAD) are emergent fields that will improve overall classification accuracy. In this study, the ability of CAD systems to distinguish benign from malignant CMTs was explored on a dataset, namely CMTD, of 1056 hematoxylin and eosin JPEG images from 20 benign and 24 malignant CMTs, using three different CAD systems based on the combination of a convolutional neural network (VGG16, Inception v3, or EfficientNet), which acts as a feature extractor, and a classifier (support vector machine (SVM) or stochastic gradient boosting (SGB)) placed on top of the neural net. Benchmarked first on a human breast cancer dataset (i.e., BreakHis; accuracy from 0.86 to 0.91), our models were applied to the CMT dataset, showing accuracy from 0.63 to 0.85 across all architectures. The EfficientNet framework coupled with SVM yielded the best performance, with accuracy from 0.82 to 0.85. The encouraging results obtained by the use of DP and CAD systems in CMTs provide an interesting perspective on the integration of artificial intelligence and machine learning technologies in cancer-related research.
Affiliation(s)
- Giovanni P Burrai
- Department of Veterinary Medicine, University of Sassari, Via Vienna 2, 07100 Sassari, Italy
- Mediterranean Center for Disease Control (MCDC), University of Sassari, Via Vienna 2, 07100 Sassari, Italy
- Andrea Gabrieli
- Department of Veterinary Medicine, University of Sassari, Via Vienna 2, 07100 Sassari, Italy
- Marta Polinas
- Department of Veterinary Medicine, University of Sassari, Via Vienna 2, 07100 Sassari, Italy
- Claudio Murgia
- Department of Veterinary Medicine, University of Sassari, Via Vienna 2, 07100 Sassari, Italy
- Pierfranco Demontis
- Department of Chemical, Physical, Mathematical and Natural Sciences, University of Sassari, Via Vienna 2, 07100 Sassari, Italy
- Elisabetta Antuofermo
- Department of Veterinary Medicine, University of Sassari, Via Vienna 2, 07100 Sassari, Italy
- Mediterranean Center for Disease Control (MCDC), University of Sassari, Via Vienna 2, 07100 Sassari, Italy
28
Bhausaheb DP, Kashyap KL. Shuffled Shepherd Deer Hunting Optimization based Deep Neural Network for Breast Cancer Classification using Breast Histopathology Images. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2023.104570.
29
Ding K, Zhou M, Wang H, Gevaert O, Metaxas D, Zhang S. A Large-scale Synthetic Pathological Dataset for Deep Learning-enabled Segmentation of Breast Cancer. Sci Data 2023; 10:231. PMID: 37085533; PMCID: PMC10121551; DOI: 10.1038/s41597-023-02125-y.
Abstract
The success of training computer-vision models relies heavily on large-scale, real-world images with annotations. Yet such annotation-ready datasets are difficult to curate in pathology due to privacy protection and the excessive annotation burden. To aid computational pathology, synthetic data generation, curation, and annotation present a cost-effective means to quickly provide the data diversity required to boost model performance at different stages. In this study, we introduce a large-scale synthetic pathological image dataset paired with annotations for nuclei semantic segmentation, termed Synthetic Nuclei and annOtation Wizard (SNOW). SNOW is developed via a standardized workflow that applies an off-the-shelf image generator and nuclei annotator. The dataset contains 20k image tiles and 1,448,522 annotated nuclei in total, released under the CC-BY license. We show that SNOW can be used in both supervised and semi-supervised training scenarios. Extensive results suggest that synthetic-data-trained models are competitive under a variety of model training settings, expanding the scope of using synthetic images to enhance downstream data-driven clinical tasks.
Affiliation(s)
- Kexin Ding
- Department of Computer Science, University of North Carolina at Charlotte, Charlotte, NC, 28262, USA
- Mu Zhou
- Sensebrain Research, San Jose, CA, 95131, USA
- He Wang
- Department of Pathology, Yale University, New Haven, CT, 06520, USA
- Olivier Gevaert
- Stanford Center for Biomedical Informatics Research, Department of Medicine and Biomedical Data Science, Stanford University, Stanford, CA, 94305, USA
- Dimitris Metaxas
- Department of Computer Science, Rutgers University, New Brunswick, NJ, 08901, USA
- Shaoting Zhang
- Shanghai Artificial Intelligence Laboratory, Shanghai, 200232, China
30
Marrón-Esquivel JM, Duran-Lopez L, Linares-Barranco A, Dominguez-Morales JP. A comparative study of the inter-observer variability on Gleason grading against Deep Learning-based approaches for prostate cancer. Comput Biol Med 2023; 159:106856. PMID: 37075600; DOI: 10.1016/j.compbiomed.2023.106856.
Abstract
BACKGROUND Among all the cancers known today, prostate cancer is one of the most commonly diagnosed in men. With modern advances in medicine, its mortality has been considerably reduced, but it remains a leading type of cancer in terms of deaths. The diagnosis of prostate cancer is mainly conducted by biopsy. From this test, whole-slide images are obtained, from which pathologists diagnose the cancer according to the Gleason scale. Within this scale from 1 to 5, grade 3 and above is considered malignant tissue. Several studies have shown an inter-observer discrepancy between pathologists in assigning Gleason values. Given recent advances in artificial intelligence, its application to computational pathology, with the aim of supporting and providing a second opinion to the professional, is of great interest. METHOD In this work, the inter-observer variability of a local dataset of 80 whole-slide images annotated by a team of 5 pathologists from the same group was analyzed at both area and label level. Four approaches were followed to train six different Convolutional Neural Network architectures, which were evaluated on the same dataset on which the inter-observer variability was analyzed. RESULTS An inter-observer variability of κ = 0.6946 was obtained, with a 46% discrepancy in the area of the annotations performed by the pathologists. The best trained models achieved κ = 0.826 ± 0.014 on the test set when trained with data from the same source. CONCLUSIONS The results show that deep learning-based automatic diagnosis systems could help reduce the well-known inter-observer variability among pathologists and support them in their decisions, serving as a second opinion or as a triage tool for medical centers.
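Label-level inter-observer variability of the kind reported above is typically measured with Cohen's kappa, which corrects raw agreement for chance agreement. A small sketch of the pairwise statistic (the paper's exact multi-rater protocol may differ):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa between two annotators' label sequences:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    labels = set(ca) | set(cb)
    # Chance agreement from each rater's marginal label frequencies.
    expected = sum((ca[l] / n) * (cb[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)
```

Perfect agreement gives κ = 1 and chance-level agreement gives κ = 0, so the reported κ = 0.6946 between pathologists sits in the "substantial agreement" band, below the ≈0.826 achieved against the models' training source.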
31
Zhu Z, Wang SH, Zhang YD. A Survey of Convolutional Neural Network in Breast Cancer. Computer Modeling in Engineering & Sciences 2023; 136:2127-2172. PMID: 37152661; PMCID: PMC7614504; DOI: 10.32604/cmes.2023.025484.
Abstract
Problems. For people all over the world, cancer is one of the most feared diseases. Cancer is one of the major obstacles to improving life expectancy in countries around the world and one of the biggest causes of death before the age of 70 in 112 countries. Among all kinds of cancers, breast cancer is the most common cancer in women; the data show that female breast cancer has become one of the most common cancers. Aims. A large number of clinical trials have proved that if breast cancer is diagnosed at an early stage, patients have more treatment options and better treatment outcomes and survival. Given this situation, there are many diagnostic methods for breast cancer, such as computer-aided diagnosis (CAD). Methods. We present a comprehensive review of the diagnosis of breast cancer based on the convolutional neural network (CNN), after surveying a large number of recent papers. First, we introduce several different imaging modalities. The structure of the CNN is given in the second part. After that, we introduce some public breast cancer datasets. Then, we divide the diagnosis of breast cancer into three different tasks: 1. classification; 2. detection; 3. segmentation. Conclusion. Although CNN-based diagnosis has achieved great success, there are still some limitations. (i) There are too few good datasets; a good public breast cancer dataset must address many aspects, such as professional medical knowledge, privacy issues, financial issues, and dataset size. (ii) When the dataset is very large, a CNN-based model needs a great deal of computation and time to complete the diagnosis. (iii) Overfitting occurs easily when using small datasets.
Affiliation(s)
- Yu-Dong Zhang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester, LE1 7RH, UK
32
MMSRNet: Pathological image super-resolution by multi-task and multi-scale learning. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2022.104428.
33
Guleria HV, Luqmani AM, Kothari HD, Phukan P, Patil S, Pareek P, Kotecha K, Abraham A, Gabralla LA. Enhancing the Breast Histopathology Image Analysis for Cancer Detection Using Variational Autoencoder. International Journal of Environmental Research and Public Health 2023; 20:ijerph20054244. PMID: 36901255; PMCID: PMC10002012; DOI: 10.3390/ijerph20054244.
Abstract
A breast tissue biopsy is performed to identify the nature of a tumour, which can be either cancerous or benign. Early implementations used machine learning algorithms: Random Forest and Support Vector Machine (SVM) classifiers labelled input histopathological images as cancerous or non-cancerous. These implementations continued to provide promising results, and Artificial Neural Networks (ANNs) were then applied for this purpose. We propose an approach that reconstructs the images using a Variational Autoencoder (VAE) and a Denoising Variational Autoencoder (DVAE) and then applies a Convolutional Neural Network (CNN) model to predict whether the input image is cancerous or non-cancerous. Our implementation achieves 73% prediction accuracy, which is greater than the results produced by our custom-built CNN on the same dataset. The proposed architecture opens a new area to be explored in computer vision using CNNs and generative modelling, since it incorporates reconstructions of the original input images and makes predictions on them thereafter.
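The VAE at the heart of this approach optimizes a reconstruction term plus a closed-form KL regularizer, with sampling made differentiable via the reparameterization trick. A minimal framework-free sketch of those two ingredients for a diagonal Gaussian posterior, for illustration only:

```python
import math
import random

def kl_diag_gaussian(mu, log_var):
    """KL( N(mu, exp(log_var)) || N(0, I) ): the VAE's latent regulariser,
    in closed form for a diagonal Gaussian posterior."""
    return 0.5 * sum(math.exp(lv) + m * m - 1.0 - lv
                     for m, lv in zip(mu, log_var))

def reparameterize(mu, log_var, rng=random):
    """z = mu + sigma * eps: expressing the sample this way lets gradients
    flow through mu and log_var during training."""
    return [m + math.exp(0.5 * lv) * rng.gauss(0.0, 1.0)
            for m, lv in zip(mu, log_var)]
```

The denoising variant (DVAE) feeds a corrupted image to the encoder while the reconstruction term still targets the clean image; the KL and sampling steps are unchanged.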
Affiliation(s)
- Harsh Vardhan Guleria
- Symbiosis Institute of Technology, Symbiosis International University, Pune 412115, India
- Ali Mazhar Luqmani
- Symbiosis Institute of Technology, Symbiosis International University, Pune 412115, India
- Harsh Devendra Kothari
- Symbiosis Institute of Technology, Symbiosis International University, Pune 412115, India
- Priyanshu Phukan
- Symbiosis Institute of Technology, Symbiosis International University, Pune 412115, India
- Shruti Patil
- Symbiosis Institute of Technology, Symbiosis International University, Pune 412115, India
- Preksha Pareek
- Symbiosis Institute of Technology, Symbiosis International University, Pune 412115, India
- Ketan Kotecha
- Symbiosis Institute of Technology, Symbiosis International University, Pune 412115, India
- Ajith Abraham
- Faculty of Computing and Data Sciences, FLAME University, Lavale, Pune 412115, India
- Lubna Abdelkareim Gabralla
- Department of Computer Science and Information Technology, College of Applied, Nourah Bint Abdulrahman University, Riyadh 11671, Saudi Arabia
34
Strickler EAT, Thomas J, Thomas JP, Benjamin B, Shamsuddin R. Exploring a global interpretation mechanism for deep learning networks when predicting sepsis. Sci Rep 2023; 13:3067. PMID: 36810645; PMCID: PMC9945464; DOI: 10.1038/s41598-023-30091-3.
Abstract
The purpose of this study is to identify additional clinical features for sepsis detection through a novel mechanism for globally interpreting trained black-box machine learning models, and to provide a suitable evaluation of that mechanism. We use the publicly available dataset from the 2019 PhysioNet Challenge, which covers around 40,000 intensive care unit (ICU) patients with 40 physiological variables. Using Long Short-Term Memory (LSTM) as the representative black-box machine learning model, we adapted the Multi-set Classifier to globally interpret what the black-box model learned about sepsis. To identify relevant features, the result is compared against: (i) features used by a computational sepsis expert, (ii) clinical features from clinical collaborators, (iii) academic features from the literature, and (iv) significant features from statistical hypothesis testing. Random Forest was chosen as the computational sepsis expert because it achieved high accuracy on both detection and early detection and overlapped strongly with clinical and literature features. Using the proposed interpretation mechanism and the dataset, we identified 17 features that the LSTM used for sepsis classification, 11 of which overlap with the top 20 features from the Random Forest model, 10 with academic features, and 5 with clinical features. Clinical opinion suggests that 3 LSTM features strongly correlate with clinical features the mechanism did not identify. We also found that age, chloride ion concentration, pH, and oxygen saturation should be investigated further for a connection with developing sepsis. Interpretation mechanisms can bolster the incorporation of state-of-the-art machine learning models into clinical decision support systems and might help clinicians address early sepsis detection. The promising results from this study warrant further investigation into creating new and improving existing interpretation mechanisms for black-box models, and into clinical features not currently used in the clinical assessment of sepsis.
Affiliation(s)
- Ethan A T Strickler
- Physics and Mathematics, East Central University, PO Box 385, Ada, OK, 74820, USA
- Joshua Thomas
- Department of Internal Medicine, Rush University Medical Center, 1700 W Van Buren St, 5th Floor, Chicago, IL, 60612, USA
- Johnson P Thomas
- Oklahoma State University, 201 Math and Science Building, Stillwater, OK, 74078, USA
- Bruce Benjamin
- School of Biomedical Sciences, Center for Health Sciences, 1111 W. 17th St., Tulsa, OK, 74107, USA
- Rittika Shamsuddin
- Oklahoma State University, 212 Math and Science Building, Stillwater, OK, 74078, USA
35
Zhang H, He Y, Wu X, Huang P, Qin W, Wang F, Ye J, Huang X, Liao Y, Chen H, Guo L, Shi X, Luo L. PathNarratives: Data annotation for pathological human-AI collaborative diagnosis. Front Med (Lausanne) 2023; 9:1070072. [PMID: 36777158 PMCID: PMC9908590 DOI: 10.3389/fmed.2022.1070072] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2022] [Accepted: 12/22/2022] [Indexed: 01/27/2023] Open
Abstract
Pathology is the gold standard of clinical diagnosis. Artificial intelligence (AI) in pathology is becoming a new trend, but it is still not widely used because it lacks the explanations pathologists need to understand its rationale. Clinic-compliant explanations, alongside the diagnostic decisions on pathological images, are essential for training AI models that provide diagnostic suggestions to assist pathologists in practice. In this study, we propose a new annotation form, PathNarratives, that includes a hierarchical decision-to-reason data structure, a narrative annotation process, and a multimodal interactive annotation tool. Following PathNarratives, we recruited 8 pathologist annotators to build a colorectal pathological dataset, CR-PathNarratives, containing 174 whole-slide images (WSIs). We further experimented on the dataset with classification and captioning tasks to explore clinical scenarios of human-AI collaborative pathological diagnosis. The classification experiments show that fine-grained prediction enhances the overall classification accuracy from 79.56% to 85.26%. In the human-AI collaboration experiments, the trust and confidence scores from the 8 pathologists rose from 3.88 to 4.63 when more details were provided. The results show that the classification and captioning tasks achieve better results with reason labels, providing explainable clues that help doctors understand and make the final decision, and can thus support a better experience of human-AI collaboration in pathological diagnosis. In the future, we plan to optimize the annotation tools and expand the dataset with more WSIs covering more pathological domains.
Affiliation(s)
- Heyu Zhang
- College of Engineering, Peking University, Beijing, China
- Yan He
- Department of Pathology, Longgang Central Hospital of Shenzhen, Shenzhen, China
- Xiaomin Wu
- College of Engineering, Peking University, Beijing, China
- Peixiang Huang
- College of Engineering, Peking University, Beijing, China
- Wenkang Qin
- College of Engineering, Peking University, Beijing, China
- Fan Wang
- College of Engineering, Peking University, Beijing, China
- Juxiang Ye
- Department of Pathology, School of Basic Medical Science, Peking University Health Science Center, Peking University Third Hospital, Beijing, China
- Xirui Huang
- Department of Pathology, Longgang Central Hospital of Shenzhen, Shenzhen, China
- Yanfang Liao
- Department of Pathology, Longgang Central Hospital of Shenzhen, Shenzhen, China
- Hang Chen
- College of Engineering, Peking University, Beijing, China
- Limei Guo
- Department of Pathology, School of Basic Medical Science, Peking University Health Science Center, Peking University Third Hospital, Beijing, China
- Xueying Shi
- Department of Pathology, School of Basic Medical Science, Peking University Health Science Center, Peking University Third Hospital, Beijing, China
- Lin Luo
- College of Engineering, Peking University, Beijing, China
36
Ogundokun RO, Misra S, Akinrotimi AO, Ogul H. MobileNet-SVM: A Lightweight Deep Transfer Learning Model to Diagnose BCH Scans for IoMT-Based Imaging Sensors. SENSORS (BASEL, SWITZERLAND) 2023; 23:656. [PMID: 36679455 PMCID: PMC9863875 DOI: 10.3390/s23020656] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/15/2022] [Revised: 12/02/2022] [Accepted: 12/16/2022] [Indexed: 06/17/2023]
Abstract
Many individuals worldwide pass away as a result of inadequate procedures for prompt illness identification and subsequent treatment. A valuable life can be saved, or at least extended, with the early identification of serious illnesses such as various cancers and other life-threatening conditions. The development of the Internet of Medical Things (IoMT) has made it possible for healthcare technology to offer the public efficient medical services and make a significant contribution to patients' recoveries. By using IoMT to diagnose and examine BreakHis v1 400× breast cancer histology (BCH) scans, disorders can be quickly identified and appropriate treatment given to a patient. This can be achieved with imaging equipment capable of automatically analyzing acquired images. However, most deep learning (DL)-based image classification approaches have a large number of parameters and are unsuitable for deployment in IoMT-centered imaging sensors. The goal of this study is to create a lightweight deep transfer learning (DTL) model suited to BCH scan examination that still achieves a good level of accuracy. We present a lightweight DTL-based model, "MobileNet-SVM", a hybrid of MobileNet and a Support Vector Machine (SVM), for auto-classifying BreakHis v1 400× BCH images. When evaluated on a real dataset of BreakHis v1 400× BCH images, the suggested technique achieved 100% accuracy on the training set, and an accuracy of 91% with an F1-score of 91.35 on the test set. Considering how complicated BCH scans are, the findings are encouraging. In addition to its high precision, the MobileNet-SVM model is well suited to IoMT imaging equipment. According to the simulation findings, the suggested model requires little computation time.
Affiliation(s)
- Roseline Oluwaseun Ogundokun
- Department of Multimedia Engineering, Kaunas University of Technology, 44249 Kaunas, Lithuania
- Department of Computer Science, Landmark University, Omu Aran 251103, Kwara, Nigeria
- Sanjay Misra
- Department of Computer Science and Communication, Østfold University College, 1757 Halden, Norway
- Hasan Ogul
- Department of Computer Science and Communication, Østfold University College, 1757 Halden, Norway
37
Zhang H, Liu Z, Song M, Lu C. Hagnifinder: Recovering magnification information of digital histological images using deep learning. J Pathol Inform 2023; 14:100302. [PMID: 36923447 PMCID: PMC10009300 DOI: 10.1016/j.jpi.2023.100302] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2022] [Revised: 02/02/2023] [Accepted: 02/11/2023] [Indexed: 02/18/2023] Open
Abstract
Background and objective: Training a robust cancer diagnostic or prognostic artificial intelligence model using histology images requires a large number of representative cases with labels or annotations, which are difficult to obtain. The histology snapshots available in published papers or case reports can be used to enrich the training dataset. However, the magnifications of these invaluable snapshots are generally unknown, which limits their usage. A robust magnification predictor is therefore required to utilize these diverse snapshot repositories spanning different diseases. This paper presents a magnification prediction model named Hagnifinder for H&E-stained histological images. Methods: Hagnifinder is a regression model based on a modified convolutional neural network (CNN) that contains 3 modules: a Feature Extraction Module, a Regression Module, and an Adaptive Scaling Module (ASM). In the training phase, the Feature Extraction Module first extracts the image features. Second, the ASM is proposed to address the problem of unevenly distributed learned feature values. Finally, the Regression Module estimates the mapping between the regularized extracted features and the magnifications. We constructed a new dataset for training a robust model, named Hagni40, consisting of 94,643 H&E-stained histology image patches at 40 different magnifications across 13 types of cancer, based on The Cancer Genome Atlas. To verify the performance of Hagnifinder, we measured prediction accuracy at maximum allowable differences of 0.5, 1, and 5 between the predicted and the actual magnification. We compared Hagnifinder with state-of-the-art methods on the public BreakHis dataset and on Hagni40. Results: Hagnifinder provides consistent prediction accuracy, with a mean accuracy of 98.9%, across 40 different magnifications and 13 different cancer types when ResNet50 is used as the feature extractor.
Compared with state-of-the-art methods focusing on 4-5 levels of magnification classification, Hagnifinder achieves the best or comparable performance on the BreakHis and Hagni40 datasets. Conclusions: The experimental results suggest that Hagnifinder can be a valuable tool for predicting the magnification associated with any given histology image.
Affiliation(s)
- Hongtai Zhang
- School of Computer and Cyber Sciences, Communication University of China, Beijing 100024, China
- Zaiyi Liu
- Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou 510080, China; Medical Research Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou 510080, China
- Mingli Song
- School of Computer and Cyber Sciences, Communication University of China, Beijing 100024, China; State Key Laboratory of Media Convergence and Communication, Communication University of China, Beijing 100024, China
- Cheng Lu
- Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou 510080, China; Medical Research Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou 510080, China
38
Yavuz A, Alpsoy A, Gedik EO, Celik MY, Bassorgun CI, Unal B, Elpek GO. Artificial intelligence applications in predicting the behavior of gastrointestinal cancers in pathology. Artif Intell Gastroenterol 2022; 3:142-162. [DOI: 10.35712/aig.v3.i5.142] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/16/2022] [Revised: 11/25/2022] [Accepted: 12/14/2022] [Indexed: 12/28/2022] Open
Abstract
Recent research has provided a wealth of data supporting the application of artificial intelligence (AI)-based methods in routine pathology practice. Indeed, it is clear that these methods can significantly support an accurate and rapid diagnosis by eliminating errors, increasing reliability, and improving workflow. The effectiveness of AI in the pathological evaluation of prognostic parameters associated with behavior, course, and treatment in many types of tumors has also been noted. Regarding gastrointestinal system (GIS) cancers, the contribution of AI methods to pathological diagnosis has been investigated in many studies; by contrast, studies focusing on AI applications in evaluating parameters that determine tumor behavior are relatively few. To this end, the potential of AI models has been studied over a broad spectrum, from tumor subtyping to the identification of new digital biomarkers, and the capacity of AI to infer genetic alterations of cancer tissues from digital slides has been demonstrated. Although current data suggest the merit of AI-based approaches in assessing tumor behavior in GIS cancers, a wide range of challenges, from laboratory infrastructure to improving the robustness of algorithms, still needs to be solved before AI applications can be incorporated into real-life GIS pathology practice. This review aims to present data from AI applications in evaluating pathological parameters related to the behavior of GIS cancers, with an overview of the opportunities and challenges encountered in implementing AI in pathology.
Affiliation(s)
- Aysen Yavuz
- Department of Pathology, Akdeniz University Medical School, Antalya 07070, Turkey
- Anil Alpsoy
- Department of Pathology, Akdeniz University Medical School, Antalya 07070, Turkey
- Elif Ocak Gedik
- Department of Pathology, Akdeniz University Medical School, Antalya 07070, Turkey
- Betul Unal
- Department of Pathology, Akdeniz University Medical School, Antalya 07070, Turkey
- Gulsum Ozlem Elpek
- Department of Pathology, Akdeniz University Medical School, Antalya 07070, Turkey
39
Zhao Y, Zhang J, Hu D, Qu H, Tian Y, Cui X. Application of Deep Learning in Histopathology Images of Breast Cancer: A Review. MICROMACHINES 2022; 13:2197. [PMID: 36557496 PMCID: PMC9781697 DOI: 10.3390/mi13122197] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/01/2022] [Revised: 12/04/2022] [Accepted: 12/09/2022] [Indexed: 06/17/2023]
Abstract
With the development of artificial intelligence technology and computer hardware, deep learning algorithms have become a powerful auxiliary tool for medical image analysis. This study used statistical methods to analyze studies related to the detection, segmentation, and classification of breast cancer in pathological images. After analyzing 107 articles on the application of deep learning to pathological images of breast cancer, we organize the work into three directions based on the types of results reported: detection, segmentation, and classification. We introduce and analyze models that performed well in these three directions and summarize related work from recent years. The results demonstrate the significant capability of deep learning in the analysis of breast cancer pathological images. Furthermore, in the classification and detection of pathological images of breast cancer, the accuracy of deep learning algorithms has surpassed that of pathologists in certain circumstances. Our study provides a comprehensive review of the development of breast cancer pathological imaging research and offers reliable recommendations for the structure of deep learning network models in different application scenarios.
Affiliation(s)
- Yue Zhao
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang 110169, China
- Key Laboratory of Data Analytics and Optimization for Smart Industry, Northeastern University, Shenyang 110169, China
- Jie Zhang
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Dayu Hu
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Hui Qu
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Ye Tian
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Xiaoyu Cui
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang 110169, China
- Key Laboratory of Data Analytics and Optimization for Smart Industry, Northeastern University, Shenyang 110169, China
40
Wei T, Yuan X, Gao R, Johnston L, Zhou J, Wang Y, Kong W, Xie Y, Zhang Y, Xu D, Yu Z. Survival prediction of stomach cancer using expression data and deep learning models with histopathological images. Cancer Sci 2022; 114:690-701. [PMID: 36114747 PMCID: PMC9899622 DOI: 10.1111/cas.15592] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2022] [Revised: 08/29/2022] [Accepted: 09/12/2022] [Indexed: 11/30/2022] Open
Abstract
Accurately predicting patient survival is essential for cancer treatment decisions. However, a prognostic prediction model based on the histopathological images of stomach cancer patients has yet to be developed. We propose a deep learning-based model (MultiDeepCox-SC) that predicts overall survival in patients with stomach cancer by integrating histopathological images, clinical data, and gene expression data. MultiDeepCox-SC not only automatically selects the patches most informative for survival prediction, without manual labeling of histopathological images, but also identifies genetic and clinical risk factors associated with survival in stomach cancer. The prognostic accuracy of MultiDeepCox-SC (C-index = 0.744) surpasses that of a model based on histopathological images alone (C-index = 0.660). The risk score of our model remained an independent predictor of survival outcome after adjustment for potential confounders, including pathologic stage, grade, age, race, and gender, on The Cancer Genome Atlas dataset (hazard ratio 1.555, p = 3.53e-08) and on the external test set (hazard ratio 2.912, p = 9.42e-4). Our fully automated online prognostic tool based on histopathological images, clinical data, and gene expression data could be used to improve pathologists' efficiency and accuracy (https://yu.life.sjtu.edu.cn/DeepCoxSC).
Affiliation(s)
- Ting Wei
- Department of Bioinformatics and Biostatistics, School of Life Sciences and Biotechnology, Shanghai Jiao Tong University, Shanghai, China; SJTU-Yale Joint Centre for Biostatistics and Data Sciences, Shanghai Jiao Tong University, Shanghai, China
- Xin Yuan
- Department of Bioinformatics and Biostatistics, School of Life Sciences and Biotechnology, Shanghai Jiao Tong University, Shanghai, China; SJTU-Yale Joint Centre for Biostatistics and Data Sciences, Shanghai Jiao Tong University, Shanghai, China
- Ruitian Gao
- Department of Bioinformatics and Biostatistics, School of Life Sciences and Biotechnology, Shanghai Jiao Tong University, Shanghai, China; SJTU-Yale Joint Centre for Biostatistics and Data Sciences, Shanghai Jiao Tong University, Shanghai, China
- Luke Johnston
- SJTU-Yale Joint Centre for Biostatistics and Data Sciences, Shanghai Jiao Tong University, Shanghai, China; School of Mathematical Sciences, Shanghai Jiao Tong University, Shanghai, China
- Jie Zhou
- SJTU-Yale Joint Centre for Biostatistics and Data Sciences, Shanghai Jiao Tong University, Shanghai, China; School of Mathematical Sciences, Shanghai Jiao Tong University, Shanghai, China
- Yifan Wang
- Department of Bioinformatics and Biostatistics, School of Life Sciences and Biotechnology, Shanghai Jiao Tong University, Shanghai, China; SJTU-Yale Joint Centre for Biostatistics and Data Sciences, Shanghai Jiao Tong University, Shanghai, China
- Weiming Kong
- Institute of Translational Medicine, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Yujing Xie
- SJTU-Yale Joint Centre for Biostatistics and Data Sciences, Shanghai Jiao Tong University, Shanghai, China; School of Mathematical Sciences, Shanghai Jiao Tong University, Shanghai, China
- Yue Zhang
- Department of Bioinformatics and Biostatistics, School of Life Sciences and Biotechnology, Shanghai Jiao Tong University, Shanghai, China; SJTU-Yale Joint Centre for Biostatistics and Data Sciences, Shanghai Jiao Tong University, Shanghai, China
- Dakang Xu
- Faculty of Medical Laboratory Science, Ruijin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Zhangsheng Yu
- Department of Bioinformatics and Biostatistics, School of Life Sciences and Biotechnology, Shanghai Jiao Tong University, Shanghai, China; SJTU-Yale Joint Centre for Biostatistics and Data Sciences, Shanghai Jiao Tong University, Shanghai, China; School of Mathematical Sciences, Shanghai Jiao Tong University, Shanghai, China; Clinical Research Institute, Shanghai Jiao Tong University School of Medicine, Shanghai, China
41
Breast cancer image analysis using deep learning techniques – a survey. HEALTH AND TECHNOLOGY 2022. [DOI: 10.1007/s12553-022-00703-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
42
A Diagnostic Model of Breast Cancer Based on Digital Mammogram Images Using Machine Learning Techniques. APPLIED COMPUTATIONAL INTELLIGENCE AND SOFT COMPUTING 2022. [DOI: 10.1155/2022/3895976] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
Abstract
Breast cancer is one of the most frequently recorded cancers leading to morbidity, and possibly death, among women around the world. Recent research statistics have revealed that one in 8 women in the USA and one in 10 women in Europe are affected by breast cancer. The challenge with this disease is developing an easy and fast diagnostic method. One attractive approach to early breast cancer diagnosis is the analysis of mammogram images of the breast using a computer-aided diagnosis (CAD) tool. This paper first aims to propose an efficient method for diagnosing tumors from mammogram images of breasts using a machine learning approach, and second to develop a CAD software program for breast cancer diagnosis based on the proposed method. The proposed method passes the Mammographic Image Analysis Society (MIAS) dataset through five steps: image preprocessing; image segmentation using the seeded region growing (SRG) algorithm; feature extraction using different feature-extraction classes; selection of important and effective features using the Sequential Forward Selection (SFS) technique; and finally classification, with the Support Vector Machine (SVM) algorithm used as a binary classifier at two levels. The first-level classifier categorizes a given image as normal or abnormal, while the second-level classifier further classifies an abnormal image as either malignant or benign. The proposed method is studied in two phases, training and testing, on the MIAS dataset of mammogram images, using 70% and 30% of the dataset images for the training and testing sets, respectively. The practical implementation of the proposed method and the graphical user interface (GUI) CAD tool are carried out using MATLAB software.
Experimental results show that the accuracy of the proposed method reached 100% in classifying mammogram images as normal or abnormal, while the classification accuracy for benign versus malignant is 87.1%.
43
Hameed Z, Garcia-Zapirain B, Aguirre JJ, Isaza-Ruget MA. Multiclass classification of breast cancer histopathology images using multilevel features of deep convolutional neural network. Sci Rep 2022; 12:15600. [PMID: 36114214 PMCID: PMC9649689 DOI: 10.1038/s41598-022-19278-2] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/24/2021] [Accepted: 08/26/2022] [Indexed: 12/03/2022] Open
Abstract
Breast cancer is a common malignancy and a leading cause of cancer-related deaths in women worldwide. Its early diagnosis can significantly reduce morbidity and mortality rates in women. To this end, histopathological diagnosis is usually followed as the gold standard approach. However, this process is tedious, labor-intensive, and may be subject to inter-reader variability. Accordingly, an automatic diagnostic system can help improve the quality of diagnosis. This paper presents a deep learning approach to automatically classify hematoxylin-eosin-stained breast cancer microscopy images into normal tissue, benign lesion, in situ carcinoma, and invasive carcinoma using our collected dataset. Our proposed model exploits six intermediate layers of the Xception (Extreme Inception) network to retrieve robust and abstract features from input images. First, we optimized the proposed model on the original (unnormalized) dataset using 5-fold cross-validation. Then, we investigated its performance on four normalized datasets resulting from Reinhard, Ruifrok, Macenko, and Vahadane stain normalization. For original images, our proposed framework yielded an accuracy of 98% along with a kappa score of 0.969. It also achieved an average AUC-ROC score of 0.998 as well as a mean AUC-PR value of 0.995. Specifically, for in situ carcinoma and invasive carcinoma, it offered sensitivities of 96% and 99%, respectively. For normalized images, the proposed architecture performed best with Macenko normalization compared with the other three techniques. In this case, the proposed model achieved an accuracy of 97.79% together with a kappa score of 0.965. It also attained an average AUC-ROC score of 0.997 and a mean AUC-PR value of 0.991, and again offered sensitivities of 96% and 99% for in situ carcinoma and invasive carcinoma, respectively.
These results demonstrate that our proposed model outperformed the baseline AlexNet as well as the state-of-the-art VGG16, VGG19, Inception-v3, and Xception models with their default settings. Furthermore, although the stain normalization techniques offered competitive performance, they could not surpass the results on the original dataset.
44
din NMU, Dar RA, Rasool M, Assad A. Breast cancer detection using deep learning: Datasets, methods, and challenges ahead. Comput Biol Med 2022; 149:106073. [DOI: 10.1016/j.compbiomed.2022.106073] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2022] [Revised: 08/21/2022] [Accepted: 08/27/2022] [Indexed: 12/22/2022]
45
Chopra P, Junath N, Singh SK, Khan S, Sugumar R, Bhowmick M. Cyclic GAN Model to Classify Breast Cancer Data for Pathological Healthcare Task. BIOMED RESEARCH INTERNATIONAL 2022; 2022:6336700. [PMID: 35909482 PMCID: PMC9334078 DOI: 10.1155/2022/6336700] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/19/2022] [Revised: 06/25/2022] [Accepted: 07/04/2022] [Indexed: 11/17/2022]
Abstract
An algorithm framework based on CycleGAN and an upgraded dual-path network (DPN) is suggested to address the difficulties of uneven staining in pathological images and of discriminating benign from malignant cells. CycleGAN is used for color normalization of the pathological images to tackle the problem of uneven staining; however, the detection model resulting from normalization alone is ineffective. By overlapping the images, the DPN adds small convolutions, deconvolutions, and attention mechanisms to enhance the model's ability to classify the texture features of pathological images in the BreaKHis dataset. The parameters considered for measuring the accuracy of the proposed model are the false-positive rate, false-negative rate, recall, precision, and F1 score. Several experiments are carried out over these parameters, such as comparing benign-versus-malignant classification accuracy under different normalization methods, comparing image-level and patient-level accuracy using different CNN models, and correlating the correctness of the DPN68-A network with different deep learning models and other classification algorithms at all magnifications. The results prove that the proposed DPN68-A network can effectively classify benign and malignant breast cancer pathological images at various magnifications. The proposed model can also better assist pathologists in diagnosing patients by synthesizing images of different magnifications in the clinical stage.
Affiliation(s)
- Pooja Chopra
- School of Computer Applications, Lovely Professional University, Phagwara, Punjab, India
- N. Junath
- University of Technology and Applied Science Ibri, Oman
- Sitesh Kumar Singh
- Department of Civil Engineering, Wollega University, Nekemte, Oromia, Ethiopia
- Shakir Khan
- College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh, Saudi Arabia
- R. Sugumar
- Department of Computer Science and Engineering, Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Chennai 601205, India
- Mithun Bhowmick
- Bengal College of Pharmaceutical Sciences and Research, Durgapur, West Bengal, India
46
Alzoubi I, Bao G, Zhang R, Loh C, Zheng Y, Cherepanoff S, Gracie G, Lee M, Kuligowski M, Alexander KL, Buckland ME, Wang X, Graeber MB. An Open-Source AI Framework for the Analysis of Single Cells in Whole-Slide Images with a Note on CD276 in Glioblastoma. Cancers (Basel) 2022; 14:3441. [PMID: 35884502 PMCID: PMC9316952 DOI: 10.3390/cancers14143441] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2022] [Revised: 07/10/2022] [Accepted: 07/13/2022] [Indexed: 02/04/2023] Open
Abstract
Routine examination of entire histological slides at cellular resolution poses a significant if not insurmountable challenge to human observers. However, high-resolution data such as the cellular distribution of proteins in tissues, e.g., those obtained following immunochemical staining, are highly desirable. Our present study extends the applicability of the PathoFusion framework to the cellular level. We illustrate our approach using the detection of CD276 immunoreactive cells in glioblastoma as an example. Following automatic identification by means of PathoFusion's bifocal convolutional neural network (BCNN) model, individual cells are automatically profiled and counted. Only discriminable cells selected through data filtering and thresholding were segmented for cell-level analysis. Subsequently, we converted the detection signals into the corresponding heatmaps visualizing the distribution of the detected cells in entire whole-slide images of adjacent H&E-stained sections using the Discrete Wavelet Transform (DWT). Our results demonstrate that PathoFusion is capable of autonomously detecting and counting individual immunochemically labelled cells with a high prediction performance of 0.992 AUC and 97.7% accuracy. The data can be used for whole-slide cross-modality analyses, e.g., relationships between immunochemical signals and anaplastic histological features. PathoFusion has the potential to be applied to additional problems that seek to correlate heterogeneous data streams and to serve as a clinically applicable, weakly supervised system for histological image analyses in (neuro)pathology.
Affiliation(s)
- Islam Alzoubi
- School of Computer Science, The University of Sydney, J12/1 Cleveland St, Sydney, NSW 2008, Australia
- Guoqing Bao
- School of Computer Science, The University of Sydney, J12/1 Cleveland St, Sydney, NSW 2008, Australia
- Rong Zhang
- School of Computer Science, The University of Sydney, J12/1 Cleveland St, Sydney, NSW 2008, Australia
- Christina Loh
- Ken Parker Brain Tumour Research Laboratories, Brain and Mind Centre, Faculty of Medicine and Health, University of Sydney, Camperdown, NSW 2050, Australia
- Yuqi Zheng
- Ken Parker Brain Tumour Research Laboratories, Brain and Mind Centre, Faculty of Medicine and Health, University of Sydney, Camperdown, NSW 2050, Australia
- Svetlana Cherepanoff
- St Vincent’s Hospital, Victoria Street, Darlinghurst, NSW 2010, Australia
- Gary Gracie
- St Vincent’s Hospital, Victoria Street, Darlinghurst, NSW 2010, Australia
- Maggie Lee
- Department of Neuropathology, RPA Hospital and Brain and Mind Centre, Faculty of Medicine and Health, The University of Sydney, Sydney, NSW 2006, Australia
- Michael Kuligowski
- Sydney Microscopy and Microanalysis, The University of Sydney, Sydney, NSW 2006, Australia
- Kimberley L. Alexander
- Department of Neuropathology, RPA Hospital and Brain and Mind Centre, Faculty of Medicine and Health, The University of Sydney, Sydney, NSW 2006, Australia
- Neurosurgery Department, Chris O’Brien Lifehouse, Camperdown, NSW 2050, Australia
- School of Medical Sciences, Faculty of Medicine and Health, University of Sydney, Camperdown, NSW 2050, Australia
- Michael E. Buckland
- Department of Neuropathology, RPA Hospital and Brain and Mind Centre, Faculty of Medicine and Health, The University of Sydney, Sydney, NSW 2006, Australia
- Xiuying Wang
- School of Computer Science, The University of Sydney, J12/1 Cleveland St, Sydney, NSW 2008, Australia
- Manuel B. Graeber
- Ken Parker Brain Tumour Research Laboratories, Brain and Mind Centre, Faculty of Medicine and Health, University of Sydney, Camperdown, NSW 2050, Australia
47
Beyond the colors: enhanced deep learning on invasive ductal carcinoma. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07478-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
48
Wang CW, Chang CC, Lee YC, Lin YJ, Lo SC, Hsu PC, Liou YA, Wang CH, Chao TK. Weakly supervised deep learning for prediction of treatment effectiveness on ovarian cancer from histopathology images. Comput Med Imaging Graph 2022; 99:102093. [PMID: 35752000 DOI: 10.1016/j.compmedimag.2022.102093] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2021] [Revised: 05/13/2022] [Accepted: 06/03/2022] [Indexed: 11/30/2022]
Abstract
Despite the progress made during the last two decades in surgery and chemotherapy for ovarian cancer, more than 70% of patients with advanced disease experience recurrence and die of the disease. Surgical debulking of tumors together with chemotherapy is the conventional treatment for advanced carcinoma, but patients remain at great risk of recurrence and of developing drug resistance, and only about 30% of the women affected will be cured. Bevacizumab is a humanized monoclonal antibody that blocks VEGF signaling in cancer, inhibits angiogenesis, and causes tumor shrinkage; it has recently been approved by the FDA for advanced ovarian cancer in combination with chemotherapy. Considering the cost and potential toxicity of these drugs, and the finding that only a portion of patients will benefit from them, the identification of new predictive methods for the treatment of ovarian cancer remains an urgent unmet medical need. In this study, we develop weakly supervised deep learning approaches to accurately predict the therapeutic effect of bevacizumab in ovarian cancer patients from histopathological hematoxylin and eosin-stained whole-slide images, without any pathologist-provided locally annotated regions. To the authors' best knowledge, this is the first model demonstrated to be effective for predicting the therapeutic effect of bevacizumab in patients with epithelial ovarian cancer. Quantitative evaluation on a whole-section dataset shows that the proposed method achieves high accuracy (0.882 ± 0.06), precision (0.921 ± 0.04), recall (0.912 ± 0.03), and F-measure (0.917 ± 0.07) using 5-fold cross-validation, and outperforms two state-of-the-art deep learning approaches (Coudray et al., 2018; Campanella et al., 2019). On an independent TMA testing set, the three proposed methods obtain promising results with high recall (sensitivity): 0.946, 0.893, and 0.964, respectively.
These results suggest that the proposed method could help guide treatment by filtering out patients who are unlikely to show a therapeutic response, sparing them further treatment, while keeping patients with a positive response in the treatment process. Furthermore, in a Cox proportional hazards analysis, patients predicted by the model to be non-responders had a much higher risk of cancer recurrence (hazard ratio = 13.727) than patients predicted to respond, with statistical significance (p < 0.05).
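The evaluation metrics this abstract reports (accuracy, precision, recall/sensitivity, F-measure) all derive from a binary confusion matrix; a minimal sketch on hypothetical labels (not the paper's evaluation code or data) is:

```python
# Sketch of the reported evaluation metrics computed from binary labels.
# The y_true / y_pred values below are hypothetical illustrations.

def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F-measure from a confusion matrix."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0  # a.k.a. sensitivity
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    return accuracy, precision, recall, f_measure

# Hypothetical fold: 1 = responder to bevacizumab, 0 = non-responder.
y_true = [1, 1, 1, 0, 0, 1, 0, 1]
y_pred = [1, 1, 0, 0, 1, 1, 0, 1]
acc, prec, rec, f1 = binary_metrics(y_true, y_pred)
```

In a k-fold setting these values would be computed per fold and reported as mean ± standard deviation, as in the figures quoted above.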
Affiliation(s)
- Ching-Wei Wang
- Graduate Institute of Biomedical Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan; Graduate Institute of Applied Science and Technology, National Taiwan University of Science and Technology, Taipei, Taiwan
- Cheng-Chang Chang
- Department of Gynecology and Obstetrics, Tri-Service General Hospital, Taipei, Taiwan; Graduate Institute of Medical Sciences, National Defense Medical Center, Taipei, Taiwan
- Yu-Ching Lee
- Graduate Institute of Applied Science and Technology, National Taiwan University of Science and Technology, Taipei, Taiwan
- Yi-Jia Lin
- Department of Pathology, Tri-Service General Hospital, Taipei, Taiwan; Institute of Pathology and Parasitology, National Defense Medical Center, Taipei, Taiwan
- Shih-Chang Lo
- Graduate Institute of Biomedical Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan
- Po-Chao Hsu
- Department of Gynecology and Obstetrics, Tri-Service General Hospital, Taipei, Taiwan; Graduate Institute of Medical Sciences, National Defense Medical Center, Taipei, Taiwan
- Yi-An Liou
- Graduate Institute of Biomedical Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan
- Chih-Hung Wang
- Department of Otolaryngology-Head and Neck Surgery, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan
- Tai-Kuang Chao
- Department of Pathology, Tri-Service General Hospital, Taipei, Taiwan; Institute of Pathology and Parasitology, National Defense Medical Center, Taipei, Taiwan
49
Histopathological image recognition of breast cancer based on three-channel reconstructed color slice feature fusion. Biochem Biophys Res Commun 2022; 619:159-165. [DOI: 10.1016/j.bbrc.2022.06.004] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2022] [Revised: 05/22/2022] [Accepted: 06/02/2022] [Indexed: 11/22/2022]
50
Iqbal S, Qureshi AN. Deep-Hist: Breast cancer diagnosis through histopathological images using convolution neural network. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2022. [DOI: 10.3233/jifs-213158] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Breast cancer diagnosis relies on histopathological images to achieve the best results according to clinical standards, and detailed diagnosis requires microscopic analysis. During analysis, pathologists examine breast cancer tissue at different magnification levels; this takes a long time, can be hampered by subjective human interpretation, and requires expertise at each magnification. A single patient usually requires dozens of such images during examination. Since labelling the data is an expensive task, conventional image-based classification assumes that all images from a patient share the same label, an assumption that is not usually tested in practice. In this study, we investigate the significance of machine learning techniques in computer-aided diagnostic systems based on the analysis of histopathological breast cancer images. The publicly available BreakHis dataset, containing around 8,000 histopathological images of breast tumours, is used for the experiments. The recently proposed non-parametric approach shows interesting results when compared in detail with machine learning approaches. Our proposed model 'Deep-Hist' is magnification independent and achieves >92.46% accuracy with Stochastic Gradient Descent (SGD), which is better than pretrained models for image classification. Hence, our approach can be used in research and clinical environments to provide second opinions very close to experts' intuition.
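The SGD optimizer named in this abstract updates model weights one example at a time rather than over the full dataset. A minimal sketch on a toy logistic-regression classifier (Deep-Hist itself is a convolutional neural network; the 1D feature data below is hypothetical) is:

```python
import math
import random

# Minimal illustration of stochastic gradient descent (SGD): one gradient
# step per training example, with per-epoch shuffling. Sketch only; not the
# Deep-Hist model, which is a CNN.

def sgd_logistic(data, lr=0.5, epochs=200, seed=0):
    """Train logistic regression by SGD on (feature_vector, label) pairs."""
    rng = random.Random(seed)
    w, b = [0.0] * len(data[0][0]), 0.0
    for _ in range(epochs):
        rng.shuffle(data)  # note: shuffles the caller's list in place
        for x, y in data:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y  # gradient of the log loss w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

# Hypothetical 1D feature separating two classes.
data = [([0.1], 0), ([0.2], 0), ([0.8], 1), ([0.9], 1)]
w, b = sgd_logistic(data)

def predict(x):
    return 1 if 1.0 / (1.0 + math.exp(-(w[0] * x + b))) > 0.5 else 0
```

The same update rule drives CNN training; only the gradient computation (backpropagation through the network) differs.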
Affiliation(s)
- Saeed Iqbal
- Faculty of Information Technology, University of Central Punjab, Lahore, Pakistan
- Adnan N. Qureshi
- Faculty of Information Technology, University of Central Punjab, Lahore, Pakistan