1
Nakhli R, Rich K, Zhang A, Darbandsari A, Shenasa E, Hadjifaradji A, Thiessen S, Milne K, Jones SJM, McAlpine JN, Nelson BH, Gilks CB, Farahani H, Bashashati A. VOLTA: an enVironment-aware cOntrastive ceLl represenTation leArning for histopathology. Nat Commun 2024; 15:3942. [PMID: 38729933] [PMCID: PMC11087497] [DOI: 10.1038/s41467-024-48062-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 03/29/2023] [Accepted: 04/19/2024] [Indexed: 05/12/2024]
Abstract
In clinical oncology, many diagnostic tasks rely on the identification of cells in histopathology images. Supervised machine learning techniques require labels, yet providing manual cell annotations is time-consuming. In this paper, we propose a self-supervised framework (enVironment-aware cOntrastive cell represenTation learning: VOLTA) for cell representation learning in histopathology images, using a technique that accounts for each cell's mutual relationship with its environment. We subject our model to extensive experiments on data collected from multiple institutions, comprising over 800,000 cells and six cancer types. To showcase the potential of our proposed framework, we apply VOLTA to ovarian and endometrial cancers and demonstrate that our cell representations can be utilized to identify the known histotypes of ovarian cancer and provide insights that link histopathology and molecular subtypes of endometrial cancer. Unlike supervised models, we provide a framework that can empower discoveries without any annotation data, even in situations where sample sizes are limited.
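The contrastive objective this abstract describes can be illustrated with a minimal InfoNCE-style loss between cell and environment embeddings. This is a generic sketch with made-up embeddings, not VOLTA's actual architecture or loss, which are specified in the paper itself:

```python
import numpy as np

def info_nce_loss(cell_emb, env_emb, temperature=0.1):
    """InfoNCE-style contrastive loss: each cell embedding should be
    closest to the embedding of its own environment (positive pair)
    and far from the environments of other cells (negatives)."""
    # L2-normalize so dot products are cosine similarities
    cell = cell_emb / np.linalg.norm(cell_emb, axis=1, keepdims=True)
    env = env_emb / np.linalg.norm(env_emb, axis=1, keepdims=True)
    logits = cell @ env.T / temperature          # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # positives lie on the diagonal: cell i pairs with environment i
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
cells = rng.normal(size=(8, 32))
envs = cells + 0.01 * rng.normal(size=(8, 32))  # nearly aligned positives
loss_aligned = info_nce_loss(cells, envs)
loss_random = info_nce_loss(cells, rng.normal(size=(8, 32)))
```

Aligned cell/environment pairs produce a much lower loss than random pairings, which is the signal the encoder is trained to exploit.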
Affiliation(s)
- Ramin Nakhli
  - School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada
- Katherine Rich
  - Bioinformatics Graduate Program, University of British Columbia, Vancouver, Canada
- Allen Zhang
  - Department of Pathology and Laboratory Medicine, University of British Columbia, Vancouver, BC, Canada
- Amirali Darbandsari
  - Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
- Elahe Shenasa
  - Department of Pathology and Laboratory Medicine, University of British Columbia, Vancouver, BC, Canada
- Amir Hadjifaradji
  - School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada
- Sidney Thiessen
  - Deeley Research Centre, BC Cancer Agency, Victoria, BC, Canada
- Katy Milne
  - Deeley Research Centre, BC Cancer Agency, Victoria, BC, Canada
- Steven J M Jones
  - Canada's Michael Smith Genome Sciences Centre, BC Cancer Research Institute, Vancouver, Canada
  - Department of Medical Genetics, University of British Columbia, Vancouver, Canada
- Jessica N McAlpine
  - Department of Obstetrics and Gynecology, University of British Columbia, Vancouver, BC, Canada
- Brad H Nelson
  - Deeley Research Centre, BC Cancer Agency, Victoria, BC, Canada
- C Blake Gilks
  - Department of Pathology and Laboratory Medicine, University of British Columbia, Vancouver, BC, Canada
- Hossein Farahani
  - School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada
- Ali Bashashati
  - School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada
  - Department of Pathology and Laboratory Medicine, University of British Columbia, Vancouver, BC, Canada
  - Canada's Michael Smith Genome Sciences Centre, BC Cancer Research Institute, Vancouver, Canada
2
Saha PK, Nadeem SA, Comellas AP. A Survey on Artificial Intelligence in Pulmonary Imaging. Wiley Interdiscip Rev Data Min Knowl Discov 2023; 13:e1510. [PMID: 38249785] [PMCID: PMC10796150] [DOI: 10.1002/widm.1510] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 12/18/2022] [Accepted: 06/21/2023] [Indexed: 01/23/2024]
Abstract
Over the last decade, deep learning (DL) has driven a paradigm shift in computer vision and image recognition, creating widespread opportunities for using artificial intelligence in research as well as industrial applications. DL has been extensively studied in medical imaging applications, including those related to pulmonary diseases. Chronic obstructive pulmonary disease, asthma, lung cancer, pneumonia, and, more recently, COVID-19 are common lung diseases affecting nearly 7.4% of the world's population. Pulmonary imaging has been widely investigated toward improving our understanding of disease etiologies, early diagnosis, and assessment of disease progression and clinical outcomes. DL has been broadly applied to solve various pulmonary image processing challenges, including classification, recognition, registration, and segmentation. This paper presents a survey of pulmonary diseases, the roles of imaging in translational and clinical pulmonary research, and applications of different DL architectures and methods in pulmonary imaging, with emphasis on DL-based segmentation of major pulmonary anatomies such as lung volumes, lung lobes, pulmonary vessels, and airways, as well as thoracic musculoskeletal anatomies related to pulmonary diseases.
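For the segmentation tasks this survey emphasizes, the standard overlap metric is the Dice coefficient. A minimal NumPy sketch with toy binary masks (not data from the survey) is:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|), the standard overlap metric used to
    evaluate segmentation of lungs, lobes, vessels and airways."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 2D "lung" masks: the prediction overlaps the ground truth partially
truth = np.zeros((8, 8), dtype=int); truth[2:6, 2:6] = 1  # 16 pixels
pred = np.zeros((8, 8), dtype=int); pred[3:7, 2:6] = 1    # 16 pixels, 12 overlap
score = dice_coefficient(pred, truth)                     # 2*12/(16+16) = 0.75
```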
Affiliation(s)
- Punam K Saha
  - Departments of Radiology and Electrical and Computer Engineering, University of Iowa, Iowa City, IA, 52242
3
Jiao J, Xiao X, Li Z. dm-GAN: Distributed multi-latent code inversion enhanced GAN for fast and accurate breast X-ray image automatic generation. Math Biosci Eng 2023; 20:19485-19503. [PMID: 38052611] [DOI: 10.3934/mbe.2023863] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 12/07/2023]
Abstract
Breast cancer seriously threatens women's physical and mental health. Mammography is one of the most effective methods for breast cancer diagnosis, with artificial intelligence algorithms used to identify diverse breast masses. Popular intelligent diagnosis methods require a large number of breast images for training; however, collecting and labeling many breast images manually is extremely time consuming and inefficient. In this paper, we propose a distributed multi-latent code inversion enhanced Generative Adversarial Network (dm-GAN) for fast, accurate and automatic breast image generation. The proposed dm-GAN takes advantage of the generator and discriminator of the GAN framework to achieve automatic image generation. The new generator in dm-GAN adopts a multi-latent code inverse mapping method to simplify the data fitting process of GAN generation and improve the accuracy of image generation, while a multi-discriminator structure is used to enhance the discrimination accuracy. The experimental results show that the proposed dm-GAN automatically generates breast images with higher accuracy, achieving up to a 1.84 dB higher Peak Signal-to-Noise Ratio (PSNR), a 5.61% lower Fréchet Inception Distance (FID), and 1.38x faster generation than the state-of-the-art.
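The PSNR gain quoted above uses the standard definition, 10·log10(MAX²/MSE). A short NumPy sketch with toy images (not dm-GAN outputs) shows the computation and why higher is better:

```python
import numpy as np

def psnr(reference, generated, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB: 10*log10(MAX^2 / MSE).
    Higher values mean the generated image is closer to the reference."""
    mse = np.mean((reference.astype(np.float64) - generated.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(1)
ref = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
mildly_noisy = ref + rng.normal(0, 5, size=ref.shape)    # small distortion
heavily_noisy = ref + rng.normal(0, 25, size=ref.shape)  # larger distortion
p_mild = psnr(ref, mildly_noisy)
p_heavy = psnr(ref, heavily_noisy)
```

The mildly distorted image scores a higher PSNR than the heavily distorted one, matching the paper's use of PSNR as a fidelity measure.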
Affiliation(s)
- Jiajia Jiao
  - College of Information Engineering, Shanghai Maritime University, Shanghai 201306, China
- Xiao Xiao
  - College of Information Engineering, Shanghai Maritime University, Shanghai 201306, China
- Zhiyu Li
  - Department of Medical Imaging, Shanghai East Hospital, Tongji University School of Medicine, Shanghai 201306, China
4
Xiang H, Shen J, Yan Q, Xu M, Shi X, Zhu X. Multi-scale representation attention based deep multiple instance learning for gigapixel whole slide image analysis. Med Image Anal 2023; 89:102890. [PMID: 37467642] [DOI: 10.1016/j.media.2023.102890] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 12/27/2022] [Revised: 04/22/2023] [Accepted: 07/03/2023] [Indexed: 07/21/2023]
Abstract
Recently, convolutional neural networks (CNNs) that use whole slide images (WSIs) directly for tumor diagnosis and analysis have attracted considerable attention, because they utilize only the slide-level label for model training, without any additional annotations. However, directly handling gigapixel WSIs remains challenging, due to the billions of pixels and intra-variations in each WSI. To overcome this problem, in this paper we propose a novel end-to-end interpretable deep multiple instance learning (MIL) framework for WSI analysis, using a two-branch deep neural network and a multi-scale representation attention mechanism to directly extract features from all patches of each WSI. Specifically, we first divide each WSI into bag-, patch- and cell-level images, and then assign the slide-level label to its corresponding bag-level images, so that WSI classification becomes a MIL problem. Additionally, we design a novel multi-scale representation attention mechanism and embed it into a two-branch deep network to simultaneously mine the bag with a correct label, the significant patches and their cell-level information. Extensive experiments demonstrate the superior performance of the proposed framework over recent state-of-the-art methods, in terms of classification accuracy and model interpretability. All source code is released at: https://github.com/xhangchen/MRAN/.
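The attention idea behind this abstract can be illustrated with a generic attention-based MIL pooling step. This is a simplified, single-scale sketch with random features and hypothetical weight matrices, not the paper's multi-scale mechanism:

```python
import numpy as np

def softmax(x):
    x = x - x.max()  # numerical stability
    e = np.exp(x)
    return e / e.sum()

def attention_mil_pool(patch_feats, w, v):
    """Attention-based MIL pooling: score each patch embedding,
    softmax the scores into attention weights, and return the
    weighted bag embedding plus the weights (for interpretability)."""
    scores = np.tanh(patch_feats @ v) @ w  # one scalar score per patch
    attn = softmax(scores)                 # weights sum to 1 over the bag
    bag_embedding = attn @ patch_feats     # weighted average of patch features
    return bag_embedding, attn

rng = np.random.default_rng(2)
patches = rng.normal(size=(50, 16))  # 50 patch embeddings from one slide
v = rng.normal(size=(16, 8))         # hypothetical projection weights
w = rng.normal(size=(8,))            # hypothetical scoring weights
bag, attn = attention_mil_pool(patches, w, v)
```

The attention weights indicate which patches drive the slide-level prediction, which is what makes such models interpretable.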
Affiliation(s)
- Hangchen Xiang
  - School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
- Junyi Shen
  - Division of Liver Surgery, Department of General Surgery, West China Hospital, Sichuan University, Chengdu 610044, China
- Qingguo Yan
  - Department of Pathology, Key Laboratory of Resource Biology and Biotechnology in Western China, Ministry of Education, School of Medicine, Northwest University, 229 Taibai North Road, Xi'an 710069, China
- Meilian Xu
  - School of Electronic Information and Artificial Intelligence, Leshan Normal University, Leshan 614000, China
- Xiaoshuang Shi
  - School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
- Xiaofeng Zhu
  - School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
5
Zheng T, Chen W, Li S, Quan H, Zou M, Zheng S, Zhao Y, Gao X, Cui X. Learning how to detect: A deep reinforcement learning method for whole-slide melanoma histopathology images. Comput Med Imaging Graph 2023; 108:102275. [PMID: 37567046] [DOI: 10.1016/j.compmedimag.2023.102275] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 04/08/2023] [Revised: 07/18/2023] [Accepted: 07/22/2023] [Indexed: 08/13/2023]
Abstract
Cutaneous melanoma represents one of the most life-threatening malignancies. Histopathological image analysis serves as a vital tool for early melanoma detection. Deep neural network (DNN) models are frequently employed to aid pathologists in enhancing the efficiency and accuracy of diagnoses. However, due to the paucity of well-annotated, high-resolution, whole-slide histopathology image (WSI) datasets, WSIs are typically fragmented into numerous patches during the model training and testing stages. This process disregards the inherent interconnectedness among patches, potentially impeding the models' performance. Additionally, the presence of excess, non-contributing patches extends processing times and introduces substantial computational burdens. To mitigate these issues, we draw inspiration from the clinical decision-making processes of dermatopathologists to propose an innovative, weakly supervised deep reinforcement learning framework, titled Fast medical decision-making in melanoma histopathology images (FastMDP-RL). This framework expedites model inference by reducing the number of irrelevant patches identified within WSIs. FastMDP-RL integrates two DNN-based agents: the search agent (SeAgent) and the decision agent (DeAgent). The SeAgent initiates actions, steered by the image features observed in the current viewing field at various magnifications. Simultaneously, the DeAgent provides labeling probabilities for each patch. We utilize multi-instance learning (MIL) to construct a teacher-guided model (MILTG), serving a dual purpose: rewarding the SeAgent and guiding the DeAgent. Our evaluations were conducted using two melanoma datasets: the publicly accessible TCIA-CM dataset and the proprietary MELSC dataset. Our experimental findings affirm FastMDP-RL's ability to expedite inference and accurately predict WSIs, even in the absence of pixel-level annotations. Moreover, our research investigates the WSI-based interactive environment, encompassing the design of agents, state and reward functions, and feature extractors suitable for melanoma tissue images. This investigation offers valuable insights and references for researchers engaged in related studies. The code is available at: https://github.com/titizheng/FastMDP-RL.
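The reward-driven patch search described here can be caricatured by a toy value-learning loop: an agent learns which patch is worth inspecting from a reward signal. This is an epsilon-greedy sketch with a hypothetical per-patch relevance signal standing in for the MIL teacher's reward; the paper's actual state, action and reward design is far richer:

```python
import numpy as np

def epsilon_greedy_patch_search(rewards, episodes=500, eps=0.1, lr=0.1, seed=3):
    """Toy value learning: the agent repeatedly picks a patch
    (epsilon-greedy over learned values), observes a reward, and
    nudges its value estimate for that patch toward the reward."""
    rng = np.random.default_rng(seed)
    q = np.zeros(len(rewards))  # learned value per patch
    for _ in range(episodes):
        if rng.random() < eps:
            a = int(rng.integers(len(rewards)))  # explore a random patch
        else:
            a = int(np.argmax(q))                # exploit best patch so far
        q[a] += lr * (rewards[a] - q[a])         # incremental value update
    return q

# Hypothetical slide: patch 2 is the only diagnostically relevant one
relevance = np.array([0.0, 0.1, 1.0, 0.05])
q = epsilon_greedy_patch_search(relevance)
```

After training, the agent's highest value sits on the relevant patch, mirroring how the SeAgent learns to spend inference time only on informative regions.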
Affiliation(s)
- Tingting Zheng
  - College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Weixing Chen
  - Shenzhen College of Advanced Technology, University of the Chinese Academy of Sciences, Beijing, China
- Shuqin Li
  - College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Hao Quan
  - College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Mingchen Zou
  - College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Song Zheng
  - National and Local Joint Engineering Research Center of Immunodermatological Theranostics, Department of Dermatology, The First Hospital of China Medical University, Shenyang, China
- Yue Zhao
  - College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
  - National and Local Joint Engineering Research Center of Immunodermatological Theranostics, Department of Dermatology, The First Hospital of China Medical University, Shenyang, China
- Xinghua Gao
  - National and Local Joint Engineering Research Center of Immunodermatological Theranostics, Department of Dermatology, The First Hospital of China Medical University, Shenyang, China
- Xiaoyu Cui
  - College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
6
Shiffman S, Rios Piedra EA, Adedeji AO, Ruff CF, Andrews RN, Katavolos P, Liu E, Forster A, Brumm J, Fuji RN, Sullivan R. Analysis of cellularity in H&E-stained rat bone marrow tissue via deep learning. J Pathol Inform 2023; 14:100333. [PMID: 37743975] [PMCID: PMC10514468] [DOI: 10.1016/j.jpi.2023.100333] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 06/22/2023] [Revised: 08/18/2023] [Accepted: 08/19/2023] [Indexed: 09/26/2023]
Abstract
Our objective was to develop an automated deep-learning-based method to evaluate cellularity in rat bone marrow hematoxylin and eosin whole slide images for preclinical safety assessment. We trained a shallow CNN for segmenting marrow, two Mask R-CNN models for segmenting megakaryocytes (MKCs) and small hematopoietic cells (SHCs), and a SegNet model for segmenting red blood cells. We incorporated the models into a pipeline that identifies and counts MKCs and SHCs in rat bone marrow. We compared the cell segmentations and counts that our method generated to those that pathologists generated on 10 slides, spanning a range of cell depletion levels, from 10 studies. For SHCs, we also compared the counts that our method generated to counts generated by Cellpose and Stardist. The median Dice and object Dice scores for MKCs using our method vs pathologist consensus were comparable to the inter- and intra-pathologist variation, with overlapping first-third quartile ranges. For SHCs, the median scores were close, with first-third quartile ranges partially overlapping intra-pathologist variation. For SHCs, counts from our method were closer to pathologist counts than those from Cellpose and Stardist, with a narrower 95% limits-of-agreement range. The performance of the bone marrow analysis pipeline supports its incorporation into routine use as an aid for hematotoxicity assessment by pathologists. The pipeline could help expedite hematotoxicity assessment in preclinical studies and, consequently, drug development. The method may also enable meta-analysis of rat bone marrow characteristics from future and historical whole slide images and may generate new biological insights from cross-study comparisons.
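The 95% limits of agreement used to compare automated and pathologist counts is the standard Bland-Altman interval, mean difference ± 1.96 × SD of the differences. A sketch with hypothetical per-slide counts (not the study's data):

```python
import numpy as np

def limits_of_agreement(counts_a, counts_b):
    """Bland-Altman 95% limits of agreement between two raters:
    mean difference +/- 1.96 * SD of the differences. A narrower
    interval indicates closer agreement between methods."""
    diff = np.asarray(counts_a, float) - np.asarray(counts_b, float)
    bias = diff.mean()            # systematic over/under-counting
    sd = diff.std(ddof=1)         # spread of disagreement
    return bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical per-slide SHC counts: pipeline vs pathologist
pipeline = [980, 1510, 760, 2020, 1230]
pathologist = [1000, 1500, 800, 2000, 1250]
low, high = limits_of_agreement(pipeline, pathologist)
```

The interval is centered on the bias (here -10 cells), and its width is what the paper uses to rank the pipeline against Cellpose and Stardist.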
Affiliation(s)
- Smadar Shiffman
  - Genentech Research and Early Development (gRED), Department of Safety Assessment, Genentech Inc., South San Francisco, USA
- Edgar A. Rios Piedra
  - Genentech Research and Early Development (gRED), Department of Safety Assessment, Genentech Inc., South San Francisco, USA
- Adeyemi O. Adedeji
  - Genentech Research and Early Development (gRED), Department of Safety Assessment, Genentech Inc., South San Francisco, USA
- Catherine F. Ruff
  - Genentech Research and Early Development (gRED), Department of Safety Assessment, Genentech Inc., South San Francisco, USA
- Rachel N. Andrews
  - Genentech Research and Early Development (gRED), Department of Safety Assessment, Genentech Inc., South San Francisco, USA
- Paula Katavolos
  - Genentech Research and Early Development (gRED), Department of Safety Assessment, Genentech Inc., South San Francisco, USA
  - Bristol Myers Squibb, New Brunswick, NJ 08901, USA
- Evan Liu
  - Genentech Research and Early Development (gRED), Department of Development Sciences Informatics, Genentech Inc., South San Francisco, USA
- Ashley Forster
  - Genentech Research and Early Development (gRED), Department of Safety Assessment, Genentech Inc., South San Francisco, USA
  - University of Pennsylvania School of Veterinary Medicine, Philadelphia, PA 19104, USA
- Jochen Brumm
  - Genentech Research and Early Development (gRED), Department of Nonclinical Biostatistics, Genentech Inc., South San Francisco, USA
- Reina N. Fuji
  - Genentech Research and Early Development (gRED), Department of Safety Assessment, Genentech Inc., South San Francisco, USA
- Ruth Sullivan
  - Genentech Research and Early Development (gRED), Department of Safety Assessment, Genentech Inc., South San Francisco, USA
7
Brémond-Martin C, Simon-Chane C, Clouchoux C, Histace A. Brain organoid data synthesis and evaluation. Front Neurosci 2023; 17:1220172. [PMID: 37650105] [PMCID: PMC10465177] [DOI: 10.3389/fnins.2023.1220172] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 05/10/2023] [Accepted: 07/24/2023] [Indexed: 09/01/2023]
Abstract
Introduction: Datasets containing only a few images are common in the biomedical field. This poses a global challenge for the development of robust deep-learning analysis tools, which require large numbers of images. Generative Adversarial Networks (GANs) are an increasingly used solution for expanding small datasets, particularly in the biomedical domain. However, the validation of synthetic images by metrics remains controversial, and psychovisual evaluations are time consuming. Methods: We augment a small brain organoid bright-field database of 40 images using several GAN optimizations. We compare these synthetic images to the original dataset using similarity metrics, and we perform a psychovisual evaluation of the 240 generated images. Eight biological experts labeled the full dataset (280 images) as synthetic or natural using custom-built software. We calculate the error rate per loss optimization as well as the hesitation time. We then compare these results to those provided by the similarity metrics. Finally, we test the psychovalidated images in the training step of a segmentation task. Results and discussion: Generated images are judged as natural as the original dataset, with no increase in the experts' hesitation time. Experts are particularly misled by perceptual and Wasserstein loss optimizations; these optimizations also yield the most qualitative images and those rated most similar to the original dataset by the metrics. We do not observe a strong correlation, but we find links between some metrics and the psychovisual decisions depending on the kind of generation. Particular blur metric combinations could perhaps replace psychovisual evaluation. Segmentation trained on the most psychovalidated images is the most accurate.
Affiliation(s)
- Clara Brémond-Martin
  - ETIS Laboratory UMR 8051 (CY Cergy Paris Université, ENSEA, CNRS), Cergy, France
  - Witsee, Neoxia, Paris, France
- Camille Simon-Chane
  - ETIS Laboratory UMR 8051 (CY Cergy Paris Université, ENSEA, CNRS), Cergy, France
- Aymeric Histace
  - ETIS Laboratory UMR 8051 (CY Cergy Paris Université, ENSEA, CNRS), Cergy, France
8
Ke J, Shen Y, Lu Y, Guo Y, Shen D. Mine local homogeneous representation by interaction information clustering with unsupervised learning in histopathology images. Comput Methods Programs Biomed 2023; 235:107520. [PMID: 37031665] [DOI: 10.1016/j.cmpb.2023.107520] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 11/10/2022] [Revised: 03/13/2023] [Accepted: 03/28/2023] [Indexed: 05/08/2023]
Abstract
BACKGROUND AND OBJECTIVE The success of data-driven deep learning for histopathology images often depends on high-quality training sets and fine-grained annotations. However, as tumors are heterogeneous and annotations are expensive, unsupervised learning approaches are desirable to obtain full automation. METHODS In this paper, an Interaction Information Clustering (IIC) method is proposed to extract locally homogeneous features in mutually exclusive clusters. Trained in an unsupervised paradigm, the framework learns invariant information from multiple spatially adjacent regions for improved classification. Additionally, an adaptive Conditional Random Field (CRF) model is incorporated to detect spatially adjacent image patches of high morphological homogeneity in an offset-constraint free manner. RESULTS Empirically, the proposed model achieves an observable improvement of 11.4% on the downstream patch-level classification accuracy, compared with state-of-the-art unsupervised learning approaches. CONCLUSION Furthermore, evaluated with our clinically collected histopathology whole-slide images, the proposed model shows high consistency in tissue distribution compared with well-trained supervised learning, which is of important diagnostic significance in clinical practice.
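Information-based clustering of this kind rests on the mutual information of a joint cluster-assignment matrix over pairs of adjacent patches, I = Σᵢⱼ Pᵢⱼ log(Pᵢⱼ / (Pᵢ· P·ⱼ)). The core quantity can be computed directly; this is a generic sketch, not the paper's training code:

```python
import numpy as np

def mutual_information(joint):
    """Mutual information of a joint cluster-assignment matrix P:
    I = sum_ij P_ij * log(P_ij / (P_i. * P_.j)). Maximizing this over
    adjacent-patch pairs rewards consistent, balanced clusterings."""
    p = joint / joint.sum()                 # normalize to a distribution
    pi = p.sum(axis=1, keepdims=True)       # marginal over rows
    pj = p.sum(axis=0, keepdims=True)       # marginal over columns
    mask = p > 0                            # avoid log(0) on empty cells
    return float((p[mask] * np.log(p[mask] / (pi @ pj)[mask])).sum())

# Perfectly consistent assignments (diagonal joint) carry log(k) nats;
# independent assignments carry none.
identity_joint = np.eye(3) / 3
independent_joint = np.full((3, 3), 1.0 / 9)
mi_consistent = mutual_information(identity_joint)
mi_independent = mutual_information(independent_joint)
```

A diagonal joint matrix (adjacent patches always land in the same cluster, all clusters used equally) maximizes the objective; an independent joint matrix scores zero.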
Affiliation(s)
- Jing Ke
  - School of Electronic Information and Electrical Engineering, Shanghai 200240, China
- Yiqing Shen
  - Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
- Yizhou Lu
  - Shanghai Advanced Research Institute, Chinese Academy of Sciences, Shanghai 201210, China
- Yi Guo
  - School of Computer, Data and Mathematical Sciences, Western Sydney University, Penrith, NSW 2751, Australia
- Dinggang Shen
  - School of Biomedical Engineering, ShanghaiTech University, Shanghai 201210, China
  - Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200230, China
  - Shanghai Clinical Research and Trial Center, Shanghai 201210, China
9
Li Y, Shi X, Yang L, Pu C, Tan Q, Yang Z, Huang H. MC-GAT: multi-layer collaborative generative adversarial transformer for cholangiocarcinoma classification from hyperspectral pathological images. Biomed Opt Express 2022; 13:5794-5812. [PMID: 36733731] [PMCID: PMC9872896] [DOI: 10.1364/boe.472106] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Received: 08/02/2022] [Revised: 09/24/2022] [Accepted: 10/01/2022] [Indexed: 06/18/2023]
Abstract
Accurate histopathological analysis is the core step of early diagnosis of cholangiocarcinoma (CCA). Compared with color pathological images, hyperspectral pathological images have the advantage of providing rich band information. Existing algorithms for hyperspectral image (HSI) classification are dominated by convolutional neural networks (CNNs), which have the deficiency of distorting the spectral sequence information of HSI data. Although the vision transformer (ViT) alleviates this problem to a certain extent, the expressive power of the transformer encoder gradually decreases with an increasing number of layers, which still degrades classification performance. In addition, labeled HSI samples are limited in practical applications, which restricts the performance of these methods. To address these issues, this paper proposes a multi-layer collaborative generative adversarial transformer, termed MC-GAT, for CCA classification from hyperspectral pathological images. MC-GAT consists of two pure transformer-based neural networks: a generator and a discriminator. The generator learns the implicit probability of real samples and transforms noise sequences into band sequences, producing fake samples. These fake samples and the corresponding real samples are mixed together as input to confuse the discriminator, which increases model generalization. In the discriminator, a multi-layer collaborative transformer encoder is designed to integrate output features from different layers into collaborative features, which adaptively mines progressive relations from shallow to deep encoders and enhances the discriminating power of the discriminator. Experimental results on the Multidimensional Choledoch Datasets demonstrate that the proposed MC-GAT achieves better classification results than many state-of-the-art methods. This confirms the potential of the proposed method for aiding pathologists in CCA histopathological analysis from hyperspectral imagery.
Affiliation(s)
- Yuan Li
  - Key Laboratory of Optoelectronic Technology and Systems of the Education Ministry of China, Chongqing University, Chongqing 400044, China
- Xu Shi
  - Key Laboratory of Optoelectronic Technology and Systems of the Education Ministry of China, Chongqing University, Chongqing 400044, China
- Liping Yang
  - Key Laboratory of Optoelectronic Technology and Systems of the Education Ministry of China, Chongqing University, Chongqing 400044, China
- Chunyu Pu
  - Key Laboratory of Optoelectronic Technology and Systems of the Education Ministry of China, Chongqing University, Chongqing 400044, China
- Qijuan Tan
  - Department of Radiology, Chongqing University Cancer Hospital & Chongqing Cancer Institute & Chongqing Cancer Hospital, Chongqing 400030, China
- Zhengchun Yang
  - Department of Ultrasound, Chongqing Health Center for Women and Children, Chongqing 401147, China
  - Department of Ultrasound, Women and Children's Hospital of Chongqing Medical University, Chongqing 401147, China
- Hong Huang
  - Key Laboratory of Optoelectronic Technology and Systems of the Education Ministry of China, Chongqing University, Chongqing 400044, China
10
Yu B, Chen H, Zhang Y, Cong L, Pang S, Zhou H, Wang Z, Cong X. Data and knowledge co-driving for cancer subtype classification on multi-scale histopathological slides. Knowl Based Syst 2022. [DOI: 10.1016/j.knosys.2022.110168] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 12/03/2022]
11
Khojaste-Sarakhsi M, Haghighi SS, Ghomi SF, Marchiori E. Deep learning for Alzheimer's disease diagnosis: A survey. Artif Intell Med 2022; 130:102332. [DOI: 10.1016/j.artmed.2022.102332] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Received: 11/25/2020] [Revised: 04/29/2022] [Accepted: 05/30/2022] [Indexed: 11/28/2022]
12
Infection of lung megakaryocytes and platelets by SARS-CoV-2 anticipate fatal COVID-19. Cell Mol Life Sci 2022; 79:365. [PMID: 35708858] [PMCID: PMC9201269] [DOI: 10.1007/s00018-022-04318-x] [Citation(s) in RCA: 22] [Impact Index Per Article: 11.0] [Received: 01/15/2022] [Revised: 04/01/2022] [Accepted: 04/19/2022] [Indexed: 12/11/2022]
Abstract
SARS-CoV-2, although not a circulatory virus, spreads from the respiratory tract, resulting in the multiorgan failure and thrombotic complications that are hallmarks of fatal COVID-19. A convergent contributor could be platelets, which beyond their hemostatic functions can carry infectious viruses. Here, we profiled 52 patients with severe COVID-19 and demonstrated that the circulating platelets of 19 out of 20 non-survivors contained SARS-CoV-2, in robust correlation with fatal outcome. Platelets containing SARS-CoV-2 might originate from bone marrow and lung megakaryocytes (MKs), the platelet precursors, which were found to be infected by SARS-CoV-2 in COVID-19 autopsies. Accordingly, MKs undergoing shortened differentiation and expressing antiviral IFITM1 and IFITM3 RNA as a sign of viral sensing were enriched in the circulation of patients with fatal COVID-19. Infected MKs reach the lung concomitant with a specific MK-related cytokine storm rich in VEGF, PDGF and inflammatory molecules, anticipating fatal outcome. Lung macrophages capture SARS-CoV-2-containing platelets in vivo. The virus contained in platelets is infectious, as capture of platelets carrying SARS-CoV-2 propagates infection to macrophages in vitro, in a process blocked by an anti-GPIIbIIIa drug. Altogether, platelets containing infectious SARS-CoV-2 alter COVID-19 pathogenesis and provide a powerful fatality marker. Clinical targeting of platelets might simultaneously prevent viral spread, thrombus formation and exacerbated inflammation, and increase survival in COVID-19.
13
Liang S, Lu H, Zang M, Wang X, Jiao Y, Zhao T, Xu EY, Xu J. Deep SED-Net with interactive learning for multiple testicular cell types segmentation and cell composition analysis in mouse seminiferous tubules. Cytometry A 2022; 101:658-674. [PMID: 35388957] [DOI: 10.1002/cyto.a.24556] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 11/15/2021] [Revised: 03/05/2022] [Accepted: 04/01/2022] [Indexed: 11/06/2022]
Abstract
The development of mouse spermatozoa is a continuous process from spermatogonia through spermatocytes and spermatids to mature sperm. These developing germ cells, together with the supporting Sertoli cells, are all enclosed inside the seminiferous tubules of the testis, and their identification is key to testis histology and pathology analysis. Automated segmentation of all these cells is a challenging task because of their dynamic changes across different stages, and accurate segmentation of testicular cells is critical for developing computerized spermatogenesis staging. In this paper, we present a novel segmentation model, SED-Net, which incorporates a Squeeze-and-Excitation (SE) module and a Dense unit. The SE module optimizes and recalibrates features from different channels, whereas the Dense unit uses fewer parameters to enhance feature reuse. A human-in-the-loop strategy, named deep interactive learning, is developed to achieve better segmentation performance while reducing the workload and time cost of manual annotation. Across a cohort of 274 seminiferous tubules from Stages VI to VIII, SED-Net achieved a pixel accuracy of 0.930, a mean pixel accuracy of 0.866, a mean intersection over union of 0.710, and a frequency-weighted intersection over union of 0.878 for the four types of testicular cell segmentation. In cell composition analysis for tubules from Stages VI to VIII, there was no significant difference between manually annotated tubules and SED-Net segmentation results. In addition, we performed cell composition analysis on 2346 segmented seminiferous tubule images from 12 segmented testicular section results. The results quantified the various testicular cell types across all 12 stages, reflecting the cell variation tendency during the development of mouse spermatozoa.
The method enables us not only to analyze cell morphology and staging during the development of mouse spermatozoa, but could also potentially be applied to the study of reproductive diseases such as infertility.
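The squeeze-and-excitation mechanism named in this abstract can be illustrated in a few lines: global-average-pool each channel ("squeeze"), pass the channel descriptors through two small fully connected layers ("excite"), and rescale each channel by the resulting gate. The sketch below is a minimal pure-Python illustration of the general SE idea, not the SED-Net implementation; the function name and weight-matrix arguments are invented for the example.

```python
import math

def squeeze_excite(feature_maps, w1, w2):
    """Reweight channels of a feature-map stack, SE-style (illustrative).

    feature_maps: list of C channels, each a 2-D list (H x W) of floats.
    w1: C x (C/r) weights of the reduction FC layer (r = reduction ratio).
    w2: (C/r) x C weights of the expansion FC layer.
    """
    # Squeeze: global average pooling -> one descriptor per channel.
    z = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
         for ch in feature_maps]
    # Excite: FC -> ReLU -> FC -> sigmoid produces one gate per channel.
    hidden = [max(0.0, sum(z[i] * w1[i][j] for i in range(len(z))))
              for j in range(len(w1[0]))]
    gates = [1.0 / (1.0 + math.exp(-sum(hidden[j] * w2[j][k]
                                        for j in range(len(hidden)))))
             for k in range(len(w2[0]))]
    # Scale: multiply every spatial position of channel k by its gate.
    return [[[v * gates[k] for v in row] for row in feature_maps[k]]
            for k in range(len(feature_maps))]
```

With zero expansion weights every gate is sigmoid(0) = 0.5, so the block halves each channel; trained weights instead learn which channels to emphasize.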
Collapse
Affiliation(s)
- Shi Liang
- Institute for AI in Medicine, School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing, China
| | - Haoda Lu
- Institute for AI in Medicine, School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing, China
| | - Min Zang
- State Key Laboratory of Reproductive Medicine, Nanjing Medical University, Nanjing, China
| | - Xiangxue Wang
- Institute for AI in Medicine, School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing, China
| | - Yiping Jiao
- Institute for AI in Medicine, School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing, China
| | - Tingting Zhao
- State Key Laboratory of Reproductive Medicine, Nanjing Medical University, Nanjing, China
| | - Eugene Yujun Xu
- State Key Laboratory of Reproductive Medicine, Nanjing Medical University, Nanjing, China; Department of Neurology, Center for Reproductive Sciences, Northwestern University Feinberg School of Medicine, IL, USA
| | - Jun Xu
- Institute for AI in Medicine, School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing, China
| |
Collapse
|
14
|
Ciga O, Xu T, Martel AL. Self supervised contrastive learning for digital histopathology. MACHINE LEARNING WITH APPLICATIONS 2022. [DOI: 10.1016/j.mlwa.2021.100198] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2022] Open
|
15
|
Synthesis of Microscopic Cell Images Obtained from Bone Marrow Aspirate Smears through Generative Adversarial Networks. BIOLOGY 2022; 11:biology11020276. [PMID: 35205142 PMCID: PMC8869175 DOI: 10.3390/biology11020276] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/14/2022] [Revised: 01/26/2022] [Accepted: 02/01/2022] [Indexed: 02/07/2023]
Abstract
Simple Summary This paper proposes a hybrid generative adversarial network model, WGAN-GP-AC, to generate synthetic microscopic cell images. We generate synthetic data for the cell types containing fewer samples to obtain a balanced dataset. A balanced dataset helps enhance the classification accuracy of each cell type and supports the easy, quick diagnosis that is critical for leukemia patients. In this work, we combine images from three datasets to form a single concrete dataset with variations of multiple microscopic cell images. We provide experimental results that demonstrate the correlation between the original and our synthetically generated data. We also deliver classification results to show that the generated synthetic data can be used for real-life experiments and the advancement of the medical domain. Abstract Every year approximately 1.24 million people are diagnosed with blood cancer. While the rate increases each year, the availability of data for each kind of blood cancer remains scarce. It is essential to produce enough data for each blood cell type obtained from bone marrow aspirate smears to diagnose rare types of cancer. Generating such data would support the quick, accurate diagnosis that is critical in cancer care. Generative adversarial networks (GANs) are the latest emerging framework for generating synthetic images and time-series data. This paper takes microscopic cell images, preprocesses them, and uses a hybrid GAN architecture to generate synthetic images of the cell types containing fewer data. We prepared a single dataset with expert intervention by combining images from three different sources. The final dataset consists of 12 cell types and has 33,177 microscopic cell images. We use the discriminator architecture of the auxiliary classifier GAN (AC-GAN) and combine it with the Wasserstein GAN with gradient penalty (WGAN-GP) model. We name our model WGAN-GP-AC.
The discriminator in our proposed model works to identify real and generated images and to classify every image with a cell type. We provide experimental results demonstrating that our proposed model performs better than existing individual and hybrid GAN models in generating microscopic cell images. We use the generated synthetic data with classification models, and the results show that the classification rate increases significantly. Classification models achieved 0.95 precision and 0.96 recall on synthetic data, higher than on the original, augmented, or combined datasets.
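The WGAN-GP side of the hybrid model replaces the standard GAN loss with a Wasserstein critic loss plus a gradient penalty on samples interpolated between real and generated data. The following toy sketch evaluates that objective for scalar samples, using finite differences in place of autograd; it is a numeric illustration of the loss formula, not the paper's training code.

```python
import random

def wgan_gp_critic_loss(critic, reals, fakes, lam=10.0, eps=1e-4):
    """Toy WGAN-GP critic loss for scalar samples (illustrative).

    Loss = E[D(fake)] - E[D(real)] + lam * E[(|grad D(x_hat)| - 1)^2],
    where x_hat interpolates between paired real and fake samples.
    The gradient is approximated by central finite differences here.
    """
    d_real = sum(critic(x) for x in reals) / len(reals)
    d_fake = sum(critic(x) for x in fakes) / len(fakes)
    penalty = 0.0
    for r, f in zip(reals, fakes):
        t = random.random()                      # random interpolation point
        x_hat = t * r + (1 - t) * f
        grad = (critic(x_hat + eps) - critic(x_hat - eps)) / (2 * eps)
        penalty += (abs(grad) - 1.0) ** 2
    return d_fake - d_real + lam * penalty / len(reals)
```

For a 1-Lipschitz critic such as the identity, the penalty term vanishes and the loss reduces to the Wasserstein gap between the fake and real score means.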
Collapse
|
16
|
Lin E, Lin CH, Lane HY. De Novo Peptide and Protein Design Using Generative Adversarial Networks: An Update. J Chem Inf Model 2022; 62:761-774. [DOI: 10.1021/acs.jcim.1c01361] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/11/2022]
Affiliation(s)
- Eugene Lin
- Department of Biostatistics, University of Washington, Seattle, Washington 98195, United States
- Department of Electrical & Computer Engineering, University of Washington, Seattle, Washington 98195, United States
- Graduate Institute of Biomedical Sciences, China Medical University, Taichung 40402, Taiwan
| | - Chieh-Hsin Lin
- Graduate Institute of Biomedical Sciences, China Medical University, Taichung 40402, Taiwan
- Department of Psychiatry, Kaohsiung Chang Gung Memorial Hospital, Chang Gung University College of Medicine, Kaohsiung 83301, Taiwan
- School of Medicine, Chang Gung University, Taoyuan 33302, Taiwan
| | - Hsien-Yuan Lane
- Graduate Institute of Biomedical Sciences, China Medical University, Taichung 40402, Taiwan
- Department of Psychiatry, China Medical University Hospital, Taichung 40447, Taiwan
- Brain Disease Research Center, China Medical University Hospital, Taichung 40447, Taiwan
- Department of Psychology, College of Medical and Health Sciences, Asia University, Taichung 41354, Taiwan
| |
Collapse
|
17
|
Jose L, Liu S, Russo C, Nadort A, Ieva AD. Generative Adversarial Networks in Digital Pathology and Histopathological Image Processing: A Review. J Pathol Inform 2021; 12:43. [PMID: 34881098 PMCID: PMC8609288 DOI: 10.4103/jpi.jpi_103_20] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2020] [Revised: 03/03/2021] [Accepted: 04/23/2021] [Indexed: 12/13/2022] Open
Abstract
Digital pathology is gaining prominence among researchers with developments in advanced imaging modalities and new technologies. Generative adversarial networks (GANs) are a recent development in the field of artificial intelligence and, since their inception, have attracted considerable interest in digital pathology. GANs and their extensions have opened several ways to tackle many challenging histopathological image processing problems, such as color normalization, virtual staining, ink removal, image enhancement, automatic feature extraction, segmentation of nuclei, domain adaptation, and data augmentation. This paper reviews recent advances in histopathological image processing using GANs, with special emphasis on future perspectives related to the use of such techniques. The papers included in this review were retrieved by conducting a keyword search on Google Scholar and manually selecting papers on H&E-stained digital pathology images for histopathological image processing. In the first part, we describe recent literature that uses GANs in various image preprocessing tasks such as stain normalization, virtual staining, image enhancement, ink removal, and data augmentation. In the second part, we describe literature that uses GANs for image analysis, such as nuclei detection, segmentation, and feature extraction. This review illustrates the role of GANs in digital pathology, with the objective of triggering new research on the application of generative models in digital pathology informatics.
Collapse
Affiliation(s)
- Laya Jose
- Computational NeuroSurgery (CNS) Lab, Macquarie Medical School, Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, Australia; ARC Centre of Excellence for Nanoscale Biophotonics, Macquarie University, Sydney, Australia
| | - Sidong Liu
- Computational NeuroSurgery (CNS) Lab, Macquarie Medical School, Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, Australia; Australian Institute of Health Innovation, Centre for Health Informatics, Macquarie University, Sydney, Australia
| | - Carlo Russo
- Computational NeuroSurgery (CNS) Lab, Macquarie Medical School, Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, Australia
| | - Annemarie Nadort
- ARC Centre of Excellence for Nanoscale Biophotonics, Macquarie University, Sydney, Australia; Department of Physics and Astronomy, Faculty of Science and Engineering, Macquarie University, Sydney, Australia
| | - Antonio Di Ieva
- Computational NeuroSurgery (CNS) Lab, Macquarie Medical School, Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, Australia
| |
Collapse
|
18
|
Wang CW, Huang SC, Lee YC, Shen YJ, Meng SI, Gaol JL. Deep learning for bone marrow cell detection and classification on whole-slide images. Med Image Anal 2021; 75:102270. [PMID: 34710655 DOI: 10.1016/j.media.2021.102270] [Citation(s) in RCA: 35] [Impact Index Per Article: 11.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/21/2020] [Revised: 10/06/2021] [Accepted: 10/13/2021] [Indexed: 12/19/2022]
Abstract
Bone marrow (BM) examination is an essential step in both diagnosing and managing numerous hematologic disorders. BM nucleated differential count (NDC) analysis, as part of BM examination, holds the most fundamental and crucial information. However, automated BM NDC analysis on whole-slide images (WSIs) faces many challenges, including the large dimensions of the data to process and complicated cell types with subtle differences. To the authors' best knowledge, this is the first study on fully automatic BM NDC using WSIs with 40x objective magnification, which can replace traditional manual counting relying on light microscopy via an oil-immersion 100x objective lens with a total 1000x magnification. In this study, we develop an efficient and fully automatic hierarchical deep learning framework for BM NDC WSI analysis in seconds. The proposed hierarchical framework consists of (1) a deep learning model for rapid localization of BM particles and cellular trails, generating regions of interest (ROIs) for further analysis, (2) a patch-based deep learning model for cell identification of 16 cell types, including megakaryocytes, mitotic cells, and four stages of erythroblasts, which have not been demonstrated in previous studies, and (3) a fast stitching model for integrating patch-based results and producing the final outputs. In evaluation, the proposed method is first tested on a dataset with a total of 12,426 annotated cells using cross-validation, achieving high recall and accuracy of 0.905 ± 0.078 and 0.989 ± 0.006, respectively, and taking only 44 seconds to perform BM NDC analysis for a WSI. To further examine the generalizability of our model, we conduct an evaluation on a second independent dataset with a total of 3005 cells; the results show that the proposed method also obtains high recall and accuracy of 0.842 and 0.988, respectively.
In comparison with existing small-image-based benchmark methods, the proposed method demonstrates superior performance in recall, accuracy, and computational time.
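The stitching step in stage (3) of the pipeline amounts to translating patch-local detections back into whole-slide coordinates before merging. A minimal sketch of that coordinate mapping, assuming non-overlapping grid patches (the abstract does not specify the actual stitching model, so the data layout here is invented for illustration):

```python
def stitch_patch_detections(patch_results, patch_size):
    """Map per-patch cell detections back to whole-slide coordinates.

    patch_results: dict {(px, py): [(x, y, cell_type), ...]} where (px, py)
    is the patch's grid position and (x, y) are patch-local pixel coords.
    Returns a flat list of (X, Y, cell_type) in slide coordinates.
    """
    stitched = []
    for (px, py), cells in patch_results.items():
        for x, y, cell_type in cells:
            # Offset local coordinates by the patch's slide-level origin.
            stitched.append((px * patch_size + x, py * patch_size + y, cell_type))
    return stitched
```

A production system would additionally de-duplicate detections in overlapping patch borders; this sketch keeps only the coordinate translation.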
Collapse
Affiliation(s)
- Ching-Wei Wang
- Graduate Institute of Biomedical Engineering, National Taiwan University of Science and Technology, Taipei, 106, Taiwan; Graduate Institute of Applied Science and Technology, National Taiwan University of Science and Technology, Taipei, 106, Taiwan.
| | - Sheng-Chuan Huang
- Department of Laboratory Medicine, National Taiwan University Hospital, Taipei, 100, Taiwan; Department of Hematology and Oncology, Hualien Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, Hualien, Taiwan; Department of Clinical Pathology, Hualien Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, Hualien, Taiwan
| | - Yu-Ching Lee
- Graduate Institute of Applied Science and Technology, National Taiwan University of Science and Technology, Taipei, 106, Taiwan
| | - Yu-Jie Shen
- Graduate Institute of Biomedical Engineering, National Taiwan University of Science and Technology, Taipei, 106, Taiwan
| | - Shwu-Ing Meng
- Department of Laboratory Medicine, National Taiwan University Hospital, Taipei, 100, Taiwan
| | - Jeff L Gaol
- Graduate Institute of Biomedical Engineering, National Taiwan University of Science and Technology, Taipei, 106, Taiwan
| |
Collapse
|
19
|
Applying Self-Supervised Learning to Medicine: Review of the State of the Art and Medical Implementations. INFORMATICS 2021. [DOI: 10.3390/informatics8030059] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
Machine learning has become an increasingly ubiquitous technology, as big data continues to inform and influence everyday life and decision-making. Currently, in medicine and healthcare, as well as in most other industries, the two most prevalent machine learning paradigms are supervised learning and transfer learning. Both practices rely on large-scale, manually annotated datasets to train increasingly complex models. However, the requirement that data be manually labeled leaves an excess of unused, unlabeled data in both public and private data repositories. Self-supervised learning (SSL) is a growing area of machine learning that can take advantage of unlabeled data. Contrary to other machine learning paradigms, SSL algorithms create artificial supervisory signals from unlabeled data and pretrain algorithms on these signals. The aim of this review is two-fold: first, we provide a formal definition of SSL, divide SSL algorithms into four unique subsets, and review the state of the art published in each of those subsets between 2014 and 2020. Second, this work surveys recent SSL algorithms published in healthcare, in order to provide medical experts with a clearer picture of how they can integrate SSL into their research, with the objective of leveraging unlabeled data.
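The "artificial supervisory signals" this review describes can be as simple as predicting a synthetic transformation applied to the input. A minimal sketch of one classic pretext task, rotation prediction, where the label comes for free from the transformation itself (function names are illustrative, not from the review):

```python
def rotate90(image, k):
    """Rotate a 2-D list image by k * 90 degrees counter-clockwise."""
    for _ in range(k % 4):
        image = [list(row) for row in zip(*image)][::-1]
    return image

def rotation_pretext_pairs(images):
    """Create (rotated_image, rotation_label) pairs from unlabeled images.

    The rotation index (0-3) is a free supervisory signal: a network
    pretrained to predict it learns useful features with no manual labels.
    """
    pairs = []
    for img in images:
        for k in range(4):
            pairs.append((rotate90(img, k), k))
    return pairs
```

The resulting pairs can be fed to any standard classifier for pretraining; the learned encoder is then fine-tuned on the (much smaller) labeled downstream task.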
Collapse
|
20
|
Li Z, Zhang J, Li B, Gu X, Luo X. COVID-19 diagnosis on CT scan images using a generative adversarial network and concatenated feature pyramid network with an attention mechanism. Med Phys 2021; 48:4334-4349. [PMID: 34117783 PMCID: PMC8420535 DOI: 10.1002/mp.15044] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2021] [Revised: 05/14/2021] [Accepted: 06/01/2021] [Indexed: 01/04/2023] Open
Abstract
OBJECTIVE Coronavirus disease 2019 (COVID-19) has caused hundreds of thousands of infections and deaths. Efficient diagnostic methods could help curb its global spread. The purpose of this study was to develop and evaluate a method for accurately diagnosing COVID-19 based on computed tomography (CT) scans in real time. METHODS We propose an architecture named "concatenated feature pyramid network" ("Concat-FPN") with an attention mechanism, which concatenates feature maps at multiple scales. The proposed architecture is then used to form two networks, which we call COVID-CT-GAN and COVID-CT-DenseNet, the former for data augmentation and the latter for data classification. RESULTS The proposed method is evaluated on three COVID-19 CT datasets of different orders of magnitude. Compared with the method without GANs for data augmentation or with the original auxiliary classifier generative adversarial network, COVID-CT-GAN increases accuracy by 2% to 3%, recall by 2% to 4%, precision by 1% to 3%, F1-score by 1% to 3%, and area under the curve by 1% to 4%. Compared with the original DenseNet-201, COVID-CT-DenseNet increases accuracy by 1% to 3%, recall by 4% to 9%, precision by 1%, F1-score by 1% to 3%, and area under the curve by 2%. CONCLUSION The experimental results show that our method improves the efficiency of diagnosing COVID-19 on CT images and helps overcome the problem of limited training data when using deep learning methods. SIGNIFICANCE Our method can help clinicians build deep learning models from their private datasets to achieve automatic diagnosis of COVID-19 with high precision.
Collapse
Affiliation(s)
- Zonggui Li
- School of Information Science and Engineering, Yunnan University, Kunming, China
| | - Junhua Zhang
- School of Information Science and Engineering, Yunnan University, Kunming, China
| | - Bo Li
- School of Information Science and Engineering, Yunnan University, Kunming, China
| | - Xiaoying Gu
- School of Information Science and Engineering, Yunnan University, Kunming, China
| | - Xudong Luo
- School of Information Science and Engineering, Yunnan University, Kunming, China
| |
Collapse
|
21
|
Yu H, Zhang X, Song L, Jiang L, Huang X, Chen W, Zhang C, Li J, Yang J, Hu Z, Duan Q, Chen W, He X, Fan J, Jiang W, Zhang L, Qiu C, Gu M, Sun W, Zhang Y, Peng G, Shen W, Fu G. Large-scale gastric cancer screening and localization using multi-task deep neural network. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.03.006] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
|
22
|
Gong M, Chen S, Chen Q, Zeng Y, Zhang Y. Generative Adversarial Networks in Medical Image Processing. Curr Pharm Des 2021; 27:1856-1868. [PMID: 33238866 DOI: 10.2174/1381612826666201125110710] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2020] [Revised: 10/14/2020] [Accepted: 10/21/2020] [Indexed: 11/22/2022]
Abstract
BACKGROUND The emergence of generative adversarial networks (GANs) has provided a new technology and framework for the application of medical images. Specifically, a GAN requires little to no labeled data, since high-quality data can be generated through the competition between the generator and discriminator networks. Therefore, GANs are rapidly proving to be a state-of-the-art foundation, achieving enhanced performance in various medical applications. METHODS In this article, we introduce the principles of GANs and their various variants: the deep convolutional GAN, conditional GAN, Wasserstein GAN, Info-GAN, boundary equilibrium GAN, and cycle-GAN. RESULTS These GAN variants have all found success in medical imaging tasks, including medical image enhancement, segmentation, classification, reconstruction, and synthesis. Furthermore, we summarize the data processing methods and evaluation indicators. Finally, we note the limitations of existing methods and the challenges that still need to be addressed in this field. CONCLUSION Although GANs are at an early stage of development in medical image processing, they hold great promise for the future.
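The generator-discriminator competition this review introduces is the original GAN minimax game, V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))]: the discriminator ascends V while the generator descends it. The small numeric sketch below evaluates that objective for given discriminator outputs; it is a didactic illustration of the formula, not any reviewed system's code.

```python
import math

def gan_value(d_real_scores, d_fake_scores):
    """Value of the original GAN minimax objective for given critic outputs.

    V(D, G) = E[log D(x_real)] + E[log(1 - D(G(z)))].
    Scores must lie in (0, 1), e.g. sigmoid outputs of the discriminator.
    """
    term_real = sum(math.log(p) for p in d_real_scores) / len(d_real_scores)
    term_fake = sum(math.log(1 - p) for p in d_fake_scores) / len(d_fake_scores)
    return term_real + term_fake
```

At equilibrium the discriminator outputs 0.5 everywhere and V collapses to log(1/4); a discriminator that separates real from fake raises V, which is exactly the pressure the generator trains against.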
Collapse
Affiliation(s)
- Meiqin Gong
- West China Second University Hospital, Sichuan University, Chengdu 610041, China
| | - Siyu Chen
- School of Computer Science, Chengdu University of Information Technology, Chengdu 610225, China
| | - Qingyuan Chen
- School of Computer Science, Chengdu University of Information Technology, Chengdu 610225, China
| | - Yuanqi Zeng
- School of Computer Science, Chengdu University of Information Technology, Chengdu 610225, China
| | - Yongqing Zhang
- School of Computer Science, Chengdu University of Information Technology, Chengdu 610225, China
| |
Collapse
|
23
|
Zhu X, Li X, Ong K, Zhang W, Li W, Li L, Young D, Su Y, Shang B, Peng L, Xiong W, Liu Y, Liao W, Xu J, Wang F, Liao Q, Li S, Liao M, Li Y, Rao L, Lin J, Shi J, You Z, Zhong W, Liang X, Han H, Zhang Y, Tang N, Hu A, Gao H, Cheng Z, Liang L, Yu W, Ding Y. Hybrid AI-assistive diagnostic model permits rapid TBS classification of cervical liquid-based thin-layer cell smears. Nat Commun 2021; 12:3541. [PMID: 34112790 PMCID: PMC8192526 DOI: 10.1038/s41467-021-23913-3] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2021] [Accepted: 05/24/2021] [Indexed: 02/05/2023] Open
Abstract
Technical advancements have significantly improved the early diagnosis of cervical cancer, but accurate diagnosis remains difficult due to various factors. We develop an artificial intelligence assistive diagnostic solution, AIATBS, to improve cervical liquid-based thin-layer cell smear diagnosis according to clinical TBS criteria. We train AIATBS with >81,000 retrospective samples. It integrates YOLOv3 for target detection, Xception and patch-based models to boost target classification, and U-net for nucleus segmentation. We integrate XGBoost and a logical decision tree with these models to optimize the parameters given by the learning process, and we develop a complete cervical liquid-based cytology smear TBS diagnostic system which also includes a quality control solution. We validate the optimized system with >34,000 multicenter prospective samples, achieving better sensitivity than senior cytologists while retaining high specificity at a speed of <180 s/slide. Our system is adaptive to sample preparation using different standards, staining protocols and scanners.
Collapse
Affiliation(s)
- Xiaohui Zhu
- Department of Pathology, Nanfang Hospital and Basic Medical College, Southern Medical University, Guangzhou, Guangdong Province, PR China
- Guangdong Province Key Laboratory of Molecular Tumor Pathology, Guangzhou, Guangdong Province, PR China
| | - Xiaoming Li
- Department of Pathology, Shenzhen Bao'an People's Hospital (group), Shenzhen, Guangdong Province, PR China
| | - Kokhaur Ong
- Institute of Molecular and Cell Biology, A*STAR, Singapore, Singapore
- Bioinformatics Institute, A*STAR, Singapore, Singapore
| | - Wenli Zhang
- Department of Pathology, Nanfang Hospital and Basic Medical College, Southern Medical University, Guangzhou, Guangdong Province, PR China
- Guangdong Province Key Laboratory of Molecular Tumor Pathology, Guangzhou, Guangdong Province, PR China
| | - Wencai Li
- The First Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan Province, PR China
| | - Longjie Li
- Institute of Molecular and Cell Biology, A*STAR, Singapore, Singapore
| | - David Young
- Institute of Molecular and Cell Biology, A*STAR, Singapore, Singapore
| | - Yongjian Su
- Guangzhou F.Q.PATHOTECH Co., Ltd, Guangzhou, Guangdong Province, PR China
| | - Bin Shang
- Guangzhou F.Q.PATHOTECH Co., Ltd, Guangzhou, Guangdong Province, PR China
| | - Linggan Peng
- Guangzhou F.Q.PATHOTECH Co., Ltd, Guangzhou, Guangdong Province, PR China
| | - Wei Xiong
- Guangzhou Kaipu Biotechnology Co., Ltd, Guangzhou, Guangdong Province, PR China
| | - Yunke Liu
- Laboratory Department, Guangzhou Tianhe District Maternal and Child Health Care Hospital, Guangzhou, Guangdong Province, PR China
| | - Wenting Liao
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, Guangdong Province, PR China
| | - Jingjing Xu
- The First Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan Province, PR China
| | - Feifei Wang
- Department of Pathology, Nanfang Hospital and Basic Medical College, Southern Medical University, Guangzhou, Guangdong Province, PR China
- Guangdong Province Key Laboratory of Molecular Tumor Pathology, Guangzhou, Guangdong Province, PR China
| | - Qing Liao
- Department of Pathology, Nanfang Hospital and Basic Medical College, Southern Medical University, Guangzhou, Guangdong Province, PR China
- Guangdong Province Key Laboratory of Molecular Tumor Pathology, Guangzhou, Guangdong Province, PR China
| | - Shengnan Li
- Guangzhou F.Q.PATHOTECH Co., Ltd, Guangzhou, Guangdong Province, PR China
| | - Minmin Liao
- Department of Pathology, Nanfang Hospital and Basic Medical College, Southern Medical University, Guangzhou, Guangdong Province, PR China
- Guangdong Province Key Laboratory of Molecular Tumor Pathology, Guangzhou, Guangdong Province, PR China
| | - Yu Li
- Department of Pathology, Nanfang Hospital and Basic Medical College, Southern Medical University, Guangzhou, Guangdong Province, PR China
- Guangdong Province Key Laboratory of Molecular Tumor Pathology, Guangzhou, Guangdong Province, PR China
| | - Linshang Rao
- Guangzhou F.Q.PATHOTECH Co., Ltd, Guangzhou, Guangdong Province, PR China
| | - Jinquan Lin
- Guangzhou F.Q.PATHOTECH Co., Ltd, Guangzhou, Guangdong Province, PR China
| | - Jianyuan Shi
- Guangzhou F.Q.PATHOTECH Co., Ltd, Guangzhou, Guangdong Province, PR China
| | - Zejun You
- Guangzhou F.Q.PATHOTECH Co., Ltd, Guangzhou, Guangdong Province, PR China
| | - Wenlong Zhong
- Guangzhou Huayin medical inspection center Co., Ltd, Guangzhou, Guangdong Province, PR China
| | - Xinrong Liang
- Guangzhou Huayin medical inspection center Co., Ltd, Guangzhou, Guangdong Province, PR China
| | - Hao Han
- Institute of Molecular and Cell Biology, A*STAR, Singapore, Singapore
| | - Yan Zhang
- Department of Pathology, Nanfang Hospital and Basic Medical College, Southern Medical University, Guangzhou, Guangdong Province, PR China
- Department of Pathology, Shenzhen Longhua District Maternity & Child Healthcare Hospital, Shenzhen, PR China
| | - Na Tang
- Department of Pathology, Shenzhen First People's Hospital, Shenzhen, Guangdong Province, PR China
| | - Aixia Hu
- Department of Pathology, Henan Provincial People's Hospital, Zhengzhou, Henan Province, PR China
| | - Hongyi Gao
- Department of Pathology, Guangdong Provincial Women's and Children's Dispensary, Shenzhen, Guangdong Province, PR China
| | - Zhiqiang Cheng
- Department of Pathology, Shenzhen First People's Hospital, Shenzhen, Guangdong Province, PR China.
| | - Li Liang
- Department of Pathology, Nanfang Hospital and Basic Medical College, Southern Medical University, Guangzhou, Guangdong Province, PR China.
- Guangdong Province Key Laboratory of Molecular Tumor Pathology, Guangzhou, Guangdong Province, PR China.
| | - Weimiao Yu
- Institute of Molecular and Cell Biology, A*STAR, Singapore, Singapore.
- Bioinformatics Institute, A*STAR, Singapore, Singapore.
| | - Yanqing Ding
- Department of Pathology, Nanfang Hospital and Basic Medical College, Southern Medical University, Guangzhou, Guangdong Province, PR China.
- Guangdong Province Key Laboratory of Molecular Tumor Pathology, Guangzhou, Guangdong Province, PR China.
| |
Collapse
|
24
|
Saha M, Guo X, Sharma A. TilGAN: GAN for Facilitating Tumor-Infiltrating Lymphocyte Pathology Image Synthesis With Improved Image Classification. IEEE ACCESS : PRACTICAL INNOVATIONS, OPEN SOLUTIONS 2021; 9:79829-79840. [PMID: 34178560 PMCID: PMC8224465 DOI: 10.1109/access.2021.3084597] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Tumor-infiltrating lymphocytes (TILs) act as immune cells against cancer tissues. Manual assessment of TILs is usually erroneous, tedious, costly and subject to inter- and intraobserver variability. Machine learning approaches can solve these issues, but they require a large amount of labeled data for model training, which is expensive and not readily available. In this study, we present an efficient generative adversarial network, TilGAN, to generate high-quality synthetic pathology images, followed by classification of TIL and non-TIL regions. Our proposed architecture is constructed with a generator network and a discriminator network. The novelty lies in the TilGAN architecture, loss functions, and evaluation techniques. Our TilGAN-generated images achieved a higher Inception score than the real images (2.90 vs. 2.32, respectively), a lower kernel Inception distance (1.44) and a lower Fréchet Inception distance (0.312). They also passed a visual Turing test performed by experienced pathologists and clinicians. We further extended our evaluation and used almost one million synthetic images, generated by TilGAN, to train a classification model. Our proposed classification model achieved 97.83% accuracy, a 97.37% F1-score, and 97% area under the curve. These extensive experiments and superior outcomes show the efficiency and effectiveness of the proposed TilGAN architecture, which can also be applied to synthesizing other types of images.
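The Inception score quoted above is computed from the class-probability vectors an Inception classifier assigns to each generated image: IS = exp(E_x[KL(p(y|x) || p(y))]), where p(y) is the marginal over the batch. A minimal sketch of the standard definition (not TilGAN-specific code; a real evaluation would obtain the probabilities from a pretrained Inception network):

```python
import math

def inception_score(pred_probs):
    """Inception score from per-image class probability vectors.

    IS = exp( E_x [ KL( p(y|x) || p(y) ) ] ). Higher scores mean the
    classifier is confident on each image (sharp p(y|x)) while the batch
    covers many classes (flat marginal p(y)).
    """
    n = len(pred_probs)
    k = len(pred_probs[0])
    marginal = [sum(p[c] for p in pred_probs) / n for c in range(k)]
    kl_mean = sum(
        sum(p[c] * math.log(p[c] / marginal[c]) for c in range(k) if p[c] > 0)
        for p in pred_probs
    ) / n
    return math.exp(kl_mean)
```

Two confident predictions on two different classes give the maximum score of 2 for a two-class problem, while a batch collapsed onto one class scores 1, which is why the metric rewards both fidelity and diversity.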
Collapse
Affiliation(s)
- Monjoy Saha
- Department of Biomedical Informatics, School of Medicine, Emory University, Atlanta, GA 30322, USA
| | - Xiaoyuan Guo
- Department of Computer Science, Emory University, Atlanta, GA 30332, USA
| | - Ashish Sharma
- Department of Biomedical Informatics, School of Medicine, Emory University, Atlanta, GA 30322, USA
| |
Collapse
|
25
|
Lin H, Chen H, Wang X, Wang Q, Wang L, Heng PA. Dual-path network with synergistic grouping loss and evidence driven risk stratification for whole slide cervical image analysis. Med Image Anal 2021; 69:101955. [PMID: 33588122 DOI: 10.1016/j.media.2021.101955] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2020] [Revised: 12/28/2020] [Accepted: 01/02/2021] [Indexed: 12/26/2022]
Abstract
Cervical cancer has been one of the most lethal cancers threatening women's health. Nevertheless, its incidence can be effectively minimized with preventive clinical management strategies, including vaccines and regular screening examinations. Screening cervical smears under the microscope is a widely used routine in regular examinations, which consumes a large amount of cytologists' time and labour. Computerized cytology analysis caters to this imperative need, alleviating cytologists' workload and reducing the potential misdiagnosis rate. However, automatic analysis of cervical smears via digitalized whole slide images (WSIs) remains a challenging problem, due to the extremely large image resolution, the existence of tiny lesions, noisy datasets and an intricate clinical definition of classes with fuzzy boundaries. In this paper, we design an efficient deep convolutional neural network (CNN) with a dual-path (DP) encoder for lesion retrieval, which ensures inference efficiency and sensitivity to both tiny and large lesions. Incorporating a synergistic grouping loss (SGL), the network can be effectively trained on noisy datasets with fuzzy inter-class boundaries. Inspired by the clinical diagnostic criteria used by cytologists, a novel smear-level classifier, i.e., rule-based risk stratification (RRS), is proposed for accurate smear-level classification and risk stratification, which aligns reasonably with the intricate cytological definition of the classes. Extensive experiments on the largest dataset, including 19,303 WSIs from multiple medical centers, validate the robustness of our method. With a high sensitivity of 0.907 and specificity of 0.80, our method manifests the potential to reduce the workload of cytologists in routine practice.
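The rule-based risk stratification (RRS) idea, mapping lesion-level retrieval results to a smear-level risk class via clinically inspired rules, can be sketched as follows. The lesion grades and thresholds below are purely hypothetical placeholders for illustration; the abstract does not disclose the actual rules used in the paper.

```python
def rule_based_risk(lesion_counts):
    """Hypothetical smear-level risk stratification from lesion-level counts.

    lesion_counts: dict mapping a lesion grade (e.g. 'HSIL', 'LSIL',
    'ASC-US') to the number of retrieved lesion candidates on the smear.
    Rules fire in order of clinical severity; thresholds are invented.
    """
    if lesion_counts.get("HSIL", 0) >= 1:          # any high-grade lesion
        return "high"
    if lesion_counts.get("LSIL", 0) >= 3 or lesion_counts.get("ASC-US", 0) >= 10:
        return "medium"
    return "low"
```

The appeal of such a rule layer over an end-to-end classifier is that each decision is auditable against the cytological criteria it encodes.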
Affiliation(s)
- Huangjing Lin, Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Hao Chen, Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Xi Wang, Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Qiong Wang, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, China
- Liansheng Wang, Department of Computer Science, Xiamen University, Xiamen, China
- Pheng-Ann Heng, Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
|
26
|
Learning to segment images with classification labels. Med Image Anal 2021; 68:101912. [DOI: 10.1016/j.media.2020.101912]
|
27
|
Srinidhi CL, Ciga O, Martel AL. Deep neural network models for computational histopathology: A survey. Med Image Anal 2021; 67:101813. [PMID: 33049577] [PMCID: PMC7725956] [DOI: 10.1016/j.media.2020.101813]
Abstract
Histopathological images contain rich phenotypic information that can be used to monitor underlying mechanisms contributing to disease progression and patient survival outcomes. Recently, deep learning has become the mainstream methodological choice for analyzing and interpreting histology images. In this paper, we present a comprehensive review of state-of-the-art deep learning approaches that have been used in the context of histopathological image analysis. From the survey of over 130 papers, we review the field's progress based on the methodological aspect of different machine learning strategies such as supervised, weakly supervised, unsupervised, transfer learning and various other sub-variants of these methods. We also provide an overview of deep learning based survival models that are applicable for disease-specific prognosis tasks. Finally, we summarize several existing open datasets and highlight critical challenges and limitations with current deep learning approaches, along with possible avenues for future research.
Affiliation(s)
- Chetan L Srinidhi, Physical Sciences, Sunnybrook Research Institute, Toronto, Canada; Department of Medical Biophysics, University of Toronto, Canada
- Ozan Ciga, Department of Medical Biophysics, University of Toronto, Canada
- Anne L Martel, Physical Sciences, Sunnybrook Research Institute, Toronto, Canada; Department of Medical Biophysics, University of Toronto, Canada
|
28
|
Shi J, Wang R, Zheng Y, Jiang Z, Zhang H, Yu L. Cervical cell classification with graph convolutional network. Comput Methods Programs Biomed 2021; 198:105807. [PMID: 33130497] [DOI: 10.1016/j.cmpb.2020.105807]
Abstract
BACKGROUND AND OBJECTIVE Cervical cell classification has important clinical significance in early-stage cervical cancer screening. In contrast with conventional classification methods that depend on hand-crafted or engineered features, a Convolutional Neural Network (CNN) generally classifies cervical cells via learned deep features. However, the latent correlations among images may be ignored during CNN feature learning, which limits the representation ability of CNN features. METHODS We propose a novel cervical cell classification method based on a Graph Convolutional Network (GCN). It aims to explore the potential relationships among cervical cell images to improve classification performance. The CNN features of all cervical cell images are first clustered, so that the intrinsic relationships among images are preliminarily revealed. To further capture the underlying correlations among clusters, a graph structure is constructed. A GCN is then applied to propagate node dependencies and thus yield relation-aware feature representations. The GCN features are finally incorporated to enhance the discriminative ability of the CNN features. RESULTS Experiments on the public cervical cell image dataset SIPaKMeD, from the 2018 International Conference on Image Processing, demonstrate the feasibility and effectiveness of the proposed method. In addition, we introduce a large-scale Motic liquid-based cytology image dataset, which provides a large amount of data, some novel cell types with important clinical significance, and staining differences, and thus presents a great challenge for cervical cell classification. We evaluate the proposed method under two conditions: consistent staining and different staining. Experimental results show that our method outperforms existing state-of-the-art methods according to the quantitative metrics (i.e., accuracy, sensitivity, specificity, F-measure and confusion matrices).
CONCLUSIONS Exploring the intrinsic relationships of cervical cells contributes significant improvements to cervical cell classification. The relation-aware features generated by the GCN effectively strengthen the representational power of the CNN features. The proposed method achieves better classification performance and can potentially be used in automatic cervical cytology screening systems.
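The core propagation step of a graph convolutional layer, which spreads CNN features across the constructed graph to produce relation-aware representations, can be sketched with NumPy. This uses the common symmetric normalization relu(D^-1/2 (A+I) D^-1/2 X W); the graph, feature sizes, and weights are illustrative, not the paper's actual configuration:

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution step: relu(D^-1/2 (A+I) D^-1/2 X W).
    A: adjacency matrix, X: node features, W: learnable weights."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # symmetric degree normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

# Toy graph of 3 cluster nodes with 4-dim CNN features mapped to 2 dims.
rng = np.random.default_rng(0)
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])                # a simple path graph
X = rng.normal(size=(3, 4))                 # per-node CNN features
W = rng.normal(size=(4, 2))                 # layer weights
H = gcn_layer(A, X, W)                      # relation-aware features, (3, 2)
```

Each output row mixes a node's own features with those of its neighbours, which is exactly the "propagate node dependencies" step described in the abstract.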
Affiliation(s)
- Jun Shi, School of Software, Hefei University of Technology, Hefei 230601, China
- Ruoyu Wang, School of Software, Hefei University of Technology, Hefei 230601, China
- Yushan Zheng, Image Processing Center, School of Astronautics, Beihang University, Beijing 100191, China; Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing 100191, China; Beijing Key Laboratory of Digital Media, Beihang University, Beijing 100191, China
- Zhiguo Jiang, Image Processing Center, School of Astronautics, Beihang University, Beijing 100191, China; Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing 100191, China; Beijing Key Laboratory of Digital Media, Beihang University, Beijing 100191, China
- Haopeng Zhang, Image Processing Center, School of Astronautics, Beihang University, Beijing 100191, China; Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing 100191, China; Beijing Key Laboratory of Digital Media, Beihang University, Beijing 100191, China
- Lanlan Yu, Motic (Xiamen) Medical Diagnostic Systems Co. Ltd., Xiamen 361101, China
|
29
|
Tschuchnig ME, Oostingh GJ, Gadermayr M. Generative Adversarial Networks in Digital Pathology: A Survey on Trends and Future Potential. Patterns (N Y) 2020; 1:100089. [PMID: 33205132] [PMCID: PMC7660380] [DOI: 10.1016/j.patter.2020.100089]
Abstract
Image analysis in the field of digital pathology has recently gained increased popularity. The use of high-quality whole-slide scanners enables the fast acquisition of large amounts of image data, showing extensive context and microscopic detail at the same time. Simultaneously, novel machine-learning algorithms have boosted the performance of image analysis approaches. In this paper, we focus on a particularly powerful class of architectures, the so-called generative adversarial networks (GANs), applied to histological image data. Besides improving performance, GANs also enable previously intractable application scenarios in this field. However, GANs can also introduce bias. Here, we summarize the recent state-of-the-art developments in a generalizing notation, present the main applications of GANs, and give an outlook on selected promising approaches and their possible future applications. In addition, we identify currently unavailable methods with potential for future applications.
Affiliation(s)
- Maximilian E. Tschuchnig, Department of Information Technologies and Systems Management, Salzburg University of Applied Sciences, 5412 Puch bei Hallein, Austria; Department of Biomedical Sciences, Salzburg University of Applied Sciences, 5412 Puch bei Hallein, Austria
- Gertie J. Oostingh, Department of Biomedical Sciences, Salzburg University of Applied Sciences, 5412 Puch bei Hallein, Austria
- Michael Gadermayr, Department of Information Technologies and Systems Management, Salzburg University of Applied Sciences, 5412 Puch bei Hallein, Austria
|
30
|
Lin E, Lin CH, Lane HY. Relevant Applications of Generative Adversarial Networks in Drug Design and Discovery: Molecular De Novo Design, Dimensionality Reduction, and De Novo Peptide and Protein Design. Molecules 2020; 25:E3250. [PMID: 32708785] [PMCID: PMC7397124] [DOI: 10.3390/molecules25143250]
Abstract
A growing body of evidence now suggests that artificial intelligence and machine learning techniques can serve as an indispensable foundation for the process of drug design and discovery. In light of the latest advancements in computing technologies, deep learning algorithms are being created during the development of clinically useful drugs for the treatment of a number of diseases. In this review, we focus on the latest developments in three particular arenas of drug design and discovery research that use deep learning approaches such as generative adversarial network (GAN) frameworks. First, we review drug design and discovery studies that leverage various GAN techniques for molecular de novo design. In addition, we describe various GAN models used for the dimensionality reduction of single-cell data in the preclinical stage of the drug development pipeline. Furthermore, we depict several studies of de novo peptide and protein design using GAN frameworks. Moreover, we outline the limitations of previous drug design and discovery studies using GAN models. Finally, we present a discussion of directions and challenges for future research.
Affiliation(s)
- Eugene Lin, Department of Biostatistics, University of Washington, Seattle, WA 98195, USA; Department of Electrical & Computer Engineering, University of Washington, Seattle, WA 98195, USA; Graduate Institute of Biomedical Sciences, China Medical University, Taichung 40402, Taiwan
- Chieh-Hsin Lin, Graduate Institute of Biomedical Sciences, China Medical University, Taichung 40402, Taiwan; Department of Psychiatry, Kaohsiung Chang Gung Memorial Hospital, Chang Gung University College of Medicine, Kaohsiung 83301, Taiwan; School of Medicine, Chang Gung University, Taoyuan 33302, Taiwan
- Hsien-Yuan Lane, Graduate Institute of Biomedical Sciences, China Medical University, Taichung 40402, Taiwan; Department of Psychiatry, China Medical University Hospital, Taichung 40447, Taiwan; Brain Disease Research Center, China Medical University Hospital, Taichung 40447, Taiwan; Department of Psychology, College of Medical and Health Sciences, Asia University, Taichung 41354, Taiwan
|
31
|
Mudali D, Jeevanandam J, Danquah MK. Probing the characteristics and biofunctional effects of disease-affected cells and drug response via machine learning applications. Crit Rev Biotechnol 2020; 40:951-977. [PMID: 32633615] [DOI: 10.1080/07388551.2020.1789062]
Abstract
Drug-induced transformations in disease characteristics at the cellular and molecular level offer the opportunity to predict and evaluate the efficacy of pharmaceutical ingredients, while enabling the optimal design of new and improved drugs with enhanced pharmacokinetics and pharmacodynamics. Machine learning is a promising in-silico tool used to simulate cells with specific disease properties and to determine their response toward drug uptake. Differences in the properties of normal and infected cells, including biophysical, biochemical and physiological characteristics, play a key role in developing fundamental cellular probing platforms for machine learning applications. Cellular features can be extracted periodically from drug-treated, infected, and normal cells via image segmentation in order to probe dynamic differences in cell behavior. Cellular segmentation can be evaluated to reflect the level of drug effect on a distinct cell or group of cells via probability scoring. This article provides an account of the use of machine learning methods to probe differences in the biophysical, biochemical and physiological characteristics of infected cells in response to the pharmacokinetic uptake of drug ingredients, for application in cancer, diabetes and neurodegenerative disease therapies.
Affiliation(s)
- Deborah Mudali, Department of Computer Science, University of Tennessee, Chattanooga, TN, USA
- Jaison Jeevanandam, Department of Chemical Engineering, Faculty of Engineering and Science, Curtin University, Miri, Malaysia
- Michael K Danquah, Chemical Engineering Department, University of Tennessee, Chattanooga, TN, USA
|
32
|
Yang X, Lin Y, Wang Z, Li X, Cheng KT. Bi-Modality Medical Image Synthesis Using Semi-Supervised Sequential Generative Adversarial Networks. IEEE J Biomed Health Inform 2020; 24:855-865. [DOI: 10.1109/jbhi.2019.2922986]
|
33
|
Lin E, Mukherjee S, Kannan S. A deep adversarial variational autoencoder model for dimensionality reduction in single-cell RNA sequencing analysis. BMC Bioinformatics 2020; 21:64. [PMID: 32085701] [PMCID: PMC7035735] [DOI: 10.1186/s12859-020-3401-5]
Abstract
BACKGROUND Single-cell RNA sequencing (scRNA-seq) is an emerging technology that can assess the function of an individual cell and cell-to-cell variability at the single cell level in an unbiased manner. Dimensionality reduction is an essential first step in downstream analysis of the scRNA-seq data. However, the scRNA-seq data are challenging for traditional methods due to their high dimensional measurements as well as an abundance of dropout events (that is, zero expression measurements). RESULTS To overcome these difficulties, we propose DR-A (Dimensionality Reduction with Adversarial variational autoencoder), a data-driven approach to fulfill the task of dimensionality reduction. DR-A leverages a novel adversarial variational autoencoder-based framework, a variant of generative adversarial networks. DR-A is well-suited for unsupervised learning tasks for the scRNA-seq data, where labels for cell types are costly and often impossible to acquire. Compared with existing methods, DR-A is able to provide a more accurate low dimensional representation of the scRNA-seq data. We illustrate this by utilizing DR-A for clustering of scRNA-seq data. CONCLUSIONS Our results indicate that DR-A significantly enhances clustering performance over state-of-the-art methods.
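The autoencoder side of this idea can be sketched in a few lines of NumPy: a linear encoder maps high-dimensional expression vectors to a low-dimensional embedding, and a decoder is trained to reconstruct the input. This is only a sketch of the reduction principle; DR-A additionally uses a variational formulation and an adversarial (GAN-style) objective, both omitted here, and all sizes and hyperparameters are made up:

```python
import numpy as np

# Minimal linear autoencoder for dimensionality reduction, trained by
# plain gradient descent on the mean squared reconstruction error.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))                 # 20 "cells" x 5 "genes" (toy data)
W_enc = rng.normal(scale=0.1, size=(5, 2))   # encoder: 5 -> 2 dimensions
W_dec = rng.normal(scale=0.1, size=(2, 5))   # decoder: 2 -> 5 dimensions

def recon_loss(X, W_enc, W_dec):
    R = X @ W_enc @ W_dec - X                # reconstruction residual
    return (R * R).mean()

first = recon_loss(X, W_enc, W_dec)
lr = 0.05
for _ in range(200):
    Z = X @ W_enc                            # low-dimensional embedding
    R = Z @ W_dec - X
    g_dec = 2 * Z.T @ R / len(X)             # gradient w.r.t. decoder weights
    g_enc = 2 * X.T @ (R @ W_dec.T) / len(X) # gradient w.r.t. encoder weights
    W_enc -= lr * g_enc
    W_dec -= lr * g_dec
last = recon_loss(X, W_enc, W_dec)
# the reconstruction error shrinks as the 2-D embedding is learned;
# the rows of Z would then feed a clustering step, as in the abstract
```

The rows of `Z` play the role of the low-dimensional representation that the abstract feeds into clustering; the adversarial component of DR-A would additionally constrain the distribution of `Z`.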
Affiliation(s)
- Eugene Lin, Department of Electrical & Computer Engineering, University of Washington, Seattle, WA 98195, USA; Department of Biostatistics, University of Washington, Seattle, WA 98195, USA; Graduate Institute of Biomedical Sciences, China Medical University, Taichung, Taiwan
- Sudipto Mukherjee, Department of Electrical & Computer Engineering, University of Washington, Seattle, WA 98195, USA
- Sreeram Kannan, Department of Electrical & Computer Engineering, University of Washington, Seattle, WA 98195, USA
|
34
|
Van Eycke YR, Foucart A, Decaestecker C. Strategies to Reduce the Expert Supervision Required for Deep Learning-Based Segmentation of Histopathological Images. Front Med (Lausanne) 2019; 6:222. [PMID: 31681779] [PMCID: PMC6803466] [DOI: 10.3389/fmed.2019.00222]
Abstract
The emergence of computational pathology comes with a demand to extract more and more information from each tissue sample. Such information extraction often requires the segmentation of numerous histological objects (e.g., cell nuclei, glands, etc.) in histological slide images, a task for which deep learning algorithms have demonstrated their effectiveness. However, these algorithms require many training examples to be efficient and robust. For this purpose, pathologists must manually segment hundreds or even thousands of objects in histological images, i.e., a long, tedious and potentially biased task. The present paper aims to review strategies that could help provide the very large number of annotated images needed to automate the segmentation of histological images using deep learning. This review identifies and describes four different approaches: the use of immunohistochemical markers as labels, realistic data augmentation, Generative Adversarial Networks (GAN), and transfer learning. In addition, we describe alternative learning strategies that can use imperfect annotations. Adding real data with high-quality annotations to the training set is a safe way to improve the performance of a well configured deep neural network. However, the present review provides new perspectives through the use of artificially generated data and/or imperfect annotations, in addition to transfer learning opportunities.
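Of the four strategies reviewed, data augmentation is the simplest to illustrate in code. The sketch below covers only the geometric part (flips and right-angle rotations) with NumPy; the "realistic" augmentation discussed in the review would go further, e.g. stain or elastic transforms. The key point shown is that image and annotation mask must receive the same transform so labels stay aligned:

```python
import numpy as np

def augment(image, mask, rng):
    """Apply the same random flip/rotation to an image and its
    segmentation mask, so the annotations stay pixel-aligned.
    Geometric augmentation only; this is an illustrative sketch."""
    k = rng.integers(0, 4)                  # random rotation by k * 90 degrees
    image, mask = np.rot90(image, k), np.rot90(mask, k)
    if rng.random() < 0.5:                  # random horizontal flip
        image, mask = np.fliplr(image), np.fliplr(mask)
    return image, mask

rng = np.random.default_rng(0)
img = np.arange(16.0).reshape(4, 4)         # toy 4x4 "histology patch"
msk = (img > 7).astype(int)                 # toy binary annotation mask
aug_img, aug_msk = augment(img, msk, rng)
# the augmented mask still labels exactly the same pixels of the image
```

Because the transform is shared, one manually annotated patch yields up to eight geometrically distinct training examples without any additional pathologist effort.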
Affiliation(s)
- Yves-Rémi Van Eycke, Digital Image Analysis in Pathology (DIAPath), Center for Microscopy and Molecular Imaging (CMMI), Université Libre de Bruxelles, Charleroi, Belgium; Laboratory of Image Synthesis and Analysis (LISA), Ecole Polytechnique de Bruxelles, Université Libre de Bruxelles, Brussels, Belgium
- Adrien Foucart, Laboratory of Image Synthesis and Analysis (LISA), Ecole Polytechnique de Bruxelles, Université Libre de Bruxelles, Brussels, Belgium
- Christine Decaestecker, Digital Image Analysis in Pathology (DIAPath), Center for Microscopy and Molecular Imaging (CMMI), Université Libre de Bruxelles, Charleroi, Belgium; Laboratory of Image Synthesis and Analysis (LISA), Ecole Polytechnique de Bruxelles, Université Libre de Bruxelles, Brussels, Belgium
|