1. Ma J, Chen H. Efficient Supervised Pretraining of Swin-Transformer for Virtual Staining of Microscopy Images. IEEE Transactions on Medical Imaging 2024;43:1388-1399. [PMID: 38010933] [DOI: 10.1109/tmi.2023.3337253]
Abstract
Fluorescence staining is an important technique in the life sciences for labeling cellular constituents. However, it is time-consuming and makes simultaneous labeling of multiple targets difficult. Virtual staining, which does not rely on chemical labeling, has therefore been introduced. Recently, deep learning models such as transformers have been applied to virtual staining tasks, but their performance depends on large-scale pretraining, which hinders their adoption in the field. To reduce the reliance on large amounts of computation and data, we construct a Swin-Transformer model and propose an efficient supervised pretraining method based on the masked autoencoder (MAE). Specifically, we adopt downsampling and grid sampling to mask 75% of pixels and reduce the number of tokens, so that pretraining takes only 1/16 of the time required by the original MAE. We also design a supervised proxy task that predicts stained images in multiple styles instead of reconstructing masked pixels. Additionally, most virtual staining approaches are based on private datasets and evaluated with different metrics, making fair comparison difficult. We therefore develop a standard benchmark based on three public datasets and provide a baseline for future researchers. Extensive experiments on the three benchmark datasets show that the proposed method achieves the best performance both quantitatively and qualitatively, and ablation studies confirm the effectiveness of the proposed pretraining method. The benchmark and code are available at https://github.com/birkhoffkiki/CAS-Transformer.
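To illustrate the masking strategy described in this abstract, the minimal sketch below shows how grid sampling can retain one pixel per 2x2 cell, masking 75% of pixels and shrinking the token count. It is a generic reconstruction under stated assumptions, not the authors' released code; the function name and stride are illustrative.

```python
import numpy as np

def grid_sample_mask(image: np.ndarray, stride: int = 2) -> tuple[np.ndarray, np.ndarray]:
    """Keep one pixel per stride x stride cell (25% of pixels for stride=2),
    masking the remaining 75%, in the spirit of MAE-style masking by grid sampling."""
    h, w = image.shape[:2]
    mask = np.ones((h, w), dtype=bool)      # True = masked
    mask[::stride, ::stride] = False        # keep the sampled grid positions
    visible = image[::stride, ::stride]     # downsampled "visible" tokens
    return visible, mask

# Example: a 224x224 single-channel image keeps 112x112 visible pixels (25%).
img = np.random.rand(224, 224)
visible, mask = grid_sample_mask(img, stride=2)
print(visible.shape, mask.mean())  # (112, 112) 0.75
```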
2. Xu X, Xiao Z, Zhang F, Wang C, Wei B, Wang Y, Cheng B, Jia Y, Li Y, Li B, Guo H, Xu F. CellVisioner: A Generalizable Cell Virtual Staining Toolbox based on Few-Shot Transfer Learning for Mechanobiological Analysis. Research (Washington, D.C.) 2023;6:0285. [PMID: 38434246] [PMCID: PMC10907024] [DOI: 10.34133/research.0285]
Abstract
Visualizing cellular structures, especially the cytoskeleton and the nucleus, is crucial for understanding mechanobiology, but traditional fluorescence staining has inherent limitations such as phototoxicity and photobleaching. Virtual staining techniques provide an alternative approach to addressing these issues but often require a substantial amount of user training data. In this study, we develop a generalizable cell virtual staining toolbox (termed CellVisioner) based on few-shot transfer learning that requires substantially less user training data. CellVisioner can virtually stain F-actin and nuclei for various types of cells and extract single-cell parameters relevant to mechanobiology research. Taking label-free single-cell images as input, CellVisioner can predict cell mechanobiological status (e.g., Yes-associated protein nuclear/cytoplasmic ratio) and perform long-term monitoring of living cells. We envision that CellVisioner will be a powerful tool for facilitating on-site mechanobiological research.
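The mechanobiological readout mentioned above (e.g., the YAP nuclear/cytoplasmic ratio) reduces to a ratio of mean intensities once nuclear and whole-cell masks are available. The sketch below is a generic illustration of that computation, not part of CellVisioner; the function name and mask arguments are assumptions.

```python
import numpy as np

def yap_nc_ratio(stain: np.ndarray, nucleus_mask: np.ndarray, cell_mask: np.ndarray) -> float:
    """Mean nuclear intensity divided by mean cytoplasmic intensity for one cell.
    Masks are boolean arrays with the same shape as the (virtually stained) image."""
    nucleus_mask = nucleus_mask.astype(bool)
    cytoplasm_mask = cell_mask.astype(bool) & ~nucleus_mask
    return float(stain[nucleus_mask].mean() / stain[cytoplasm_mask].mean())
```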
Affiliation(s)
- Xiayu Xu, Zhanfeng Xiao, Fan Zhang, Changxiang Wang, Bo Wei, Yaohui Wang, Bo Cheng, Yuanbo Jia, Yuan Li, Bin Li, Feng Xu: The Key Laboratory of Biomedical Information Engineering of Ministry of Education, Xi’an Jiaotong University, Xi’an 710049, P.R. China; Bioinspired Engineering and Biomechanics Center (BEBC), Xi’an Jiaotong University, Xi’an 710049, P.R. China
- Hui Guo: Department of Medical Oncology, The First Affiliated Hospital of Xi’an Jiaotong University, Xi’an 710061, P.R. China
3. Li Y, Zhang Y, Liu JY, Wang K, Zhang K, Zhang GS, Liao XF, Yang G. Global Transformer and Dual Local Attention Network via Deep-Shallow Hierarchical Feature Fusion for Retinal Vessel Segmentation. IEEE Transactions on Cybernetics 2023;53:5826-5839. [PMID: 35984806] [DOI: 10.1109/tcyb.2022.3194099]
Abstract
Clinically, retinal vessel segmentation is a significant step in the diagnosis of fundus diseases. However, recent methods generally neglect the difference in semantic information between deep and shallow features and fail to capture global and local characterizations of fundus images simultaneously, resulting in limited segmentation performance for fine vessels. In this article, a global transformer (GT) and dual local attention (DLA) network via deep-shallow hierarchical feature fusion (GT-DLA-dsHFF) is investigated to address these limitations. First, the GT is developed to integrate global information in the retinal image; it effectively captures long-distance dependencies between pixels, alleviating the discontinuity of blood vessels in the segmentation results. Second, the DLA, constructed using dilated convolutions with varied dilation rates, unsupervised edge detection, and a squeeze-excitation block, is proposed to extract local vessel information and consolidate edge details in the segmentation result. Finally, a novel deep-shallow hierarchical feature fusion (dsHFF) algorithm is studied to fuse features at different scales in the deep learning framework, mitigating the attenuation of valid information during feature fusion. We verified GT-DLA-dsHFF on four typical fundus image datasets. The experimental results demonstrate that GT-DLA-dsHFF achieves superior performance against current methods, and detailed discussions verify the efficacy of the three proposed modules. Segmentation results on diseased images show the robustness of the proposed GT-DLA-dsHFF. Implementation code will be available at https://github.com/YangLibuaa/GT-DLA-dsHFF.
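For readers unfamiliar with the squeeze-excitation component named in the DLA description, a generic PyTorch sketch of such a block follows; it is illustrative only and not the authors' GT-DLA-dsHFF implementation, and the reduction ratio is an assumption.

```python
import torch
import torch.nn as nn

class SqueezeExcite(nn.Module):
    """Generic squeeze-and-excitation block: global average pooling,
    a bottleneck MLP, and channel-wise gating of the input feature map."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3))).view(b, c, 1, 1)  # squeeze then excite
        return x * w                                      # reweight channels
```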
4. Xu X, Wang Z, Deng C, Yuan H, Ji S. Towards Improved and Interpretable Deep Metric Learning via Attentive Grouping. IEEE Transactions on Pattern Analysis and Machine Intelligence 2023;45:1189-1200. [PMID: 35180077] [DOI: 10.1109/tpami.2022.3152495]
Abstract
Grouping has been commonly used in deep metric learning for computing diverse features. To improve performance and interpretability, we propose an improved and interpretable grouping method that can be integrated flexibly with any metric learning framework. Specifically, our method is based on the attention mechanism with a learnable query for each group. The query is fully trainable and can capture group-specific information when combined with the diversity loss. An appealing property of our method is that it naturally lends itself to interpretability: the attention scores between the learnable query and each spatial position can be interpreted as the importance of that position. We formally show that our proposed grouping method is invariant to spatial permutations of features. When used as a module in convolutional neural networks, our method leads to translational invariance. We conduct comprehensive experiments to evaluate our method. Our quantitative results indicate that the proposed method outperforms prior methods consistently and significantly across different datasets, evaluation metrics, base models, and loss functions. To the best of our knowledge, our interpretation results demonstrate for the first time that the proposed method enables the learning of diverse and stable semantic features across groups.
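A minimal sketch of the core idea described in this abstract, attention between a learnable per-group query and the spatial positions of a feature map, is given below. It is a generic reconstruction (the diversity loss and metric-learning head are omitted) and not the authors' code; the class and parameter names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentiveGrouping(nn.Module):
    """Each group owns a learnable query; attention scores over spatial
    positions pool the feature map into one embedding per group."""
    def __init__(self, dim: int, num_groups: int):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_groups, dim))

    def forward(self, feats: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        # feats: (batch, dim, H, W) -> tokens: (batch, H*W, dim)
        b, d, h, w = feats.shape
        tokens = feats.flatten(2).transpose(1, 2)
        scores = torch.einsum("gd,bnd->bgn", self.queries, tokens) / d ** 0.5
        attn = F.softmax(scores, dim=-1)                       # importance of each position per group
        grouped = torch.einsum("bgn,bnd->bgd", attn, tokens)   # (batch, num_groups, dim)
        return grouped, attn
```

Because the attention pools over a flattened set of positions, permuting those positions leaves the pooled group embeddings unchanged, which matches the permutation-invariance property claimed in the abstract.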
5. Nijiati M, Tuersun A, Zhang Y, Yuan Q, Gong P, Abulizi A, Tuoheti A, Abulaiti A, Zou X. A symmetric prior knowledge based deep learning model for intracerebral hemorrhage lesion segmentation. Front Physiol 2022;13:977427. [PMID: 36505076] [PMCID: PMC9727183] [DOI: 10.3389/fphys.2022.977427]
Abstract
Background: Accurate localization and classification of intracerebral hemorrhage (ICH) lesions are of great significance for the treatment and prognosis of patients with ICH. The purpose of this study is to develop a symmetric prior knowledge based deep learning model to segment ICH lesions in computed tomography (CT). Methods: A novel symmetric Transformer network (Sym-TransNet) is designed to segment ICH lesions in CT images. A cohort of 1,157 patients diagnosed with ICH is established to train (n = 857), validate (n = 100), and test (n = 200) the Sym-TransNet. A healthy cohort of 200 subjects is added, establishing a test set with balanced positive and negative cases (n = 400), to further evaluate the accuracy, sensitivity, and specificity of the diagnosis of ICH. The segmentation results are obtained after data pre-processing and Sym-TransNet. The DICE coefficient is used to evaluate the similarity between the segmentation results and the segmentation gold standard. Furthermore, some recent deep learning methods are reproduced to compare with Sym-TransNet, and statistical analysis is performed to prove the statistical significance of the proposed method. Ablation experiments are conducted to prove that each component in Sym-TransNet could effectively improve the DICE coefficient of ICH lesions. Results: For the segmentation of ICH lesions, the DICE coefficient of Sym-TransNet is 0.716 ± 0.031 in the test set which contains 200 CT images of ICH. The DICE coefficients of five subtypes of ICH, including intraparenchymal hemorrhage (IPH), intraventricular hemorrhage (IVH), extradural hemorrhage (EDH), subdural hemorrhage (SDH), and subarachnoid hemorrhage (SAH), are 0.784 ± 0.039, 0.680 ± 0.049, 0.359 ± 0.186, 0.534 ± 0.455, and 0.337 ± 0.044, respectively. Statistical results show that the proposed Sym-TransNet can significantly improve the DICE coefficient of ICH lesions in most cases. In addition, the accuracy, sensitivity, and specificity of Sym-TransNet in the diagnosis of ICH in 400 CT images are 91.25%, 98.50%, and 84.00%, respectively. Conclusion: Compared with recent mainstream deep learning methods, the proposed Sym-TransNet can segment and identify different types of lesions from CT images of ICH patients more effectively. Moreover, the Sym-TransNet can diagnose ICH more stably and efficiently, which has clinical application prospects.
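The DICE coefficient used above measures the overlap between a predicted lesion mask and the segmentation gold standard. The sketch below is a generic implementation for binary masks, not the authors' evaluation code; the smoothing constant is an assumption.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|P & G| / (|P| + |G|) for binary lesion masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps))
```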
Affiliation(s)
- Mayidili Nijiati, Abudouresuli Tuersun, Qing Yuan, Awanisa Tuoheti, Adili Abulaiti: Department of Radiology, The First People’s Hospital of Kashi Prefecture, Kashi, China
- Xiaoguang Zou: Clinical Medical Research Center, The First People’s Hospital of Kashi Prefecture, Kashi, China
- Correspondence: Adili Abulaiti; Xiaoguang Zou
6. Learned end-to-end high-resolution lensless fiber imaging towards real-time cancer diagnosis. Sci Rep 2022;12:18846. [PMID: 36344626] [PMCID: PMC9640670] [DOI: 10.1038/s41598-022-23490-5]
Abstract
Recent advances in label-free histology promise a new era for real-time diagnosis in neurosurgery. Deep learning using autofluorescence is promising for tumor classification without a histochemical staining process. High image resolution and minimally invasive diagnostics with negligible tissue damage are of great importance. The state of the art is raster-scanning endoscopy, but the distal lens optics limits the probe size. Lensless fiber-bundle endoscopy offers both small diameters of a few hundred microns and suitability as single-use probes, which is beneficial for sterilization. The problem is the inherent honeycomb artifacts of coherent fiber bundles (CFBs). For the first time, we demonstrate end-to-end lensless fiber imaging that exploits the near field. The framework includes resolution-enhancement and classification networks that use single-shot CFB images to provide both high-resolution imaging and tumor diagnosis. The well-trained resolution-enhancement network not only recovers high-resolution features beyond the physical limitations of the CFB but also helps improve the tumor recognition rate. For glioblastoma in particular, the resolution-enhancement network increases the classification accuracy from 90.8% to 95.6%. The novel technique enables histological real-time imaging with lensless fiber endoscopy and is promising for quick, minimally invasive intraoperative treatment and cancer diagnosis in neurosurgery.
7. Somani A, Ahmed Sekh A, Opstad IS, Birna Birgisdottir Å, Myrmel T, Singh Ahluwalia B, Horsch A, Agarwal K, Prasad DK. Virtual labeling of mitochondria in living cells using correlative imaging and physics-guided deep learning. Biomedical Optics Express 2022;13:5495-5516. [PMID: 36425635] [PMCID: PMC9664879] [DOI: 10.1364/boe.464177]
Abstract
Mitochondria play a crucial role in cellular metabolism. This paper presents a novel method to visualize mitochondria in living cells without the use of fluorescent markers. We propose a physics-guided deep learning approach for obtaining virtually labeled micrographs of mitochondria from bright-field images. We integrate a microscope's point spread function in the learning of an adversarial neural network to improve virtual labeling. We show results (average Pearson correlation 0.86) significantly better than those achieved by the state of the art (0.71) for virtual labeling of mitochondria. We also provide new insights into the virtual labeling problem and suggest additional metrics for quality assessment. The results show that our virtual labeling approach is a powerful way of segmenting and tracking individual mitochondria in bright-field images, results previously achievable only for fluorescently labeled mitochondria.
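The Pearson correlation used above to score virtual labeling can be computed directly between the predicted and ground-truth fluorescence images. The sketch below is a generic pixel-wise implementation, not the authors' evaluation code.

```python
import numpy as np

def pearson_correlation(pred: np.ndarray, target: np.ndarray) -> float:
    """Pearson correlation between a predicted (virtually labeled) image and
    the fluorescence ground truth, computed over all pixels."""
    p = pred.ravel().astype(np.float64)
    t = target.ravel().astype(np.float64)
    p -= p.mean()
    t -= t.mean()
    return float((p * t).sum() / (np.sqrt((p * p).sum()) * np.sqrt((t * t).sum()) + 1e-12))
```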
Affiliation(s)
- Ayush Somani, Alexander Horsch, Dilip K. Prasad: Bio-AI Lab, Department of Computer Science, UiT The Arctic University of Norway, Tromsø, 9037, Norway
- Arif Ahmed Sekh: Computer Science and Engineering, XIM University, Bhubaneswar, 751002, India
- Ida S. Opstad, Krishna Agarwal: Department of Physics and Technology, UiT The Arctic University of Norway, Tromsø, 9037, Norway
- Åsa Birna Birgisdottir, Truls Myrmel: Cardiovascular group, Department of Clinical Medicine, UiT The Arctic University of Norway, Tromsø, 9037, Norway
8. Label-free prediction of cell painting from brightfield images. Sci Rep 2022;12:10001. [PMID: 35705591] [PMCID: PMC9200748] [DOI: 10.1038/s41598-022-12914-x]
Abstract
Cell Painting is a high-content, image-based assay applied in drug discovery to predict bioactivity, assess toxicity, and understand mechanisms of action of chemical and genetic perturbations. We investigate label-free Cell Painting by predicting the five fluorescent Cell Painting channels from brightfield input. We train and validate two deep learning models with a dataset representing 17 batches, and we evaluate on batches treated with compounds from a phenotypic set. The mean Pearson correlation coefficient of the predicted images across all channels is 0.84. Without incorporating features into the model training, we achieved a mean correlation of 0.45 with ground-truth features extracted using a segmentation-based feature extraction pipeline. Additionally, we identified 30 features that correlated at greater than 0.8 with the ground truth. Toxicity analysis on the label-free Cell Painting resulted in a sensitivity of 62.5% and a specificity of 99.3% on images from unseen batches. We provide a breakdown of the feature profiles by channel and feature type to understand the potential and limitations of label-free morphological profiling. We demonstrate that label-free Cell Painting has the potential to be used for downstream analyses and could allow imaging channels to be repurposed for other, non-generic fluorescent stains of more targeted biological interest.
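The toxicity figures above follow the usual definitions of sensitivity and specificity from confusion-matrix counts. A minimal helper is shown below for reference; the example counts are hypothetical and chosen only to reproduce similar percentages, not taken from the paper.

```python
def sensitivity_specificity(tp: int, fp: int, tn: int, fn: int) -> tuple[float, float]:
    """Sensitivity (true-positive rate) and specificity (true-negative rate)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical counts for illustration only (not the paper's data):
# 16 toxic compounds with 10 flagged -> 0.625; 150 non-toxic with 149 cleared -> ~0.993.
print(sensitivity_specificity(tp=10, fp=1, tn=149, fn=6))
```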
9. Huang S, Li J, Xiao Y, Shen N, Xu T. RTNet: Relation Transformer Network for Diabetic Retinopathy Multi-Lesion Segmentation. IEEE Transactions on Medical Imaging 2022;41:1596-1607. [PMID: 35041595] [DOI: 10.1109/tmi.2022.3143833]
Abstract
Automatic segmentation of diabetic retinopathy (DR) lesions is of great value in assisting ophthalmologists with diagnosis. Although much research has been conducted on this task, most prior works pay more attention to network design than to the pathological associations among lesions. By investigating the pathogenic causes of DR lesions in advance, we found that certain lesions lie close to specific vessels and present relative patterns to each other. Motivated by this observation, we propose a relation transformer block (RTB) that incorporates attention mechanisms at two main levels: a self-attention transformer exploits global dependencies among lesion features, while a cross-attention transformer allows interactions between lesion and vessel features by integrating valuable vascular information to alleviate ambiguity in lesion detection caused by complex fundus structures. In addition, to capture small lesion patterns first, we propose a global transformer block (GTB) that preserves detailed information in the deep network. By integrating the above blocks in a dual-branch design, our network segments the four kinds of lesions simultaneously. Comprehensive experiments on the IDRiD and DDR datasets demonstrate the superiority of our approach, which achieves competitive performance compared to state-of-the-art methods.
10. Liu Y, Ji S. CleftNet: Augmented Deep Learning for Synaptic Cleft Detection From Brain Electron Microscopy. IEEE Transactions on Medical Imaging 2021;40:3507-3518. [PMID: 34129494] [PMCID: PMC8674103] [DOI: 10.1109/tmi.2021.3089547]
Abstract
Detecting synaptic clefts is a crucial step in investigating the biological function of synapses. Volume electron microscopy (EM) allows the identification of synaptic clefts by acquiring EM images with high resolution and fine detail. Machine learning approaches have been employed to automatically predict synaptic clefts from EM images. In this work, we propose a novel and augmented deep learning model, known as CleftNet, for improving synaptic cleft detection from brain EM images. We first propose two novel network components, known as the feature augmentor and the label augmentor, for augmenting features and labels to improve cleft representations. The feature augmentor can fuse global information from inputs and learn common morphological patterns in clefts, leading to augmented cleft features. In addition, it can generate outputs with varying dimensions, making it flexible to integrate into any deep network. The proposed label augmentor augments the label of each voxel from a value to a vector, which contains both the segmentation label and the boundary label. This allows the network to learn important shape information and to produce more informative cleft representations. Based on the proposed feature augmentor and label augmentor, we build CleftNet as a U-Net-like network. The effectiveness of our methods is evaluated on both external and internal tasks. CleftNet currently ranks #1 on the external task of the CREMI open challenge. In addition, both quantitative and qualitative results on the internal tasks show that our method outperforms the baseline approaches significantly.
11. Zhang Z, Jiang H, Liu J, Shi T. Improving the fidelity of CT image colorization based on pseudo-intensity model and tumor metabolism enhancement. Comput Biol Med 2021;138:104885. [PMID: 34626914] [DOI: 10.1016/j.compbiomed.2021.104885]
Abstract
BACKGROUND: Owing to their imaging principles, most medical images are gray-scale. Human eyes are more sensitive to color images than to gray-scale images. State-of-the-art medical image colorization results are unnatural and unrealistic, especially for some organs such as the lung field. METHOD: We propose a CT image colorization network that consists of a pseudo-intensity model, tumor metabolic enhancement, and a MemoPainter-cGAN colorization network. First, the distributions of both the density of CT images and the intensity of anatomical images are analyzed with the aim of building a pseudo-intensity model. Then, PET images, which are sensitive to tumor metabolism, are used to highlight the tumor regions. Finally, the MemoPainter-cGAN is used to generate colorized anatomical images. RESULTS: Our experiments verified that the mean structural similarity between the colorized images and the original color images is 0.995, indicating that the colorized images largely preserve the features of the originals. The average image information entropy is 6.62, which is 13.4% higher than that of the images before metabolism enhancement and colorization, indicating that image fidelity is significantly improved. CONCLUSIONS: Our method can generate vivid, natural-looking anatomical images based on prior knowledge of tissue or organ intensity. The colorized PET/CT images, with abundant anatomical knowledge and highly sensitive metabolic information, provide radiologists with a new modality that offers additional reference information.
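The image information entropy reported above is the Shannon entropy of the gray-level histogram. The sketch below is a generic implementation; the bin count and 8-bit intensity range are assumptions, not the paper's exact settings.

```python
import numpy as np

def image_entropy(gray: np.ndarray, bins: int = 256) -> float:
    """Shannon entropy (in bits) of the gray-level histogram of an image."""
    hist, _ = np.histogram(gray, bins=bins, range=(0, 255))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# The structural-similarity comparison can be computed with
# skimage.metrics.structural_similarity from scikit-image.
```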
Affiliation(s)
- Zexu Zhang, Huiyan Jiang, Jiaji Liu, Tianyu Shi: Software College, Northeastern University, No. 195 Chuangxin Road, Hunnan District, Shenyang, 110169, Liaoning, China
- Huiyan Jiang: also Key Laboratory of Intelligent Computing in Biomedical Image, Ministry of Education, Northeastern University, No. 195 Chuangxin Road, Hunnan District, Shenyang, 110169, Liaoning, China
12. Zhang G, Ning B, Hui H, Yu T, Yang X, Zhang H, Tian J, He W. Image-to-Images Translation for Multiple Virtual Histological Staining of Unlabeled Human Carotid Atherosclerotic Tissue. Mol Imaging Biol 2021;24:31-41. [PMID: 34622424] [DOI: 10.1007/s11307-021-01641-w]
Abstract
PURPOSE: Histological analysis of human carotid atherosclerotic plaques is critical for understanding atherosclerosis biology and developing effective plaque prevention and treatment for ischemic stroke. However, the histological staining process is laborious, tedious, variable, and destructive to the highly valuable atheroma tissue obtained from patients. PROCEDURES: We propose a deep learning-based method to translate bright-field microscopic images of unlabeled tissue sections into multiple virtually stained versions of the same sections. Using a pix2pix model, we trained a generative adversarial neural network to achieve image-to-images translation for multiple stains, including hematoxylin and eosin (H&E), picrosirius red (PSR), and Verhoeff-van Gieson (EVG) stains. RESULTS: Quantitative evaluation metrics indicate that the proposed approach achieves the best performance in comparison with other state-of-the-art methods. Blind evaluation by board-certified pathologists further demonstrates that the multiple virtual stains are highly consistent with standard histological stains. The generated histopathological features of atherosclerotic plaques, such as the necrotic core, neovascularization, cholesterol crystals, collagen, and elastic fibers, match well with those of standard histological stains. CONCLUSIONS: The proposed approach allows virtual staining of unlabeled human carotid plaque tissue images with multiple types of stains and identifies the histopathological features of atherosclerotic plaques in the same tissue sample, which could facilitate the development of personalized prevention and other interventional treatments for carotid atherosclerosis.
Affiliation(s)
- Guanghao Zhang: School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, 100190, China; CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China
- Bin Ning, Tengfei Yu, Hongxia Zhang, Wen He: Department of Ultrasound, Beijing Tiantan Hospital, Capital Medical University, Beijing, 100070, China
- Hui Hui, Xin Yang: CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China; University of Chinese Academy of Sciences, Beijing, China
- Jie Tian: CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China; Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Medicine, Beihang University, Beijing, 100083, China; Zhuhai Precision Medical Center, Zhuhai People's Hospital, Affiliated With Jinan University, Zhuhai, 519000, China
13. Helgadottir S, Midtvedt B, Pineda J, Sabirsh A, B. Adiels C, Romeo S, Midtvedt D, Volpe G. Extracting quantitative biological information from bright-field cell images using deep learning. Biophysics Reviews 2021;2:031401. [PMID: 38505631] [PMCID: PMC10903417] [DOI: 10.1063/5.0044782]
Abstract
Quantitative analysis of cell structures is essential for biomedical and pharmaceutical research. The standard imaging approach relies on fluorescence microscopy, where cell structures of interest are labeled by chemical staining techniques. However, these techniques are often invasive and sometimes even toxic to the cells, in addition to being time consuming, labor intensive, and expensive. Here, we introduce an alternative deep-learning-powered approach based on the analysis of bright-field images by a conditional generative adversarial neural network (cGAN). We show that this is a robust and fast-converging approach to generate virtually stained images from the bright-field images and, in subsequent downstream analyses, to quantify the properties of cell structures. Specifically, we train a cGAN to virtually stain lipid droplets, cytoplasm, and nuclei using bright-field images of human stem-cell-derived fat cells (adipocytes), which are of particular interest for nanomedicine and vaccine development. Subsequently, we use these virtually stained images to extract quantitative measures about these cell structures. Generating virtually stained fluorescence images is less invasive, less expensive, and more reproducible than standard chemical staining; furthermore, it frees up the fluorescence microscopy channels for other analytical probes, thus increasing the amount of information that can be extracted from each cell. To make this deep-learning-powered approach readily available for other users, we provide a Python software package, which can be easily personalized and optimized for specific virtual-staining and cell-profiling applications.
Affiliation(s)
- Saga Helgadottir, Jesús Pineda, Daniel Midtvedt, Giovanni Volpe: Department of Physics, University of Gothenburg, Gothenburg, Sweden
- Alan Sabirsh: Advanced Drug Delivery, Pharmaceutical Sciences, R&D, AstraZeneca, Gothenburg, Sweden
14. A Light-Weight Practical Framework for Feces Detection and Trait Recognition. Sensors 2020;20:2644. [PMID: 32384651] [PMCID: PMC7248729] [DOI: 10.3390/s20092644]
Abstract
Fecal trait examinations are critical in the clinical diagnosis of digestive diseases and can effectively reveal many aspects of digestive health. An automatic feces detection and trait recognition system based on a visual sensor could greatly alleviate the burden on medical inspectors and overcome many sanitation problems, such as infections. Unfortunately, the lack of digital medical images acquired with camera sensors, owing to patient privacy, has obstructed the development of automated fecal examination. In general, the computing power of an automatic fecal diagnosis machine or a mobile computer-aided diagnosis device is not always sufficient to run a deep network. Thus, a light-weight practical framework is proposed, which consists of three stages: illumination normalization, feces detection, and trait recognition. Illumination normalization effectively suppresses the illumination variances that degrade recognition accuracy. Neither the shape nor the location of feces is fixed, so shape-based and location-based object detection methods do not work well for this task, and this also makes it difficult to label images for training convolutional neural networks (CNNs) for detection. Our segmentation scheme requires neither training nor labeling: the feces object is accurately detected with a well-designed threshold-based segmentation scheme on a selected color component, reducing background disturbance, as sketched below. Finally, the preprocessed images are categorized into five classes with a light-weight shallow CNN, which is suitable for feces trait examinations in real hospital environments. Experimental results on our collected dataset demonstrate that the framework yields a satisfactory accuracy of 98.4% while requiring low computational complexity and storage.
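The training-free detection stage described above amounts to thresholding a chosen color component. The sketch below is a generic illustration of that idea; the channel index and threshold are illustrative placeholders, not the paper's selected component or values.

```python
import numpy as np

def threshold_segment(rgb: np.ndarray, channel: int = 0, threshold: float = 0.5) -> np.ndarray:
    """Training-free, threshold-based foreground segmentation on one color component."""
    component = rgb[..., channel].astype(np.float64)
    lo, hi = component.min(), component.max()
    component = (component - lo) / (hi - lo + 1e-12)  # normalize to [0, 1]
    return component > threshold                      # boolean foreground mask
```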