1. Sobotka D, Herold A, Perkonigg M, Beer L, Bastati N, Sablatnig A, Ba-Ssalamah A, Langs G. Improving Vessel Segmentation with Multi-Task Learning and Auxiliary Data Available Only During Model Training. Comput Med Imaging Graph 2024; 114:102369. [PMID: 38518411] [DOI: 10.1016/j.compmedimag.2024.102369]
Abstract
Liver vessel segmentation in magnetic resonance imaging data is important for the computational analysis of vascular remodeling, associated with a wide spectrum of diffuse liver diseases. Existing approaches rely on contrast-enhanced imaging data, but the necessary dedicated imaging sequences are not uniformly acquired. Images without contrast enhancement are acquired more frequently, but vessel segmentation is challenging and requires large-scale annotated data. We propose a multi-task learning framework to segment vessels in liver MRI without contrast. It exploits auxiliary contrast-enhanced MRI data available only during training to reduce the need for annotated training examples. Our approach draws on paired native and contrast-enhanced data with and without vessel annotations for model training. Results show that auxiliary data improve the accuracy of vessel segmentation, even if they are not available during inference. The advantage is most pronounced if only a few annotations are available for training, since the feature representation benefits from the shared task structure. A validation of this approach to augment a model for brain tumor segmentation confirms its benefits across different domains. An auxiliary informative imaging modality can thus augment expert annotations even if it is only available during training.
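The training setup described here, a primary segmentation head plus an auxiliary head that reconstructs the contrast-enhanced image from a shared encoder, with the auxiliary target needed only at training time, can be sketched as follows. This is a minimal illustration under assumed layer sizes and an assumed loss weight of 0.5, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SharedEncoderMultiTask(nn.Module):
    """Shared encoder with two heads: vessel segmentation (primary task)
    and synthesis of the contrast-enhanced image (auxiliary task)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv3d(32, 1, 1)   # vessel probability map
        self.aux_head = nn.Conv3d(32, 1, 1)   # synthetic contrast-enhanced image

    def forward(self, native):
        feats = self.encoder(native)
        return self.seg_head(feats), self.aux_head(feats)

model = SharedEncoderMultiTask()
native = torch.randn(2, 1, 32, 64, 64)      # non-contrast MRI patch
ce_target = torch.randn(2, 1, 32, 64, 64)   # paired contrast-enhanced image (training only)
mask = torch.randint(0, 2, (2, 1, 32, 64, 64)).float()

seg_logits, ce_pred = model(native)
loss = (nn.functional.binary_cross_entropy_with_logits(seg_logits, mask)
        + 0.5 * nn.functional.l1_loss(ce_pred, ce_target))  # weight 0.5 is an assumption
loss.backward()
# At inference only the native image and seg_head are needed.
```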
Affiliation(s)
- Daniel Sobotka: Computational Imaging Research Lab, Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Vienna, Austria
- Alexander Herold: Division of General and Paediatric Radiology, Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Vienna, Austria
- Matthias Perkonigg: Computational Imaging Research Lab, Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Vienna, Austria; Department of Medical Statistics, Informatics and Health Economics, Medical University of Innsbruck, Innsbruck, Austria
- Lucian Beer: Division of General and Paediatric Radiology, Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Vienna, Austria
- Nina Bastati: Division of General and Paediatric Radiology, Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Vienna, Austria
- Alina Sablatnig: Computational Imaging Research Lab, Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Vienna, Austria
- Ahmed Ba-Ssalamah: Division of General and Paediatric Radiology, Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Vienna, Austria
- Georg Langs: Computational Imaging Research Lab, Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Vienna, Austria
2. Deng L, Zou Y, Yang X, Wang J, Huang S. L2NLF: a novel linear-to-nonlinear framework for multi-modal medical image registration. Biomed Eng Lett 2024; 14:497-509. [PMID: 38645595] [PMCID: PMC11026354] [DOI: 10.1007/s13534-023-00344-1]
Abstract
In recent years, deep learning has driven significant progress in medical image registration, and non-rigid registration methods that use deep neural networks to generate a deformation field achieve higher accuracy. However, unlike monomodal medical image registration, multimodal medical image registration is a more complex and challenging task. This paper proposes a new linear-to-nonlinear framework (L2NLF) for multimodal medical image registration. The first, linear stage is essentially image conversion, which can reduce the difference between two images without changing the authenticity of the medical images, thus transforming multimodal registration into monomodal registration. The second, nonlinear stage is essentially unsupervised deformable registration based on a deep neural network. In this paper, a brand-new registration network, CrossMorph, is designed: a deep neural network with a U-Net-like structure. As the backbone of the encoder, the volume CrossFormer block better extracts local and global information, and the booster module promotes the recovery of more deep and shallow features. The qualitative and quantitative experimental results on T1 and T2 data from the brains of 240 patients show that L2NLF achieves an excellent registration effect in the image-conversion part with very low computation, without changing the authenticity of the converted image at all. Compared with current state-of-the-art registration methods, CrossMorph effectively reduces average surface distance, improves the Dice score, and improves the smoothness of the deformation field. The proposed methods have potential value in clinical application.
Affiliation(s)
- Liwei Deng: Heilongjiang Provincial Key Laboratory of Complex Intelligent System and Integration, School of Automation, Harbin University of Science and Technology, Harbin, 150080, China
- Yanchao Zou: Heilongjiang Provincial Key Laboratory of Complex Intelligent System and Integration, School of Automation, Harbin University of Science and Technology, Harbin, 150080, China
- Xin Yang: Department of Radiation Oncology, Sun Yat-Sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou, 510060, Guangdong, China
- Jing Wang: Institute for Brain Research and Rehabilitation, South China Normal University, Zhongshan Avenue, Guangzhou, 510631, China
- Sijuan Huang: Department of Radiation Oncology, Sun Yat-Sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou, 510060, Guangdong, China
3. Bottani S, Thibeau-Sutre E, Maire A, Ströer S, Dormont D, Colliot O, Burgos N. Contrast-enhanced to non-contrast-enhanced image translation to exploit a clinical data warehouse of T1-weighted brain MRI. BMC Med Imaging 2024; 24:67. [PMID: 38504179] [PMCID: PMC10953143] [DOI: 10.1186/s12880-024-01242-3]
Abstract
BACKGROUND Clinical data warehouses provide access to massive amounts of medical images, but these images are often heterogeneous. They can, for instance, include images acquired both with and without the injection of a gadolinium-based contrast agent. Harmonizing such data sets is thus fundamental to guarantee unbiased results, for example when performing differential diagnosis. Furthermore, classical neuroimaging software tools for feature extraction are typically applied only to images without gadolinium. The objective of this work is to evaluate how image translation can be used to exploit a highly heterogeneous data set containing both contrast-enhanced and non-contrast-enhanced images from a clinical data warehouse. METHODS We propose and compare different 3D U-Net and conditional GAN models to convert contrast-enhanced T1-weighted (T1ce) into non-contrast-enhanced (T1nce) brain MRI. These models were trained using 230 image pairs and tested on 77 image pairs from the clinical data warehouse of the Greater Paris area. RESULTS Validation using standard image similarity measures demonstrated that the similarity between real and synthetic T1nce images was higher than between real T1nce and T1ce images for all the models compared. The best-performing models were further validated on a segmentation task. We showed that tissue volumes extracted from synthetic T1nce images were closer to those of real T1nce images than volumes extracted from T1ce images. CONCLUSION We showed that deep learning models initially developed with research-quality data can synthesize T1nce from T1ce images of clinical quality, and that reliable features can be extracted from the synthetic images, demonstrating the ability of such methods to help exploit a data set coming from a clinical data warehouse.
Affiliation(s)
- Simona Bottani: Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, CNRS, Inria, Inserm, AP-HP, Hôpital de la Pitié-Salpêtrière, Paris, 75013, France
- Elina Thibeau-Sutre: Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, CNRS, Inria, Inserm, AP-HP, Hôpital de la Pitié-Salpêtrière, Paris, 75013, France
- Aurélien Maire: Innovation & Données - Département des Services Numériques, AP-HP, Paris, 75013, France
- Sebastian Ströer: Hôpital Pitié Salpêtrière, Department of Neuroradiology, AP-HP, Paris, 75012, France
- Didier Dormont: Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, CNRS, Inria, Inserm, AP-HP, Hôpital de la Pitié-Salpêtrière, DMU DIAMENT, Paris, 75013, France
- Olivier Colliot: Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, CNRS, Inria, Inserm, AP-HP, Hôpital de la Pitié-Salpêtrière, Paris, 75013, France
- Ninon Burgos: Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, CNRS, Inria, Inserm, AP-HP, Hôpital de la Pitié-Salpêtrière, Paris, 75013, France
4. Matsunaga T, Kono A, Matsuo H, Kitagawa K, Nishio M, Hashimura H, Izawa Y, Toba T, Ishikawa K, Katsuki A, Ohmura K, Murakami T. Development of Pericardial Fat Count Images Using a Combination of Three Different Deep-Learning Models: Image Translation Model From Chest Radiograph Image to Projection Image of Three-Dimensional Computed Tomography. Acad Radiol 2024; 31:822-829. [PMID: 37914626] [DOI: 10.1016/j.acra.2023.09.014]
Abstract
RATIONALE AND OBJECTIVES Pericardial fat (PF), the thoracic visceral fat surrounding the heart, promotes the development of coronary artery disease by inducing inflammation of the coronary arteries. To evaluate PF, we generated pericardial fat count images (PFCIs) from chest radiographs (CXRs) using a dedicated deep-learning model. MATERIALS AND METHODS We reviewed data of 269 consecutive patients who underwent coronary computed tomography (CT). We excluded patients with metal implants, pleural effusion, a history of thoracic surgery, or malignancy; thus, the data of 191 patients were used. We generated PFCIs from the projection of three-dimensional CT images, wherein fat accumulation is represented by a high pixel value. Three different deep-learning models, including CycleGAN, were combined in the proposed method to generate PFCIs from CXRs. A single CycleGAN-based model was used to generate PFCIs from CXRs for comparison with the proposed method. To evaluate the image quality of the generated PFCIs, the structural similarity index measure (SSIM), mean squared error (MSE), and mean absolute error (MAE) of (i) the PFCI generated using the proposed method and (ii) the PFCI generated using the single model were compared. RESULTS The mean SSIM, MSE, and MAE were 8.56 × 10⁻¹, 1.28 × 10⁻², and 3.57 × 10⁻², respectively, for the proposed model, and 7.62 × 10⁻¹, 1.98 × 10⁻², and 5.04 × 10⁻², respectively, for the single CycleGAN-based model. CONCLUSION PFCIs generated from CXRs with the proposed model showed better performance than those generated with the single model. The evaluation of PF without CT may be possible using the proposed method.
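The three image-quality metrics reported above are standard and easy to reproduce. A minimal sketch using scikit-image, on synthetic placeholder arrays rather than the study's images:

```python
import numpy as np
from skimage.metrics import structural_similarity, mean_squared_error

def compare_images(real: np.ndarray, generated: np.ndarray):
    """SSIM, MSE, and MAE between two images scaled to [0, 1]."""
    ssim = structural_similarity(real, generated, data_range=1.0)
    mse = mean_squared_error(real, generated)
    mae = float(np.mean(np.abs(real - generated)))
    return ssim, mse, mae

rng = np.random.default_rng(0)
real = rng.random((256, 256))
generated = np.clip(real + 0.05 * rng.standard_normal(real.shape), 0.0, 1.0)
print(compare_images(real, generated))
```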
Affiliation(s)
- Takaaki Matsunaga: Department of Radiology, Kobe University Graduate School of Medicine, Kobe, Japan
- Atsushi Kono: Department of Radiology, Kobe University Graduate School of Medicine, Kobe, Japan
- Hidetoshi Matsuo: Department of Radiology, Kobe University Graduate School of Medicine, Kobe, Japan
- Kaoru Kitagawa: Center for Radiology and Radiation Oncology, Kobe University Hospital, Kobe, Japan
- Mizuho Nishio: Department of Radiology, Kobe University Graduate School of Medicine, Kobe, Japan
- Hiromi Hashimura: Department of Radiology, Kobe University Graduate School of Medicine, Kobe, Japan
- Yu Izawa: Division of Cardiovascular Medicine, Department of Internal Medicine, Kobe University Graduate School of Medicine, Kobe, Japan
- Takayoshi Toba: Division of Cardiovascular Medicine, Department of Internal Medicine, Kobe University Graduate School of Medicine, Kobe, Japan
- Kazuki Ishikawa: Center for Radiology and Radiation Oncology, Kobe University Hospital, Kobe, Japan
- Takamichi Murakami: Department of Radiology, Kobe University Graduate School of Medicine, Kobe, Japan
5. Tiwary P, Bhattacharyya K, A P P. Cycle consistent twin energy-based models for image-to-image translation. Med Image Anal 2024; 91:103031. [PMID: 37988920] [DOI: 10.1016/j.media.2023.103031]
Abstract
Domain shift refers to a change in distributional characteristics between the training (source) and testing (target) datasets of a learning task, leading to a performance drop. For tasks involving medical images, domain shift may be caused by several factors, such as changes in the underlying imaging modalities, measuring devices, and staining mechanisms. Recent approaches address this issue via generative models based on the principles of adversarial learning, although these suffer from issues such as difficulty in training and lack of diversity. Motivated by these observations, we adapt an alternative class of deep generative models, the Energy-Based Models (EBMs), for the task of unpaired image-to-image translation of medical images. Specifically, we propose a novel method called Cycle Consistent Twin EBMs (CCT-EBM), which employs a pair of EBMs in the latent space of an auto-encoder trained on the source data. While one of the EBMs translates the source to the target domain, the other does the reverse, together with a novel consistency loss ensuring translation symmetry and coupling between the domains. We theoretically analyze the proposed method and show that our design leads to better translation between the domains with fewer Langevin mixing steps. We demonstrate the efficacy of our method through detailed quantitative and qualitative experiments on image segmentation tasks on three different datasets vis-à-vis state-of-the-art methods.
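Sampling from an EBM is typically done with Langevin dynamics, whose mixing steps the abstract refers to. A toy sketch of the update z ← z − (η/2)∇E(z) + √η·ε, run on an assumed quadratic energy rather than the paper's learned latent-space EBMs:

```python
import torch

def langevin_sample(energy_fn, z0: torch.Tensor, steps: int = 60, step_size: float = 0.01):
    """Unadjusted Langevin dynamics: z <- z - (eta/2) * grad E(z) + sqrt(eta) * noise.
    A well-shaped energy landscape needs fewer mixing steps, which is the
    efficiency argument made in the abstract."""
    z = z0.clone().requires_grad_(True)
    for _ in range(steps):
        energy = energy_fn(z).sum()
        grad, = torch.autograd.grad(energy, z)
        with torch.no_grad():
            z = z - 0.5 * step_size * grad + step_size ** 0.5 * torch.randn_like(z)
        z.requires_grad_(True)
    return z.detach()

# Toy quadratic energy centred at 1; samples concentrate around that mode.
energy = lambda z: 0.5 * ((z - 1.0) ** 2).sum(dim=-1)
samples = langevin_sample(energy, torch.randn(128, 2))
print(samples.mean(dim=0))  # close to (1, 1)
```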
Affiliation(s)
- Piyush Tiwary: Department of Electrical Communication Engineering, Indian Institute of Science, Bangalore, Karnataka 560012, India
- Kinjawl Bhattacharyya: Department of Electrical Engineering, Indian Institute of Technology Kharagpur, Kharagpur, West Bengal 721302, India
- Prathosh A P: Department of Electrical Communication Engineering, Indian Institute of Science, Bangalore, Karnataka 560012, India
6. Liu CH, Fu LW, Chen HH, Huang SL. Toward cell nuclei precision between OCT and H&E images translation using signal-to-noise ratio cycle-consistency. Comput Methods Programs Biomed 2023; 242:107824. [PMID: 37832427] [DOI: 10.1016/j.cmpb.2023.107824]
Abstract
Medical image-to-image translation is often difficult and of limited effectiveness due to differences in image acquisition mechanisms and the diverse structure of biological tissues. This work presents an unpaired image translation model between in-vivo optical coherence tomography (OCT) and ex-vivo hematoxylin and eosin (H&E) stained images without the need for image stacking, registration, post-processing, or annotation. The model can generate high-quality and highly accurate virtual medical images, and it is robust and bidirectional. Our framework introduces random noise to (1) blur redundant features, (2) defend against self-adversarial attacks, (3) stabilize inverse conversion, and (4) mitigate the impact of OCT speckle. We also demonstrate that our model can be pre-trained and then fine-tuned using images from different OCT systems in just a few epochs. Qualitative and quantitative comparisons with traditional image-to-image translation models show the robustness of our proposed signal-to-noise ratio (SNR) cycle-consistency method.
Affiliation(s)
- Chih-Hao Liu: Graduate Institute of Photonics and Optoelectronics, National Taiwan University, No.1, Sec. 4, Roosevelt Road, Taipei, 10617, Taiwan
- Li-Wei Fu: Graduate Institute of Communication Engineering, National Taiwan University, No.1, Sec. 4, Roosevelt Road, Taipei, 10617, Taiwan
- Homer H Chen: Graduate Institute of Communication Engineering, Department of Electrical Engineering, and Graduate Institute of Networking and Multimedia, National Taiwan University, No.1, Sec. 4, Roosevelt Road, Taipei, 10617, Taiwan
- Sheng-Lung Huang: Graduate Institute of Photonics and Optoelectronics, Department of Electrical Engineering, and All Vista Healthcare Center, National Taiwan University, No.1, Sec. 4, Roosevelt Road, Taipei, 10617, Taiwan
7. Jeong J, Wentland A, Mastrodicasa D, Fananapazir G, Wang A, Banerjee I, Patel BN. Synthetic dual-energy CT reconstruction from single-energy CT using artificial intelligence. Abdom Radiol (NY) 2023; 48:3537-3549. [PMID: 37665385] [DOI: 10.1007/s00261-023-04004-x]
Abstract
PURPOSE To develop and assess the utility of synthetic dual-energy CT (sDECT) images generated from single-energy CT (SECT) using two state-of-the-art generative adversarial network (GAN) architectures for artificial-intelligence-based image translation. METHODS In this retrospective study, 734 patients (389 female; 62.8 ± 14.9 years) who underwent enhanced DECT of the chest, abdomen, and pelvis between January 2018 and June 2019 were included. Using 70-keV images as input (n = 141,009) and 50-keV, iodine, and virtual unenhanced (VUE) images as outputs, separate models were trained using Pix2PixHD and CycleGAN. Model performance on the test set (n = 17,839) was evaluated using mean squared error, structural similarity index, and peak signal-to-noise ratio. To objectively test the utility of these models, synthetic iodine material-density and 50-keV images were generated from SECT images of 16 patients with gastrointestinal bleeding examined at another institution. The conspicuity of gastrointestinal bleeding on sDECT was compared with portal venous phase SECT. Synthetic VUE images were generated from 37 patients who underwent a CT urogram at another institution, and model performance was compared with true unenhanced images. RESULTS sDECT images from both Pix2PixHD and CycleGAN were qualitatively indistinguishable from true DECT by a board-certified radiologist (average accuracy 64.5%). Pix2PixHD had better quantitative performance than CycleGAN (e.g., structural similarity index for iodine: 87% vs. 46%, p < 0.001). sDECT using Pix2PixHD showed increased conspicuity for gastrointestinal bleeding and better removal of iodine on synthetic VUE images compared with CycleGAN. CONCLUSIONS sDECT from SECT using Pix2PixHD may afford some of the advantages of DECT.
Affiliation(s)
- Jiwoong Jeong: Department of Radiology, Mayo Clinic, 13400 E. Shea Blvd, Scottsdale, AZ, 85259, USA; School of Computing and Augmented Intelligence, Arizona State University, 699 S Mill Ave, Tempe, AZ, 85281, USA
- Andrew Wentland: Department of Radiology, University of Wisconsin, 600 Highland Ave, Madison, WI, 53792, USA
- Domenico Mastrodicasa: Department of Radiology, Stanford University, 300 Pasteur Dr., Stanford, CA, 94305, USA
- Ghaneh Fananapazir: Department of Radiology, University of California Davis, 4860 Y Street, Suite 3100, Sacramento, CA, 95817, USA
- Adam Wang: Department of Radiology, Stanford University, 300 Pasteur Dr., Stanford, CA, 94305, USA
- Imon Banerjee: Department of Radiology, Mayo Clinic, 13400 E. Shea Blvd, Scottsdale, AZ, 85259, USA
- Bhavik N Patel: Department of Radiology, Mayo Clinic, 13400 E. Shea Blvd, Scottsdale, AZ, 85259, USA
8. Xu X, Chen Y, Wu J, Lu J, Ye Y, Huang Y, Dou X, Li K, Wang G, Zhang S, Gong W. A novel one-to-multiple unsupervised domain adaptation framework for abdominal organ segmentation. Med Image Anal 2023; 88:102873. [PMID: 37421932] [DOI: 10.1016/j.media.2023.102873]
Abstract
Abdominal multi-organ segmentation in multi-sequence magnetic resonance images (MRI) is of great significance in many clinical scenarios, e.g., MRI-oriented pre-operative treatment planning. Labeling multiple organs on a single MR sequence is a time-consuming and labor-intensive task, let alone manual labeling on multiple MR sequences. Training a model on one sequence and generalizing it to other domains is one way to reduce the burden of manual annotation, but the domain gap often leads to poor generalization performance of such methods. Image-translation-based unsupervised domain adaptation (UDA) is a common way to address this domain-gap issue. However, existing methods focus less on keeping anatomical consistency and are limited to one-to-one domain adaptation, leading to low efficiency when adapting a model to multiple target domains. This work proposes a unified framework called OMUDA for one-to-multiple unsupervised domain-adaptive segmentation, where disentanglement between content and style is used to efficiently translate a source-domain image into multiple target domains. Moreover, generator refactoring and a style constraint are used in OMUDA to better maintain cross-modality structural consistency and reduce domain aliasing. The average Dice Similarity Coefficients (DSCs) of OMUDA for multiple sequences and organs on the in-house test set, the AMOS22 dataset, and the CHAOS dataset are 85.51%, 82.66%, and 91.38%, respectively, which are slightly lower than those of CycleGAN (85.66% and 83.40%) on the first two datasets and slightly higher than CycleGAN (91.36%) on the last dataset. Compared with CycleGAN, however, OMUDA reduces floating-point calculations by about 87% in the training phase and about 30% in the inference stage. The quantitative results in both segmentation performance and training efficiency demonstrate the usability of OMUDA in practical scenarios, such as the initial phase of product development.
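Content-style disentanglement of the kind OMUDA relies on is often implemented with adaptive instance normalization (AdaIN), where a target style is injected into shared content features; one content pass can then serve several target domains. The sketch below illustrates that generic mechanism, not the authors' exact architecture, and the sequence names are hypothetical:

```python
import torch

def adain(content: torch.Tensor, style_mean: torch.Tensor,
          style_std: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Adaptive instance normalization: strip the content features' own
    statistics, then impose the target style's statistics."""
    mean = content.mean(dim=(2, 3), keepdim=True)
    std = content.std(dim=(2, 3), keepdim=True) + eps
    return style_std * (content - mean) / std + style_mean

# One shared content tensor, re-styled toward three target domains.
content = torch.randn(1, 64, 32, 32)
styles = {name: (torch.randn(1, 64, 1, 1), torch.rand(1, 64, 1, 1))
          for name in ("T2", "DWI", "in-phase")}   # hypothetical sequences
translated = {name: adain(content, m, s) for name, (m, s) in styles.items()}
print({k: tuple(v.shape) for k, v in translated.items()})
```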
Affiliation(s)
- Xiaowei Xu: SenseTime Research, Shanghai, China; School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, China
- Yinan Chen: SenseTime Research, Shanghai, China; School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, China; West China Biomedical Big Data Center, West China Hospital, Sichuan University, Chengdu, Sichuan, China
- Jianghao Wu: School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, China
- Jiangshan Lu: School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, China
- Xin Dou: SenseBrain Technology, Princeton, NJ 08540, USA
- Kang Li: Shanghai Artificial Intelligence Laboratory, Shanghai, China; West China Biomedical Big Data Center, West China Hospital, Sichuan University, Chengdu, Sichuan, China; Med-X Center for Informatics, Sichuan University, Chengdu, Sichuan, China
- Guotai Wang: School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, China; Shanghai Artificial Intelligence Laboratory, Shanghai, China
- Shaoting Zhang: SenseTime Research, Shanghai, China; School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, China; Shanghai Artificial Intelligence Laboratory, Shanghai, China
- Wei Gong: Department of General Surgery, Xinhua Hospital, Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, 200092, China; Shanghai Key Laboratory of Biliary Tract Disease Research, Shanghai, 200092, China
9. Liu X, Li B, Liu C, Ta D. Virtual Fluorescence Translation for Biological Tissue by Conditional Generative Adversarial Network. Phenomics 2023; 3:408-420. [PMID: 37589024] [PMCID: PMC10425324] [DOI: 10.1007/s43657-023-00094-1]
Abstract
Fluorescence labeling and imaging provide an opportunity to observe the structure of biological tissues and play a crucial role in histopathology. However, labeling and imaging biological tissues still pose challenges, e.g., time-consuming tissue-preparation steps, expensive reagents, and signal bias due to photobleaching. To overcome these limitations, we present a deep-learning-based method for fluorescence translation of tissue sections, achieved by a conditional generative adversarial network (cGAN). Experimental results on mouse kidney tissues demonstrate that the proposed method can predict other types of fluorescence images from one raw fluorescence image, and can implement virtual multi-label fluorescent staining by merging the generated fluorescence images. Moreover, the proposed method effectively reduces the time-consuming and laborious preparation involved in imaging, saving cost and time.
Affiliation(s)
- Xin Liu: Academy for Engineering and Technology, Fudan University, Shanghai, 200433, China; State Key Laboratory of Medical Neurobiology, Fudan University, Shanghai, 200433, China
- Boyi Li: Academy for Engineering and Technology, Fudan University, Shanghai, 200433, China
- Chengcheng Liu: Academy for Engineering and Technology, Fudan University, Shanghai, 200433, China
- Dean Ta: Academy for Engineering and Technology, Fudan University, Shanghai, 200433, China; Center for Biomedical Engineering, Fudan University, Shanghai, 200433, China
10. Kumar A, Kaushal M, Sharma A. SAM C-GAN: a method for removal of face masks from masked faces. Signal Image Video Process 2023; 17:1-9. [PMID: 37362232] [PMCID: PMC10213599] [DOI: 10.1007/s11760-023-02602-2]
Abstract
The years of the COVID-19 pandemic prompted researchers to carry out benchmark work in face-mask detection. However, existing work does not address the problem of reconstructing the face region behind the mask and completing a face that can be used for face recognition. To address this problem, we propose a spatial attention module-based conditional generative adversarial network that generates plausible images of faces without masks by removing the face mask from the face region. The proposed method uses a self-created dataset consisting of faces with three types of face masks for training and testing. With the proposed method, an SSIM value of 0.91231 (3.89% higher) and a PSNR value of 30.9879 (3.17% higher) were obtained compared with the vanilla C-GAN method.
Affiliation(s)
- Akhil Kumar: School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, Tamil Nadu, India
- Manisha Kaushal: CSED, Derabassi Campus, Thapar Institute of Engineering and Technology, Punjab, India
11. Roy M, Wang F, Teodoro G, Bhattarai S, Bhargava M, Rekha TS, Aneja R, Kong J. Deep learning based registration of serial whole-slide histopathology images in different stains. J Pathol Inform 2023; 14:100311. [PMID: 37214150] [PMCID: PMC10193019] [DOI: 10.1016/j.jpi.2023.100311]
Abstract
For routine pathology diagnosis and imaging-based biomedical research, whole-slide image (WSI) analyses have been largely limited to a 2D tissue image space. For a more definitive tissue representation that supports fine-resolution spatial and integrative analyses, it is critical to extend such tissue-based investigations to a 3D tissue space with spatially aligned serial tissue WSIs in different stains, such as Hematoxylin and Eosin (H&E) and Immunohistochemistry (IHC) biomarkers. However, such WSI registration is technically challenging owing to the overwhelming image scale, complex changes in histology structure, and the significant difference in tissue appearance across stains. The goal of this study is to register serial sections from multi-stain histopathology whole-slide image blocks. We propose a novel translation-based deep-learning registration network, CGNReg, that spatially aligns serial WSIs stained in H&E and by IHC biomarkers without prior deformation information for model training. First, synthetic IHC images are produced from H&E slides through a robust image synthesis algorithm. Next, the synthetic and real IHC images are registered through a fully convolutional network with multi-scaled deformable vector fields and joint loss optimization. We perform the registration at full image resolution, retaining the tissue details in the results. Evaluated on a dataset of 76 breast cancer patients with one H&E and two IHC serial WSIs per patient, CGNReg shows promising performance compared with multiple state-of-the-art systems. Our results suggest that CGNReg can produce promising registration results with serial WSIs in different stains, enabling integrative 3D tissue-based biomedical investigations.
Affiliation(s)
- Mousumi Roy: Department of Computer Science, Stony Brook University, NY 11794, USA
- Fusheng Wang: Department of Computer Science, Stony Brook University, NY 11794, USA; Department of Biomedical Informatics, Stony Brook University, NY 11794, USA
- George Teodoro: Department of Computer Science, Federal University of Minas Gerais, Belo Horizonte 31270-901, Brazil
- Shristi Bhattarai: Department of Clinical and Diagnostic Sciences, School of Health Profession, University of Alabama at Birmingham, Birmingham, AL 35233, USA
- Mahak Bhargava: Department of Clinical and Diagnostic Sciences, School of Health Profession, University of Alabama at Birmingham, Birmingham, AL 35233, USA
- T. Subbanna Rekha: Department of Pathology, JSS Medical College, JSS Academy of Higher Education and Research, Mysuru, Karnataka 570009, India
- Ritu Aneja: Department of Clinical and Diagnostic Sciences, School of Health Profession, University of Alabama at Birmingham, Birmingham, AL 35233, USA
- Jun Kong: Department of Mathematics and Statistics, Georgia State University, Atlanta, GA 30303, USA; Department of Computer Science and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
12. Fatania K, Clark A, Frood R, Scarsbrook A, Al-Qaisieh B, Currie S, Nix M. Harmonisation of scanner-dependent contrast variations in magnetic resonance imaging for radiation oncology, using style-blind auto-encoders. Phys Imaging Radiat Oncol 2022; 22:115-122. [PMID: 35619643] [PMCID: PMC9127401] [DOI: 10.1016/j.phro.2022.05.005]
Abstract
Background and purpose: Magnetic Resonance Imaging (MRI) exhibits scanner-dependent contrast, which limits the generalisability of radiomics and machine learning for radiation oncology. Current deep-learning harmonisation requires paired data and retraining for new scanners, and often suffers from geometry-shift, which alters anatomical information. The aim of this study was to investigate style-blind auto-encoders for MRI harmonisation to accommodate unpaired training data, avoid geometry-shift, and harmonise data from previously unseen scanners. Materials and methods: A style-blind auto-encoder, using adversarial classification on the latent space, was designed for MRI harmonisation. The public CC359 T1-w MRI brain dataset includes six scanners (three manufacturers, two field strengths), of which five were used for training. MRI from all six scanners (including one unseen) were harmonised to a common contrast. The extent of harmonisation was quantified via Kolmogorov-Smirnov testing of the residual scanner dependence of 3D radiomic features, and compared to WhiteStripe normalisation. Anatomical content preservation was measured through the change in structural similarity index on contrast-cycling (δSSIM). Results: The percentage of radiomic features showing statistically significant scanner dependence was reduced from 41% (WhiteStripe) to 16% for white matter and from 39% to 27% for grey matter. δSSIM < 0.0025 on harmonisation and de-harmonisation indicated excellent anatomical content preservation. Conclusions: Our method harmonised MRI contrast effectively, preserved critical anatomical details at high fidelity, trained on unpaired data, and allowed zero-shot harmonisation. Robust and clinically translatable harmonisation of MRI will enable generalisable radiomic and deep-learning models for a range of applications, including radiation oncology treatment stratification, planning, and response monitoring.
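A common way to make a latent space "style-blind" in this sense is to train a scanner classifier adversarially on the auto-encoder bottleneck, for example through a gradient-reversal layer. The sketch below shows that generic mechanism under assumed dimensions and an assumed adversarial weight; it is not the authors' exact network:

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on the backward
    pass, so the encoder learns features the scanner classifier cannot use."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 128), nn.ReLU())
decoder = nn.Sequential(nn.Linear(128, 64 * 64))
scanner_clf = nn.Sequential(nn.Linear(128, 6))   # six scanners, as in CC359

x = torch.rand(8, 1, 64, 64)                     # placeholder image batch
scanner_id = torch.randint(0, 6, (8,))

z = encoder(x)
recon_loss = nn.functional.mse_loss(decoder(z), x.flatten(1))
adv_loss = nn.functional.cross_entropy(scanner_clf(GradReverse.apply(z)), scanner_id)
(recon_loss + 0.1 * adv_loss).backward()         # 0.1 is an assumed weight
```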
Affiliation(s)
- Kavi Fatania: Department of Radiology, St James University Hospital Trust, Beckett Street, Leeds LS9 7TF, UK
- Anna Clark: Leeds Cancer Centre, Bexley Wing, St James University Hospital Trust, Beckett Street, Leeds LS9 7TF, UK
- Russell Frood: Department of Radiology, St James University Hospital Trust, Beckett Street, Leeds LS9 7TF, UK
- Andrew Scarsbrook: Department of Radiology, St James University Hospital Trust, Beckett Street, Leeds LS9 7TF, UK
- Bashar Al-Qaisieh: Leeds Cancer Centre, Bexley Wing, St James University Hospital Trust, Beckett Street, Leeds LS9 7TF, UK
- Stuart Currie: Department of Radiology, St James University Hospital Trust, Beckett Street, Leeds LS9 7TF, UK
- Michael Nix: Leeds Cancer Centre, Bexley Wing, St James University Hospital Trust, Beckett Street, Leeds LS9 7TF, UK
13. Cho K, Seo J, Kyung S, Kim M, Hong GS, Kim N. Bone suppression on pediatric chest radiographs via a deep learning-based cascade model. Comput Methods Programs Biomed 2022; 215:106627. [PMID: 35032722] [DOI: 10.1016/j.cmpb.2022.106627]
Abstract
BACKGROUND AND OBJECTIVE Bone-suppression images (BSIs) of chest radiographs (CXRs) have been proven to improve the diagnosis of pulmonary diseases. To acquire BSIs, dual-energy subtraction (DES) or a deep-learning-based model trained with DES-based BSIs has been used. However, neither technique can be applied to pediatric patients owing to the harmful effects of DES. In this study, we developed a novel method for bone suppression in pediatric CXRs. METHODS First, a model was developed by training a two-channel contrastive-unpaired-image-translation network on digitally reconstructed radiographs (DRRs) of adults, which were used to generate pseudo-CXRs from computed tomography images. Second, this model was applied to 129 pediatric DRRs to generate paired training data of pseudo-pediatric CXRs. Finally, by training a U-Net with these paired data, a bone-suppression model for pediatric CXRs was developed. RESULTS The evaluation metrics were peak signal-to-noise ratio, root mean absolute error, and structural similarity index measure at the soft-tissue and bone regions of the lung. In addition, an expert radiologist scored the effectiveness of the BSIs on a scale of 1-5. The obtained score of 3.31 ± 0.48 indicates that the BSIs show homogeneous bone removal despite a subtle residual bone shadow. CONCLUSION Our method preserves pixel intensity in soft-tissue regions while subtracting bones well; this can be useful for detecting early pulmonary disease in pediatric CXRs.
Affiliation(s)
- Kyungjin Cho: Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, College of Medicine, University of Ulsan, Seoul, Republic of Korea
- Jiyeon Seo: Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, College of Medicine, University of Ulsan, Seoul, Republic of Korea
- Sunggu Kyung: Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, College of Medicine, University of Ulsan, Seoul, Republic of Korea
- Mingyu Kim: Department of Convergence Medicine, University of Ulsan College of Medicine, Asan Medical Center, 88 Olympic-Ro 43-Gil, Songpa-Gu, Seoul 05505, Republic of Korea
- Gil-Sun Hong: Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine & Asan Medical Center, 88 Olympic-ro 43-gil, Songpa-gu, Seoul 05505, Republic of Korea
- Namkug Kim: Department of Convergence Medicine, University of Ulsan College of Medicine, Asan Medical Center, 88 Olympic-Ro 43-Gil, Songpa-Gu, Seoul 05505, Republic of Korea; Department of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul 05505, Republic of Korea
14. Marzullo A, Moccia S, Catellani M, Calimeri F, De Momi E. Towards realistic laparoscopic image generation using image-domain translation. Comput Methods Programs Biomed 2021; 200:105834. [PMID: 33229016] [DOI: 10.1016/j.cmpb.2020.105834]
Abstract
Background and Objectives: Over the last decade, Deep Learning (DL) has revolutionized data analysis in many areas, including medical imaging. However, there is a bottleneck in the advancement of DL in the surgery field, namely a shortage of large-scale data, which in turn may be attributed to the lack of a structured and standardized methodology for storing and analyzing surgical images in clinical centres. Furthermore, accurate manual annotations are expensive and time-consuming. Great help can come from the synthesis of artificial images; in this context, in recent years, the use of Generative Adversarial Networks (GANs) has achieved promising results in obtaining photo-realistic images. Methods: In this study, a method for Minimally Invasive Surgery (MIS) image synthesis is proposed. To this aim, the generative adversarial network pix2pix is trained to generate paired annotated MIS images by transforming rough segmentations of surgical instruments and tissues into realistic images. An additional regularization term was added to the original optimization problem to enhance the realism of surgical tools with respect to the background. Results: Quantitative and qualitative (i.e., human-based) evaluations of the generated images were carried out to assess the effectiveness of the method. Conclusions: Experimental results show that the proposed method is able to translate MIS segmentations into realistic MIS images, which can in turn be used to augment existing data sets and help overcome the lack of useful images; this allows physicians and algorithms to take advantage of new annotated instances for their training.
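The objective described here extends the standard pix2pix generator loss (adversarial term plus weighted L1). The abstract does not give the exact form of the extra regularizer, so the sketch below illustrates one plausible choice, a mask-weighted L1 that emphasises the instrument region over the background; the weights lam and mu are assumptions:

```python
import torch
import torch.nn.functional as F

def generator_loss(disc_fake_logits, fake, target, tool_mask, lam=100.0, mu=10.0):
    """pix2pix-style generator objective: adversarial term + L1 term, plus an
    assumed extra mask-weighted L1 focused on the surgical-tool pixels."""
    adv = F.binary_cross_entropy_with_logits(
        disc_fake_logits, torch.ones_like(disc_fake_logits))
    l1 = F.l1_loss(fake, target)
    tool_l1 = (tool_mask * (fake - target).abs()).sum() / tool_mask.sum().clamp(min=1)
    return adv + lam * l1 + mu * tool_l1

fake = torch.rand(1, 3, 64, 64, requires_grad=True)   # generator output
target = torch.rand(1, 3, 64, 64)                     # real MIS image
mask = (torch.rand(1, 1, 64, 64) > 0.8).float()       # instrument pixels
d_logits = torch.randn(1, 1, 8, 8)                    # patch-discriminator output
generator_loss(d_logits, fake, target, mask).backward()
```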
Affiliation(s)
- Aldo Marzullo: Department of Mathematics and Computer Science, University of Calabria, Rende, Italy
- Sara Moccia: Department of Information Engineering, Università Politecnica delle Marche, Ancona, Italy; Department of Advanced Robotics, Istituto Italiano di Tecnologia, Genoa, Italy
- Michele Catellani: Department of Urology, European Institute of Oncology, IRCCS, Milan, Italy
- Francesco Calimeri: Department of Mathematics and Computer Science, University of Calabria, Rende, Italy
- Elena De Momi: Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
15. Haiderbhai M, Ledesma S, Lee SC, Seibold M, Fürnstahl P, Navab N, Fallavollita P. pix2xray: converting RGB images into X-rays using generative adversarial networks. Int J Comput Assist Radiol Surg 2020; 15:973-980. [PMID: 32342258] [DOI: 10.1007/s11548-020-02159-2]
Abstract
PURPOSE We propose a novel methodology for generating synthetic X-rays from 2D RGB images. The method creates accurate simulations for use in non-diagnostic visualization problems where the only input comes from a generic camera. Traditional methods are restricted to running simulation algorithms on 3D computer models. To solve this problem, we propose a method of synthetic X-ray generation using conditional generative adversarial networks (CGANs). METHODS We create a custom synthetic X-ray dataset generator that produces image triplets of X-ray images, pose images, and RGB images of natural hand poses sampled from the NYU hand pose dataset. This dataset is used to train two general-purpose CGAN networks, pix2pix and CycleGAN, as well as our novel architecture, pix2xray, which expands upon the pix2pix architecture to include the hand pose in the network. RESULTS Our results demonstrate that our pix2xray architecture outperforms both pix2pix and CycleGAN in producing higher-quality X-ray images. We measure higher similarity metrics with our approach, with pix2pix coming in second and CycleGAN producing the worst results. Our network performs better in difficult cases involving high occlusion due to occluded poses or large rotations. CONCLUSION Overall, our work establishes a baseline showing that synthetic X-rays can be simulated from 2D RGB input. We establish the need for additional data, such as the hand pose, to produce clearer results, and show that future research must focus on more specialized architectures to improve overall image clarity and structure.
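Feeding an auxiliary pose image into a pix2pix-style generator is commonly done by concatenating it as an extra input channel. The sketch below shows that generic conditioning pattern with placeholder layer sizes; it is not the actual pix2xray architecture:

```python
import torch
import torch.nn as nn

class PoseConditionedGenerator(nn.Module):
    """Toy generator conditioned on a pose image via channel concatenation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + 1, 32, 3, padding=1), nn.ReLU(),   # RGB + pose channel
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),    # synthetic X-ray
        )

    def forward(self, rgb, pose):
        return self.net(torch.cat([rgb, pose], dim=1))

g = PoseConditionedGenerator()
xray = g(torch.rand(1, 3, 128, 128), torch.rand(1, 1, 128, 128))
print(xray.shape)  # torch.Size([1, 1, 128, 128])
```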
Affiliation(s)
- Sergio Ledesma: Faculty of Health Sciences, University of Ottawa, Ottawa, ON, Canada; School of Engineering, University of Guanajuato, Salamanca, GTO, Mexico
- Sing Chun Lee: Computer Aided Medical Procedures, Department of Computer Science, Johns Hopkins University, Baltimore, USA
- Matthias Seibold: Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zürich, Zürich, Switzerland; Computer Aided Medical Procedures, Technical University of Munich, Munich, Germany
- Phillipp Fürnstahl: Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zürich, Zürich, Switzerland
- Nassir Navab: Computer Aided Medical Procedures, Department of Computer Science, Johns Hopkins University, Baltimore, USA; Computer Aided Medical Procedures, Technical University of Munich, Munich, Germany
- Pascal Fallavollita: Faculty of Engineering, University of Ottawa, Ottawa, ON, Canada; Faculty of Health Sciences, University of Ottawa, Ottawa, ON, Canada
16. Jafari MH, Girgis H, Van Woudenberg N, Moulson N, Luong C, Fung A, Balthazaar S, Jue J, Tsang M, Nair P, Gin K, Rohling R, Abolmaesumi P, Tsang T. Cardiac point-of-care to cart-based ultrasound translation using constrained CycleGAN. Int J Comput Assist Radiol Surg 2020; 15:877-886. [PMID: 32314226] [DOI: 10.1007/s11548-020-02141-y]
Abstract
PURPOSE The emerging market of handheld cardiac ultrasound (US) is on the rise. Despite the advantages in ease of access and lower cost, a gap in image quality can still be observed between the echocardiography (echo) data captured by point-of-care ultrasound (POCUS) and conventional cart-based US, which limits the further adoption of POCUS. In this work, we present a machine learning solution based on recent advances in adversarial training to investigate the feasibility of translating POCUS echo images to the quality level of high-end cart-based US systems. METHODS We propose a constrained cycle-consistent generative adversarial architecture for unpaired translation of cardiac POCUS to cart-based US data. We impose a structured shape-wise regularization via a critic segmentation network to preserve the underlying shape of the heart during quality translation. The proposed deep transfer model is constrained to the anatomy of the left ventricle (LV) in apical two-chamber (AP2) echo views. RESULTS A total of 1089 echo studies from 841 patients are used in this study. The AP2 frames were captured by POCUS (Philips Lumify and Clarius) and cart-based (Philips iE33 and Vivid E9) US machines. The quality-translation dataset comprises 441 echo studies from 395 patients; data from both POCUS and cart-based systems of the same patient were available in 122 cases. The deep quality-transfer model is integrated into a pipeline for an automated cardiac evaluation task, namely segmentation of the LV in the AP2 view. By transferring the low-quality POCUS data to the cart-based US domain, significant average improvements of 30% and 34 mm are obtained in the LV segmentation Dice score and Hausdorff distance, respectively. CONCLUSION This paper presents the feasibility of a machine learning solution to transform the image quality of POCUS data to that of high-end cart-based systems. The experiments show that by leveraging quality translation through the proposed constrained adversarial training, the accuracy of automatic segmentation with POCUS data can be improved.
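The LV segmentation evaluation reported above uses the Dice similarity coefficient, which is straightforward to compute; a minimal NumPy sketch on synthetic placeholder masks:

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

a = np.zeros((64, 64), dtype=bool); a[16:48, 16:48] = True   # "ground truth" LV
b = np.zeros((64, 64), dtype=bool); b[20:52, 20:52] = True   # "predicted" LV
print(round(dice_score(a, b), 3))
```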
Affiliation(s)
- Hany Girgis: The University of British Columbia, Vancouver, Canada; Vancouver General Hospital, Vancouver, Canada
- Nathaniel Moulson: The University of British Columbia, Vancouver, Canada; Vancouver General Hospital, Vancouver, Canada
- Christina Luong: The University of British Columbia, Vancouver, Canada; Vancouver General Hospital, Vancouver, Canada
- Andrea Fung: The University of British Columbia, Vancouver, Canada; Vancouver General Hospital, Vancouver, Canada
- Shane Balthazaar: The University of British Columbia, Vancouver, Canada; Vancouver General Hospital, Vancouver, Canada
- John Jue: The University of British Columbia, Vancouver, Canada; Vancouver General Hospital, Vancouver, Canada
- Micheal Tsang: The University of British Columbia, Vancouver, Canada; Vancouver General Hospital, Vancouver, Canada
- Parvathy Nair: The University of British Columbia, Vancouver, Canada; Vancouver General Hospital, Vancouver, Canada
- Ken Gin: The University of British Columbia, Vancouver, Canada; Vancouver General Hospital, Vancouver, Canada
- Teresa Tsang: The University of British Columbia, Vancouver, Canada; Vancouver General Hospital, Vancouver, Canada
17. Yao K, Rochman ND, Sun SX. CTRL - a label-free artificial intelligence method for dynamic measurement of single-cell volume. J Cell Sci 2020; 133:jcs.245050. [PMID: 32094267] [DOI: 10.1242/jcs.245050]
Abstract
Measuring the physical size of a cell is valuable for understanding cell growth control. Current single-cell volume measurement methods for mammalian cells are labor-intensive, inflexible, and can cause cell damage. We introduce CTRL: Cell Topography Reconstruction Learner, a label-free technique combining deep learning and the fluorescence exclusion method to reconstruct cell topography and estimate mammalian cell volume from differential interference contrast (DIC) microscopy images alone. The method achieves quantitative accuracy, requires minimal sample preparation, and applies to a wide range of biological and experimental conditions. It can track single-cell volume dynamics over arbitrarily long time periods. For HT1080 fibrosarcoma cells, we observe that cell size at division is positively correlated with cell size at birth (sizer), and that cell-size fluctuations are noticeably reduced at 25% completion of the cell cycle.
Affiliation(s)
- Kai Yao: Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA; Institute for NanoBioTechnology, Johns Hopkins University, Baltimore, MD 21218, USA
- Nash D Rochman: Institute for NanoBioTechnology, Johns Hopkins University, Baltimore, MD 21218, USA; Department of Chemical and Biomolecular Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Sean X Sun: Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA; Institute for NanoBioTechnology, Johns Hopkins University, Baltimore, MD 21218, USA; Physical Sciences in Oncology Center (PSOC), Johns Hopkins University, Baltimore, MD 21218, USA
18. Tmenova O, Martin R, Duong L. CycleGAN for style transfer in X-ray angiography. Int J Comput Assist Radiol Surg 2019; 14:1785-1794. [PMID: 31286396] [DOI: 10.1007/s11548-019-02022-z]
Abstract
PURPOSE We aim to generate angiograms for various vascular structures as a means of data augmentation in learning tasks. The task is to enhance the realism of vessel images generated from an anatomically realistic cardiorespiratory simulator so that they look like real angiographies. METHODS The enhancement is performed by applying the CycleGAN deep network to transfer the style of real angiograms, acquired during percutaneous interventions, onto a dataset of realistically simulated arteries. RESULTS Cycle consistency was evaluated by comparing an input simulated image with the one obtained after two cycles of image translation. An average structural similarity (SSIM) of 0.948 was obtained on our datasets. Vessel preservation was measured by comparing segmentations of an input image and its corresponding enhanced image using the Dice coefficient. CONCLUSIONS We propose an application of the CycleGAN deep network for enhancing artificial data as an alternative to classical data-augmentation techniques for medical applications, with a particular focus on angiogram generation. We discuss success and failure cases, explaining the conditions for realistic data augmentation that respects both the complex physiology of arteries and the various patterns and textures generated by X-ray angiography.
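The two-cycle evaluation above rests on CycleGAN's cycle-consistency term, which requires that an image mapped to the other domain and back returns to itself. A minimal sketch of that loss, with identity functions standing in for the two trained generators and lam = 10 as the conventional weight:

```python
import torch
import torch.nn.functional as F

def cycle_consistency_loss(G, F_inv, x_sim, x_real, lam=10.0):
    """CycleGAN cycle term. G: simulated -> real style, F_inv: real -> simulated.
    An image translated to the other domain and back should match the original."""
    loss_sim = F.l1_loss(F_inv(G(x_sim)), x_sim)     # forward cycle
    loss_real = F.l1_loss(G(F_inv(x_real)), x_real)  # backward cycle
    return lam * (loss_sim + loss_real)

# Identity "generators" stand in for real networks in this toy check.
identity = lambda t: t
x_sim, x_real = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
print(cycle_consistency_loss(identity, identity, x_sim, x_real))  # tensor(0.)
```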
Affiliation(s)
- Oleksandra Tmenova: Department of Software and IT Engineering, École de technologie supérieure, 1100 Notre-Dame W., Montreal, Canada; Taras Shevchenko National University of Kyiv, Volodymyrska St, 60, Kyiv, Ukraine
- Rémi Martin: Department of Software and IT Engineering, École de technologie supérieure, 1100 Notre-Dame W., Montreal, Canada
- Luc Duong: Department of Software and IT Engineering, École de technologie supérieure, 1100 Notre-Dame W., Montreal, Canada