1. Zhou W, Ji J, Cui W, Wang Y, Yi Y. Unsupervised Domain Adaptation Fundus Image Segmentation via Multi-Scale Adaptive Adversarial Learning. IEEE J Biomed Health Inform 2024;28:5792-5803. PMID: 38090822. DOI: 10.1109/jbhi.2023.3342422.
Abstract
Segmentation of the Optic Disc (OD) and Optic Cup (OC) is crucial for the early detection and treatment of glaucoma. Despite the strides made in deep neural networks, incorporating trained segmentation models for clinical application remains challenging due to domain shifts arising from disparities in fundus images across different healthcare institutions. To tackle this challenge, this study introduces an innovative unsupervised domain adaptation technique called Multi-scale Adaptive Adversarial Learning (MAAL), which consists of three key components. The Multi-scale Wasserstein Patch Discriminator (MWPD) module is designed to extract domain-specific features at multiple scales, enhancing domain classification performance and offering valuable guidance for the segmentation network. To further enhance model generalizability and explore domain-invariant features, we introduce the Adaptive Weighted Domain Constraint (AWDC) module. During training, this module dynamically assigns varying weights to different scales, allowing the model to adaptively focus on informative features. Furthermore, the Pixel-level Feature Enhancement (PFE) module enhances low-level features extracted at shallow network layers by incorporating refined high-level features. This integration ensures the preservation of domain-invariant information, effectively addressing domain variation and mitigating the loss of global features. Two publicly accessible fundus image databases are employed to demonstrate the effectiveness of our MAAL method in mitigating model degradation and improving segmentation performance. The achieved results outperform current state-of-the-art (SOTA) methods in both OD and OC segmentation.
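The MWPD component can be pictured as a PatchGAN-style critic applied at several image scales with a Wasserstein objective. The sketch below is an illustrative reconstruction, not the paper's implementation: layer choices, channel counts, and the loss sign convention are assumptions, and a Lipschitz constraint (e.g., gradient penalty) is omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def patch_critic(in_ch):
    # PatchGAN-style critic without normalization layers, as usual for Wasserstein losses
    return nn.Sequential(
        nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(256, 1, 4, stride=1, padding=1),  # one unbounded score per patch
    )

class MultiScalePatchCritic(nn.Module):
    """Hypothetical stand-in for a multi-scale Wasserstein patch discriminator."""
    def __init__(self, in_ch=2, n_scales=3):  # in_ch=2: OD and OC probability maps
        super().__init__()
        self.critics = nn.ModuleList([patch_critic(in_ch) for _ in range(n_scales)])

    def forward(self, x):
        scores = []
        for critic in self.critics:
            scores.append(critic(x).mean())  # mean Wasserstein score at this scale
            x = F.avg_pool2d(x, 2)           # move to the next, coarser scale
        return scores

def critic_loss(critic, src_pred, tgt_pred):
    # push source-domain scores up and target-domain scores down at every scale
    return sum(t - s for s, t in zip(critic(src_pred), critic(tgt_pred)))
```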
2. Jiang X, Yang Y, Su T, Xiao K, Lu L, Wang W, Guo C, Shao L, Wang M, Jiang D. Unsupervised domain adaptation based on feature and edge alignment for femur X-ray image segmentation. Comput Med Imaging Graph 2024;116:102407. PMID: 38880065. DOI: 10.1016/j.compmedimag.2024.102407.
Abstract
The gold standard for diagnosing osteoporosis is bone mineral density (BMD) measurement by dual-energy X-ray absorptiometry (DXA). However, various factors in the imaging process cause domain shifts in DXA images, which lead to incorrect bone segmentation. Research shows that poor bone segmentation is one of the prime reasons for inaccurate BMD measurement, severely affecting diagnosis and treatment planning for osteoporosis. In this paper, we propose a Multi-feature Joint Discriminative Domain Adaptation (MDDA) framework to improve segmentation performance and the generalization of the network on domain-shifted images. The proposed method learns domain-invariant features between the source and target domains from the perspectives of multi-scale features and edges, and is evaluated on real data from multi-center datasets. The source-domain feature prior and the edge prior enable MDDA to achieve better domain adaptation performance and generalization than other state-of-the-art methods. It also performs well in domain adaptation tasks on small datasets, even when only 5 or 10 images are used. MDDA thus provides an accurate bone segmentation tool for BMD measurement based on DXA imaging.
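The paper does not spell out its edge-alignment loss, but a differentiable edge prior can be sketched with fixed Sobel kernels. The function names and the exact loss form below are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

def sobel_edges(img):
    """Differentiable Sobel gradient magnitude; img has shape (N, 1, H, W)."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)  # vertical-gradient kernel
    gx = F.conv2d(img, kx.to(img), padding=1)
    gy = F.conv2d(img, ky.to(img), padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

def edge_prior_loss(pred_mask, image):
    # edge prior: predicted bone boundaries should coincide with strong image edges
    return F.l1_loss(sobel_edges(pred_mask), sobel_edges(image))
```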
Affiliation(s)
- Xiaoming Jiang
- Chongqing Engineering Research Center of Medical Electronics and Information Technology, Chongqing University of Posts and Telecommunications, Chongqing, China
- Yongxin Yang
- Chongqing Engineering Research Center of Medical Electronics and Information Technology, Chongqing University of Posts and Telecommunications, Chongqing, China
- Tong Su
- Department of Sports Medicine, Peking University Third Hospital, Institute of Sports Medicine of Peking University, Beijing Key Laboratory of Sports Injuries, Engineering Research Center of Sports Trauma Treatment Technology and Devices, Ministry of Education, No. 49 North Garden Road, Beijing, China
- Kai Xiao
- Department of Foot and Ankle Surgery, Wuhan Fourth Hospital, Wuhan, Hubei, China
- LiDan Lu
- Chongqing Engineering Research Center of Medical Electronics and Information Technology, Chongqing University of Posts and Telecommunications, Chongqing, China
- Wei Wang
- Chongqing Engineering Research Center of Medical Electronics and Information Technology, Chongqing University of Posts and Telecommunications, Chongqing, China
- Changsong Guo
- National Health Commission Capacity Building and Continuing Education Center, Beijing, China
- Lizhi Shao
- Chinese Academy of Sciences Key Laboratory of Molecular Imaging, Institute of Automation, Beijing 100190, China
- Mingjing Wang
- School of Data Science and Artificial Intelligence, Wenzhou University of Technology, Wenzhou 325000, China
- Dong Jiang
- Department of Sports Medicine, Peking University Third Hospital, Institute of Sports Medicine of Peking University, Beijing Key Laboratory of Sports Injuries, Engineering Research Center of Sports Trauma Treatment Technology and Devices, Ministry of Education, No. 49 North Garden Road, Beijing, China
3. Shu L, Li M, Guo X, Chen Y, Pu X, Lin C. Isocentric fixed angle irradiation-based DRR: a novel approach to enhance x-ray and CT image registration. Phys Med Biol 2024;69:115032. PMID: 38684168. DOI: 10.1088/1361-6560/ad450a.
Abstract
Objective. Digitally reconstructed radiography (DRR) plays an important role in the registration of intraoperative x-ray and preoperative CT images. However, existing DRR algorithms often neglect the critical isocentric fixed angle irradiation (IFAI) principle in C-arm imaging, resulting in inaccurate simulation of x-ray images. This limitation degrades registration algorithms that rely on DRR image libraries or employ DRR images (DRRs) to train neural network models. To address this issue, we propose a novel IFAI-based DRR method that accurately captures the true projection transformation during x-ray imaging of the human body. Approach. By strictly adhering to the IFAI principle and utilizing known parameters from intraoperative x-ray images paired with CT scans, our method successfully simulates the real projection transformation and generates DRRs that closely resemble actual x-ray images. Main result. Experimental results validate the effectiveness of the IFAI-based DRR method by successfully registering intraoperative x-ray images with preoperative CT images from multiple patients who underwent thoracic endovascular aortic procedures. Significance. The proposed IFAI-based DRR method enhances the quality of DRR images, significantly accelerates the construction of DRR image libraries, and thereby improves the performance of x-ray and CT image registration. It also generalizes to the registration of CT images with x-ray images produced by large C-arm devices.
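For intuition, an isocentric C-arm projection can be written as a pinhole camera whose extrinsics rotate about the isocenter. The angle conventions, parameter names, and axis choices below are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def carm_projection(alpha_deg, beta_deg, sad, sid, px, det_shape):
    """3x4 pinhole projection for an isocentric C-arm (simplified convention).

    alpha/beta: rotation and angulation about the isocenter (degrees);
    sad: source-to-axis distance (mm); sid: source-to-image distance (mm);
    px: detector pixel pitch (mm); det_shape: (width, height) in pixels.
    """
    a, b = np.deg2rad(alpha_deg), np.deg2rad(beta_deg)
    Ra = np.array([[np.cos(a), -np.sin(a), 0.],
                   [np.sin(a),  np.cos(a), 0.],
                   [0., 0., 1.]])
    Rb = np.array([[1., 0., 0.],
                   [0., np.cos(b), -np.sin(b)],
                   [0., np.sin(b),  np.cos(b)]])
    R = Rb @ Ra                                   # world -> camera rotation about isocenter
    t = np.array([[0.], [0.], [sad]])             # source sits sad away from the isocenter
    K = np.array([[sid / px, 0., det_shape[0] / 2],
                  [0., sid / px, det_shape[1] / 2],
                  [0., 0., 1.]])                  # intrinsics: focal length sid, centered axis
    return K @ np.hstack([R, t])                  # maps CT (isocenter) coords to pixels

# p = carm_projection(30, 0, 700, 1200, 0.3, (1024, 1024)) @ np.array([0, 0, 0, 1.])
# pixel = p[:2] / p[2]   # the isocenter projects to the detector center
```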
Affiliation(s)
- Lixia Shu
- Beijing Institute of Heart, Lung and Blood Vessel Diseases, Beijing Anzhen Hospital, Capital Medical University, Beijing, People's Republic of China
- Meng Li
- Beijing Institute of Heart, Lung and Blood Vessel Diseases, Beijing Anzhen Hospital, Capital Medical University, Beijing, People's Republic of China
- Xi Guo
- The Large Vessel Center, Beijing Anzhen Hospital, Capital Medical University, Beijing, People's Republic of China
- Yu Chen
- The Large Vessel Center, Beijing Anzhen Hospital, Capital Medical University, Beijing, People's Republic of China
- Xin Pu
- The Large Vessel Center, Beijing Anzhen Hospital, Capital Medical University, Beijing, People's Republic of China
- Changyan Lin
- Beijing Institute of Heart, Lung and Blood Vessel Diseases, Beijing Anzhen Hospital, Capital Medical University, Beijing, People's Republic of China
4. Krishna A, Yenneti S, Wang G, Mueller K. Image factory: A method for synthesizing novel CT images with anatomical guidance. Med Phys 2024;51:3464-3479. PMID: 38043097. DOI: 10.1002/mp.16864.
Abstract
BACKGROUND Deep learning in medical applications is limited by the low availability of large labeled, annotated, or segmented training datasets. With insufficient data available for model training, these networks cannot learn the fine nuances of the space of possible images in a given medical domain, leading to the possible suppression of important diagnostic features and hence making these deep learning systems suboptimal in their performance and vulnerable to adversarial attacks. PURPOSE We formulate a framework to address this lack of labeled data. We test the formulation in the computed tomography (CT) image domain and present an approach that can synthesize large sets of novel CT images at high resolution across the full Hounsfield unit (HU) range. METHODS Our method requires only a small annotated dataset of lung CT from 30 patients (available online at the TCIA) and a large non-annotated dataset of high-resolution CT images from 14k patients (received from NIH, not publicly available). It then converts the small annotated dataset into a large annotated dataset, using a sequence of steps including texture learning via StyleGAN, label learning via U-Net, and semi-supervised learning via CycleGAN/Pixel-to-Pixel (P2P) architectures. The large annotated dataset so generated can then be used to train deep learning networks for medical applications. It can also be used to synthesize CT images with varied anatomies that were nonexistent in either of the input datasets, enriching the dataset even further. RESULTS We demonstrate our framework via lung CT scan synthesis along with novel generated annotations and compare it with other state-of-the-art generative models that only produce images without annotations. We evaluate the framework's effectiveness via a visual Turing test with the help of doctors and radiologists. CONCLUSIONS We gain the capability of generating an unlimited amount of annotated CT images. Our approach works for all HU windows with minimal degradation in anatomical plausibility and hence could serve as a general-purpose framework for annotated data augmentation for deep learning applications in medical imaging.
Affiliation(s)
- Arjun Krishna
- Computer Science Department, Stony Brook University, Stony Brook, New York, USA
- Shanmukha Yenneti
- Computer Science Department, Stony Brook University, Stony Brook, New York, USA
- Ge Wang
- Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, New York, USA
- Klaus Mueller
- Computer Science Department, Stony Brook University, Stony Brook, New York, USA
5. Yang X, Chin BB, Silosky M, Wehrend J, Litwiller DV, Ghosh D, Xing F. Learning Without Real Data Annotations to Detect Hepatic Lesions in PET Images. IEEE Trans Biomed Eng 2024;71:679-688. PMID: 37708016. DOI: 10.1109/tbme.2023.3315268.
Abstract
OBJECTIVE Deep neural networks have recently been applied to lesion identification in fluorodeoxyglucose (FDG) positron emission tomography (PET) images, but they typically rely on a large amount of well-annotated data for model training. This is extremely difficult to achieve for neuroendocrine tumors (NETs), because of the low incidence of NETs and the expense of lesion annotation in PET images. The objective of this study is to design a novel, adaptable deep learning method, which uses no real lesion annotations but instead low-cost, list-mode simulated data, for hepatic lesion detection in real-world clinical NET PET images. METHODS We first propose a region-guided generative adversarial network (RG-GAN) for lesion-preserved image-to-image translation. Then, we design a specific data augmentation module for our list-mode simulated data and incorporate this module into the RG-GAN to improve model training. Finally, we combine the RG-GAN, the data augmentation module, and a lesion detection neural network into a unified framework for joint-task learning to adaptively identify lesions in real-world PET data. RESULTS The proposed method outperforms recent state-of-the-art lesion detection methods in real clinical 68Ga-DOTATATE PET images and produces very competitive performance with the target model that is trained with real lesion annotations. CONCLUSION With RG-GAN modeling and specific data augmentation, we can obtain good lesion detection performance without using any real data annotations. SIGNIFICANCE This study introduces an adaptable deep learning method for hepatic lesion identification in NETs, which can significantly reduce human effort for data annotation and improve model generalizability for lesion detection with PET imaging.
6. Zhang J, Huang X, Liu Y, Han Y, Xiang Z. GAN-based medical image small region forgery detection via a two-stage cascade framework. PLoS One 2024;19:e0290303. PMID: 38166011. PMCID: PMC10760893. DOI: 10.1371/journal.pone.0290303.
Abstract
Using generative adversarial networks (GANs) (Goodfellow et al., 2014) for data augmentation of medical images is significantly helpful for many computer-aided diagnosis (CAD) tasks. However, a new class of GAN-based automated tampering attacks, such as CT-GAN (Mirsky et al., 2019), has emerged: they can inject lung cancer lesions into CT scans or remove them. Because the tampered region may account for less than 1% of the original image, even state-of-the-art methods struggle to detect the traces of such tampering. This paper proposes a two-stage cascade framework to detect GAN-based small-region forgery in medical images, such as that produced by CT-GAN. In the local detection stage, we train the detector network on small sub-images so that interference from authentic regions does not affect the detector. We use depthwise separable convolutions and residual connections to prevent the detector from over-fitting, and an attention mechanism to enhance its ability to find forged regions. The detection results of all sub-images in the same image are combined into a heatmap. In the global classification stage, the gray-level co-occurrence matrix (GLCM) better extracts features of the heatmap. Because the shape and size of the tampered region are uncertain, we use hyperplanes in an infinite-dimensional space for classification. Our method can classify whether a CT image has been tampered with and locate the tampered position. Extensive experiments show that our method achieves excellent performance compared with state-of-the-art detection methods.
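The global stage (GLCM texture features, then separation by hyperplanes in an infinite-dimensional space, which is what a kernel SVM computes) can be sketched with standard libraries. Treat the feature choices and SVM hyperparameters below as assumptions rather than the paper's exact configuration.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(heatmap_u8):
    """Texture descriptors of a uint8 heatmap from its gray-level co-occurrence matrix."""
    glcm = graycomatrix(heatmap_u8, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# an RBF-kernel SVM separates classes with a hyperplane in an infinite-dimensional space
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
# X = np.stack([glcm_features(h) for h in heatmaps]); clf.fit(X, tampered_labels)
```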
Affiliation(s)
- Jianyi Zhang
- Beijing Electronic Science and Technology Institute, Beijing, China
- University of Louisiana at Lafayette, Lafayette, Louisiana, United States of America
- Xuanxi Huang
- Beijing Electronic Science and Technology Institute, Beijing, China
- Yaqi Liu
- Beijing Electronic Science and Technology Institute, Beijing, China
- Yuyang Han
- Beijing Electronic Science and Technology Institute, Beijing, China
- Zixiao Xiang
- Beijing Electronic Science and Technology Institute, Beijing, China
7. Martin R, Segars P, Samei E, Miró J, Duong L. Unsupervised synthesis of realistic coronary artery X-ray angiogram. Int J Comput Assist Radiol Surg 2023;18:2329-2338. PMID: 37336801. PMCID: PMC10786317. DOI: 10.1007/s11548-023-02982-3.
Abstract
PURPOSE Medical image analysis suffers from a sparsity of the annotated data necessary for learning-based models. Cardiorespiratory simulators have been developed to counter the lack of data; however, the resulting data often lack realism. Hence, the proposed method aims to synthesize realistic and fully customizable angiograms of coronary arteries for the training of learning-based biomedical tasks, for cardiologists performing interventions, and for cardiologist trainees. METHODS 3D models of coronary arteries are generated with a fully customizable, realistic cardiorespiratory simulator. The transfer of X-ray angiography style to simulator-generated images is performed using a new vessel-specific adaptation of the CycleGAN model, paired with a vesselness-based loss function designed as a vessel-specific structural integrity constraint. RESULTS Validation is performed both on the style of the images and on the preservation of the shape of the arteries. The results show a PSNR of 14.125, an SSIM of 0.898, and an overlap of 89.5% measured with the Dice coefficient. CONCLUSION We proposed a novel fluoroscopy-based style transfer method for enhancing the realism of simulated coronary artery angiograms. The results show that the proposed model can accurately transfer the style of X-ray angiograms to the simulations while keeping the integrity of the structures of interest (i.e., the topology of the coronary arteries).
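The three reported validation metrics are all standard and easy to reproduce. A minimal sketch with scikit-image, assuming grayscale images and binary vessel masks as inputs:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2 * (a & b).sum() / (a.sum() + b.sum())

def style_transfer_scores(real, fake, mask_real, mask_fake):
    rng = real.max() - real.min()
    return (peak_signal_noise_ratio(real, fake, data_range=rng),
            structural_similarity(real, fake, data_range=rng),
            dice(mask_real, mask_fake))
```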
Affiliation(s)
- Rémi Martin
- Department of Software and Information Technology Engineering, École de Technologie Supérieure, 1100 Notre-Dame, Montréal, QC, H3C 1K3, Canada
- Paul Segars
- Carl E. Ravin Advanced Imaging Laboratories, Department of Radiology, Duke University Medical Center, 2424 Erwin Road, Durham, NC, 27705, USA
- Ehsan Samei
- Carl E. Ravin Advanced Imaging Laboratories, Department of Radiology, Duke University Medical Center, 2424 Erwin Road, Durham, NC, 27705, USA
- Joaquim Miró
- Department of Pediatrics, CHU Sainte-Justine, 3175 Chem. de la Côte-Sainte-Catherine, Montréal, QC, H3T 1C5, Canada
- Luc Duong
- Department of Software and Information Technology Engineering, École de Technologie Supérieure, 1100 Notre-Dame, Montréal, QC, H3C 1K3, Canada
8. Gerard SE, Chaudhary MFA, Herrmann J, Christensen GE, Estépar RSJ, Reinhardt JM, Hoffman EA. Direct estimation of regional lung volume change from paired and single CT images using residual regression neural network. Med Phys 2023;50:5698-5714. PMID: 36929883. PMCID: PMC10743098. DOI: 10.1002/mp.16365.
Abstract
BACKGROUND Chest computed tomography (CT) enables characterization of pulmonary diseases by producing high-resolution and high-contrast images of the intricate lung structures. Deformable image registration is used to align chest CT scans at different lung volumes, yielding estimates of local tissue expansion and contraction. PURPOSE We investigated the utility of deep generative models for directly predicting local tissue volume change from lung CT images, bypassing computationally expensive iterative image registration and providing a method that can be utilized in scenarios where either one or two CT scans are available. METHODS A residual regression convolutional neural network, called Reg3DNet+, is proposed for directly regressing high-resolution images of local tissue volume change (i.e., the Jacobian) from CT images. Image registration was performed between lung volumes at total lung capacity (TLC) and functional residual capacity (FRC) using a tissue mass- and structure-preserving registration algorithm. The Jacobian image was calculated from the registration-derived displacement field and used as the ground truth for local tissue volume change. Four separate Reg3DNet+ models were trained to predict Jacobian images using a multifactorial study design to compare the effects of network input (i.e., single image vs. paired images) and output space (i.e., FRC vs. TLC). The models were trained and evaluated on image datasets from the COPDGene study and evaluated against the registration-derived Jacobian images using local, regional, and global metrics. RESULTS Statistical analysis revealed that both factors (network input and output space) were significant determinants of the evaluation metrics. Paired-input models performed better than single-input models, and model performance was better in the FRC output space than in TLC. The mean structural similarity index for paired-input models was 0.959 and 0.956 for the FRC and TLC output spaces, respectively, and for single-input models was 0.951 and 0.937. Global metrics demonstrated correlation between the registration-derived and predicted Jacobian means: the coefficient of determination (r²) for paired-input models was 0.974 and 0.938 for the FRC and TLC output spaces, respectively, and for single-input models was 0.598 and 0.346. After correcting for effort, registration-derived lobar volume change was strongly correlated with the predicted lobar volume change: for paired-input models r² was 0.899 for both FRC and TLC output spaces, and for single-input models r² was 0.803 and 0.862, respectively. CONCLUSIONS Convolutional neural networks can be used to directly predict local tissue mechanics, eliminating the need for computationally expensive image registration. Networks that use paired CT images acquired at TLC and FRC allow for more accurate prediction of local tissue expansion than networks that use a single image. Networks that require only a single input image still show promising results, particularly after correcting for effort, and allow for local tissue expansion estimation when multiple CT scans are not available. For single-input networks, the FRC image is more predictive of local tissue volume change than the TLC image.
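The ground-truth "Jacobian image" is the determinant of the Jacobian of the registration transform phi(x) = x + u(x); given a displacement field it can be computed directly, as in this sketch (the (3, Z, Y, X) array layout is an assumption):

```python
import numpy as np

def jacobian_determinant(disp, spacing=(1.0, 1.0, 1.0)):
    """Local volume change of phi(x) = x + u(x); disp has shape (3, Z, Y, X)."""
    J = np.empty(disp.shape[1:] + (3, 3))
    for i in range(3):                       # du_i / dx_j on the voxel grid
        grads = np.gradient(disp[i], *spacing)
        for j in range(3):
            J[..., i, j] = grads[j] + (1.0 if i == j else 0.0)  # add the identity
    return np.linalg.det(J)                  # > 1 means expansion, < 1 contraction
```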
Affiliation(s)
- Sarah E. Gerard
- Roy J. Carver Department of Biomedical Engineering, University of Iowa, Iowa City, Iowa, USA
- Department of Radiology, University of Iowa, Iowa City, Iowa, USA
- Jacob Herrmann
- Roy J. Carver Department of Biomedical Engineering, University of Iowa, Iowa City, Iowa, USA
- Gary E. Christensen
- Department of Electrical and Computer Engineering, University of Iowa, Iowa City, Iowa, USA
- Department of Radiation Oncology, University of Iowa, Iowa City, Iowa, USA
- Joseph M. Reinhardt
- Roy J. Carver Department of Biomedical Engineering, University of Iowa, Iowa City, Iowa, USA
- Department of Radiology, University of Iowa, Iowa City, Iowa, USA
- Eric A. Hoffman
- Roy J. Carver Department of Biomedical Engineering, University of Iowa, Iowa City, Iowa, USA
- Department of Radiology, University of Iowa, Iowa City, Iowa, USA
9. Aubert B, Cresson T, de Guise JA, Vazquez C. X-Ray to DRR Images Translation for Efficient Multiple Objects Similarity Measures in Deformable Model 3D/2D Registration. IEEE Trans Med Imaging 2023;42:897-909. PMID: 36318556. DOI: 10.1109/tmi.2022.3218568.
Abstract
The robustness and accuracy of intensity-based 3D/2D registration of a 3D model on planar X-ray image(s) depend on the quality of the image correspondences between the digitally reconstructed radiographs (DRRs) generated from the 3D models (the varying image) and the X-ray images (the fixed target). While much effort may be devoted to generating realistic DRRs that are similar to real X-rays (using complex X-ray simulation, adding density information to 3D models, etc.), significant differences remain between DRRs and real X-ray images. Differences such as the presence of adjacent or superimposed soft tissue, bony, or foreign structures lead to image matching difficulties and decrease 3D/2D registration performance. In the proposed method, the X-ray images are converted into DRR images using GAN-based cross-modality image-to-image translation. With this added prior step of X-ray-to-DRR translation, standard similarity measures become efficient even when using simple and fast DRR projection. For two images to match, they must belong to the same image domain and essentially contain the same kind of information. The X-ray-to-DRR translation also addresses the well-known issue of registering an object in a scene composed of multiple objects, by separating superimposed and/or adjacent objects to avoid mismatching across similar structures. We applied the proposed method to the 3D/2D fine registration of vertebra deformable models to biplanar radiographs of the spine, and showed that the X-ray-to-DRR translation enhances registration results by increasing the capture range and decreasing dependence on the choice of similarity measure, since the multi-modal registration becomes mono-modal.
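The payoff of translating X-rays into the DRR domain is that simple mono-modal similarity measures become usable; normalized cross-correlation is a typical example (a sketch, since the paper does not prescribe this exact measure):

```python
import numpy as np

def ncc(fixed, moving):
    """Normalized cross-correlation in [-1, 1] between two images of equal shape."""
    f = fixed - fixed.mean()
    m = moving - moving.mean()
    return float((f * m).sum() / (np.sqrt((f ** 2).sum() * (m ** 2).sum()) + 1e-12))
```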
10. Karimi D, Gholipour A. Improving Calibration and Out-of-Distribution Detection in Deep Models for Medical Image Segmentation. IEEE Trans Artif Intell 2023;4:383-397. PMID: 37868336. PMCID: PMC10586223. DOI: 10.1109/tai.2022.3159510.
Abstract
Convolutional Neural Networks (CNNs) have proved to be powerful medical image segmentation models. In this study, we address some of the main unresolved issues regarding these models. Specifically, training of these models on small medical image datasets is still challenging, with many studies promoting techniques such as transfer learning. Moreover, these models are infamous for producing over-confident predictions and for failing silently when presented with out-of-distribution (OOD) test data. In this paper, for improving prediction calibration we advocate for multi-task learning, i.e., training a single model on several different datasets, spanning different organs of interest and different imaging modalities. We show that multi-task learning can significantly improve model confidence calibration. For OOD detection, we propose a novel method based on spectral analysis of CNN feature maps. We show that different datasets, representing different imaging modalities and/or different organs of interest, have distinct spectral signatures, which can be used to identify whether or not a test image is similar to the images used for training. We show that our proposed method is more accurate than several competing methods, including methods based on prediction uncertainty and image classification.
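The OOD signal here comes from spectral analysis of CNN feature maps. One plausible reading, a radially averaged power spectrum per feature map, is sketched below; the binning details and the distance used for comparison are assumptions.

```python
import numpy as np

def spectral_signature(fmap, n_bins=32):
    """Radially averaged power spectrum of a single 2D feature map (H, W)."""
    power = np.abs(np.fft.fftshift(np.fft.fft2(fmap))) ** 2
    h, w = power.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h / 2, x - w / 2)
    edges = np.linspace(0, r.max() + 1e-6, n_bins + 1)
    idx = np.clip(np.digitize(r.ravel(), edges) - 1, 0, n_bins - 1)
    total = np.bincount(idx, weights=power.ravel(), minlength=n_bins)
    count = np.bincount(idx, minlength=n_bins)
    return total / np.maximum(count, 1)

# at test time, compare a sample's signature to training-set signatures,
# e.g., with a cosine or Euclidean distance, and flag large distances as OOD
```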
Affiliation(s)
- Davood Karimi
- Department of Radiology, Boston Children's Hospital, and Harvard Medical School, Boston, Massachusetts, USA
- Ali Gholipour
- Department of Radiology, Boston Children's Hospital, and Harvard Medical School, Boston, Massachusetts, USA
11. Zhao Z, Zhou F, Xu K, Zeng Z, Guan C, Zhou SK. LE-UDA: Label-Efficient Unsupervised Domain Adaptation for Medical Image Segmentation. IEEE Trans Med Imaging 2023;42:633-646. PMID: 36227829. DOI: 10.1109/tmi.2022.3214766.
Abstract
While deep learning methods hitherto have achieved considerable success in medical image segmentation, they are still hampered by two limitations: (i) reliance on large-scale well-labeled datasets, which are difficult to curate due to the expert-driven and time-consuming nature of pixel-level annotations in clinical practices, and (ii) failure to generalize from one domain to another, especially when the target domain is a different modality with severe domain shifts. Recent unsupervised domain adaptation (UDA) techniques leverage abundant labeled source data together with unlabeled target data to reduce the domain gap, but these methods degrade significantly with limited source annotations. In this study, we address this underexplored UDA problem, investigating a challenging but valuable realistic scenario, where the source domain not only exhibits domain shift w.r.t. the target domain but also suffers from label scarcity. In this regard, we propose a novel and generic framework called "Label-Efficient Unsupervised Domain Adaptation" (LE-UDA). In LE-UDA, we construct self-ensembling consistency for knowledge transfer between both domains, as well as a self-ensembling adversarial learning module to achieve better feature alignment for UDA. To assess the effectiveness of our method, we conduct extensive experiments on two different tasks for cross-modality segmentation between MRI and CT images. Experimental results demonstrate that the proposed LE-UDA can efficiently leverage limited source labels to improve cross-domain segmentation performance, outperforming state-of-the-art UDA approaches in the literature.
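LE-UDA builds on self-ensembling, whose core mechanics (an exponential-moving-average teacher plus a consistency term) are standard and sketched here; the loss form and alpha value are assumptions, not the paper's exact settings.

```python
import torch

@torch.no_grad()
def ema_update(teacher, student, alpha=0.99):
    """Self-ensembling: the teacher tracks an exponential moving average of the student."""
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(alpha).add_(s, alpha=1 - alpha)

def consistency_loss(student_logits, teacher_logits):
    # on unlabeled target images, student and teacher predictions should agree
    return torch.mean((student_logits.softmax(1) - teacher_logits.softmax(1)) ** 2)
```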
12. Gao C, Killeen BD, Hu Y, Grupp RB, Taylor RH, Armand M, Unberath M. Synthetic data accelerates the development of generalizable learning-based algorithms for X-ray image analysis. Nat Mach Intell 2023;5:294-308. PMID: 38523605. PMCID: PMC10959504. DOI: 10.1038/s42256-023-00629-1.
Abstract
Artificial intelligence (AI) now enables automated interpretation of medical images. However, AI's potential use for interventional image analysis remains largely untapped. This is because the post hoc analysis of data collected during live procedures has fundamental and practical limitations, including ethical considerations, expense, scalability, data integrity and a lack of ground truth. Here we demonstrate that creating realistic simulated images from human models is a viable alternative and complement to large-scale in situ data collection. We show that training AI image analysis models on realistically synthesized data, combined with contemporary domain generalization techniques, results in machine learning models that on real data perform comparably to models trained on a precisely matched real data training set. We find that our model transfer paradigm for X-ray image analysis, which we refer to as SyntheX, can even outperform real-data-trained models due to the effectiveness of training on a larger dataset. SyntheX provides an opportunity to markedly accelerate the conception, design and evaluation of X-ray-based intelligent systems. In addition, SyntheX provides the opportunity to test novel instrumentation, design complementary surgical approaches, and envision novel techniques that improve outcomes, save time or mitigate human error, free from the ethical and practical considerations of live human data collection.
Affiliation(s)
- Cong Gao
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Benjamin D. Killeen
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Yicheng Hu
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Robert B. Grupp
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Russell H. Taylor
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Mehran Armand
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Department of Orthopaedic Surgery, Johns Hopkins Applied Physics Laboratory, Baltimore, MD, USA
- Mathias Unberath
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
13. Yang J, Yang J, Wang S, Cao S, Zou H, Xie L. Advancing Imbalanced Domain Adaptation: Cluster-Level Discrepancy Minimization With a Comprehensive Benchmark. IEEE Trans Cybern 2023;53:1106-1117. PMID: 34398781. DOI: 10.1109/tcyb.2021.3093888.
Abstract
Unsupervised domain adaptation methods have been proposed to tackle the problem of covariate shift by minimizing the distribution discrepancy between the feature embeddings of source domain and target domain. However, the standard evaluation protocols assume that the conditional label distributions of the two domains are invariant, which is usually not consistent with the real-world scenarios such as long-tailed distribution of visual categories. In this article, the imbalanced domain adaptation (IDA) is formulated for a more realistic scenario where both label shift and covariate shift occur between the two domains. Theoretically, when label shift exists, aligning the marginal distributions may result in negative transfer. Therefore, a novel cluster-level discrepancy minimization (CDM) is developed. CDM proposes cross-domain similarity learning to learn tight and discriminative clusters, which are utilized for both feature-level and distribution-level discrepancy minimization, palliating the negative effect of label shift during domain transfer. Theoretical justifications further demonstrate that CDM minimizes the target risk in a progressive manner. To corroborate the effectiveness of CDM, we propose two evaluation protocols according to the real-world situation and benchmark existing domain adaptation approaches. Extensive experiments demonstrate that negative transfer does occur due to label shift, while our approach achieves significant improvement on imbalanced datasets, including Office-31, Image-CLEF, and Office-Home.
14. Keaton MR, Zaveri RJ, Doretto G. CellTranspose: Few-shot Domain Adaptation for Cellular Instance Segmentation. IEEE Winter Conference on Applications of Computer Vision (WACV) 2023:455-466. PMID: 38170053. PMCID: PMC10760785. DOI: 10.1109/wacv56688.2023.00053.
Abstract
Automated cellular instance segmentation has been used to accelerate biological research for the past two decades, and recent advancements have produced higher quality results with less effort from the biologist. Most current endeavors focus on completely cutting the researcher out of the picture by generating highly generalized models. However, these models invariably fail when faced with novel data distributed differently from the data used for training. Rather than approaching the problem with methods that presume the availability of large amounts of target data and computing power for retraining, in this work we address the even greater challenge of designing an approach that requires minimal amounts of new annotated data as well as training time. We do so by designing specialized contrastive losses that leverage the few annotated samples very efficiently. A large set of results shows that 3 to 5 annotations lead to models whose accuracy: 1) significantly mitigates the covariate shift effects; 2) matches or surpasses that of other adaptation methods; 3) even approaches that of methods fully retrained on the target distribution. The adaptation training takes only a few minutes, paving a path towards a balance between model performance, computing requirements, and expert-level annotation needs.
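The specialized contrastive losses are the paper's contribution and are not reproduced here; as a reference point, a generic InfoNCE-style contrastive term over embedding vectors looks like this (shapes and temperature are assumptions):

```python
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, negatives, tau=0.1):
    """Generic InfoNCE; anchor and positive: (D,), negatives: (K, D)."""
    anchor = F.normalize(anchor, dim=0)
    pos = torch.dot(anchor, F.normalize(positive, dim=0)) / tau
    negs = F.normalize(negatives, dim=1) @ anchor / tau
    logits = torch.cat([pos.view(1), negs]).view(1, -1)
    return F.cross_entropy(logits, torch.zeros(1, dtype=torch.long))  # class 0 = positive
```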
15. Yang H, Chen C, Jiang M, Liu Q, Cao J, Heng PA, Dou Q. DLTTA: Dynamic Learning Rate for Test-Time Adaptation on Cross-Domain Medical Images. IEEE Trans Med Imaging 2022;41:3575-3586. PMID: 35839185. DOI: 10.1109/tmi.2022.3191535.
Abstract
Test-time adaptation (TTA) has increasingly become an important topic for efficiently tackling cross-domain distribution shift at test time for medical images from different institutions. Previous TTA methods share a common limitation: they use a fixed learning rate for all test samples. This practice is sub-optimal for TTA, because test data may arrive sequentially, so the scale of the distribution shift can change frequently. To address this problem, we propose a novel dynamic learning rate adjustment method for test-time adaptation, called DLTTA, which dynamically modulates the amount of weight update for each test image to account for differences in their distribution shift. Specifically, DLTTA is equipped with a memory bank-based estimation scheme to effectively measure the discrepancy of a given test sample. Based on this estimated discrepancy, a dynamic learning rate adjustment strategy is developed to achieve a suitable degree of adaptation for each test sample. The effectiveness and general applicability of DLTTA are extensively demonstrated on three tasks: retinal optical coherence tomography (OCT) segmentation, histopathological image classification, and prostate 3D MRI segmentation. Our method achieves effective and fast test-time adaptation with consistent performance improvement over current state-of-the-art test-time adaptation methods. Code is available at https://github.com/med-air/DLTTA.
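A toy version of the idea (scale the per-sample step size by feature discrepancy against a memory bank) might look as follows; the discrepancy measure and scaling rule are hypothetical stand-ins for the paper's scheme.

```python
import torch
import torch.nn.functional as F

class DynamicTestTimeLR:
    """Hypothetical sketch: scale the adaptation step size by how far the current
    test sample's features drift from a memory bank of recently seen features."""

    def __init__(self, base_lr=1e-3, bank_size=64):
        self.base_lr, self.bank_size, self.bank = base_lr, bank_size, []

    def lr_for(self, feat):
        feat = feat.detach().flatten()
        if not self.bank:
            self.bank.append(feat)
            return self.base_lr
        discrepancy = (1 - F.cosine_similarity(torch.stack(self.bank),
                                               feat[None], dim=1)).mean().item()
        self.bank = (self.bank + [feat])[-self.bank_size:]
        return self.base_lr * discrepancy  # larger shift -> larger update
```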
16. Xing F, Cornish TC. Low-Resource Adversarial Domain Adaptation for Cross-Modality Nucleus Detection. Med Image Comput Comput Assist Interv (MICCAI) 2022;13437:639-649. PMID: 36383499. PMCID: PMC9648428. DOI: 10.1007/978-3-031-16449-1_61.
Abstract
Due to domain shifts, deep cell/nucleus detection models trained on one microscopy image dataset might not be applicable to other datasets acquired with different imaging modalities. Unsupervised domain adaptation (UDA) based on generative adversarial networks (GANs) has recently been exploited to close domain gaps and has achieved excellent nucleus detection performance. However, current GAN-based UDA model training often requires a large amount of unannotated target data, which may be prohibitively expensive to obtain in real practice, and these methods suffer significant performance degradation when target training data is limited. In this paper, we study a more realistic yet challenging UDA scenario, where (unannotated) target training data is very scarce, a low-resource case rarely explored for nucleus detection in previous work. Specifically, we augment a dual GAN network by leveraging a task-specific model to supplement the target-domain discriminator and facilitate generator learning with limited data. The task model is constrained by cross-domain prediction consistency to encourage semantic content preservation during image-to-image translation. Next, we incorporate a stochastic, differentiable data augmentation module into the task-augmented GAN network to further improve model training by alleviating discriminator overfitting. This data augmentation module is a plug-and-play component, requiring no modification of network architectures or loss functions. We evaluate the proposed low-resource UDA method for nucleus detection on multiple public cross-modality microscopy image datasets. With a single training image in the target domain, our method significantly outperforms recent state-of-the-art UDA approaches and delivers very competitive or superior performance over fully supervised models trained with real labeled target data.
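Differentiable augmentation for GAN training typically means applying the same stochastic, gradient-friendly transforms to both real and generated images just before the discriminator; the specific transforms below are illustrative assumptions.

```python
import random
import torch
import torch.nn.functional as F

def diff_augment(x, shift_frac=0.125):
    """Stochastic, differentiable transforms applied to both real and generated
    batches right before the discriminator; the transform set is illustrative."""
    x = x + (torch.rand(x.size(0), 1, 1, 1, device=x.device) - 0.5)  # random brightness
    shift = int(x.size(2) * shift_frac)                               # random translation
    padded = F.pad(x, [shift] * 4)
    dx, dy = random.randint(0, 2 * shift), random.randint(0, 2 * shift)
    return padded[:, :, dy:dy + x.size(2), dx:dx + x.size(3)]

# d_real = D(diff_augment(real_images)); d_fake = D(diff_augment(G(z)))
```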
Affiliation(s)
- Fuyong Xing
- Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus
- Toby C Cornish
- Department of Pathology, University of Colorado Anschutz Medical Campus
17. A multimodal domain adaptive segmentation framework for IDH genotype prediction. Int J Comput Assist Radiol Surg 2022;17:1923-1931. PMID: 35794409. DOI: 10.1007/s11548-022-02700-5.
Abstract
PURPOSE The gene mutation status of isocitrate dehydrogenase (IDH) in gliomas is associated with different prognoses. It is challenging to perform automated tumor segmentation and genotype prediction directly using label-deprived multimodal magnetic resonance (MR) images. We propose a novel framework that employs a domain adaptive mechanism to address this issue. METHODS A multimodal domain adaptive segmentation (MDAS) framework is proposed to bridge the gap in cross-dataset model transfer. Image translation adaptively aligns the multimodal data from the two domains at the image level, and a segmentation consistency loss is proposed to retain more pathological information through semantic constraints. The data distribution between the labeled public dataset and the label-free target dataset is learned to achieve better unsupervised segmentation results on the target dataset. The segmented tumor foci are then used as a mask to extract radiomics and deep features, and the subsequent prediction of IDH gene mutation status is performed by training a random forest classifier. The prediction model does not need any expert segmentation labels. RESULTS We implemented our method on the public BraTS 2019 dataset and 110 astrocytoma cases of grade II-IV brain tumors from our hospital. We obtained a Dice score of 77.41% for unsupervised tumor segmentation, a genotype prediction accuracy (ACC) of 0.7639, and an area under the curve (AUC) of 0.8600. Experimental results demonstrate that our domain adaptive approach outperforms methods utilizing direct transfer learning, and the model using hybrid features gives better results than models using radiomics or deep features alone. CONCLUSIONS Domain adaptation enables the segmentation network to achieve better performance, and the extraction of mixed features at multiple levels from the segmented region of interest ensures effective prediction of the IDH gene mutation status.
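The downstream genotype classifier is a plain random forest over concatenated radiomics and deep features. A runnable sketch with placeholder data follows; the feature dimensions and hyperparameters are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

# placeholder features: rows = patients, columns = radiomics + deep features
# extracted from the segmented tumor region; y = IDH status (1 = mutant)
X = np.random.rand(110, 120)
y = np.random.randint(0, 2, 110)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("ACC:", accuracy_score(y_te, clf.predict(X_te)))
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```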
18. MinimalGAN: diverse medical image synthesis for data augmentation using minimal training data. Appl Intell 2022. DOI: 10.1007/s10489-022-03609-x.
19. Bian C, Yuan C, Ma K, Yu S, Wei D, Zheng Y. Domain Adaptation Meets Zero-Shot Learning: An Annotation-Efficient Approach to Multi-Modality Medical Image Segmentation. IEEE Trans Med Imaging 2022;41:1043-1056. PMID: 34843432. DOI: 10.1109/tmi.2021.3131245.
Abstract
Due to the lack of properly annotated medical data, exploring the generalization capability of deep models has become a major concern. Zero-shot learning (ZSL) has emerged in recent years to equip deep models with the ability to recognize unseen classes. However, existing studies mainly focus on natural images and utilize linguistic models to extract auxiliary information for ZSL. It is impractical to apply natural-image ZSL solutions directly to medical images, since medical terminology is very domain-specific and it is not easy to acquire linguistic models for it. In this work, we propose a new paradigm of ZSL specifically for medical images, utilizing cross-modality information. We make three main contributions with the proposed paradigm. First, we extract prior knowledge about the segmentation targets, called relation prototypes, from the prior model and then propose a cross-modality adaptation module to pass the prototypes on to the zero-shot model. Second, we propose a relation prototype awareness module to make the zero-shot model aware of the information contained in the prototypes. Last but not least, we develop an inheritance attention module to recalibrate the relation prototypes to enhance the inheritance process. The proposed framework is evaluated on two public cross-modality datasets, including a cardiac dataset and an abdominal dataset. Extensive experiments show that the proposed framework significantly outperforms the state of the art.
20. Jiang K, Quan L, Gong T. Disentangled representation and cross-modality image translation based unsupervised domain adaptation method for abdominal organ segmentation. Int J Comput Assist Radiol Surg 2022;17:1101-1113. PMID: 35301702. DOI: 10.1007/s11548-022-02590-7.
Abstract
PURPOSE Existing medical image segmentation models tend to achieve satisfactory performance when the training and test data are drawn from the same distribution, but they often suffer significant performance degradation when evaluated on cross-modality data. To facilitate the deployment of deep learning models in real-world medical scenarios and to mitigate the performance degradation caused by domain shift, we propose an unsupervised cross-modality segmentation framework based on representation disentanglement and image-to-image translation. METHODS Our approach builds on a multimodal image translation framework, which assumes that the latent space of images can be decomposed into a content space and a style space. First, image representations are decomposed into content and style codes by the encoders and recombined to generate cross-modality images. Second, we propose content and style reconstruction losses to preserve consistent semantic information from the original images, and construct content discriminators to match the content distributions between source and target domains. Synthetic images with target-domain style and source-domain anatomical structures are then used to train the segmentation model. RESULTS We applied our framework to bidirectional adaptation experiments on MRI and CT images of abdominal organs. Compared to the case without adaptation, the Dice similarity coefficient (DSC) increased by almost 30% and 25%, and the average symmetric surface distance (ASSD) dropped by 13.3 and 12.2, respectively. CONCLUSION The proposed unsupervised domain adaptation framework can effectively improve the performance of cross-modality segmentation and minimize the negative impact of domain shift, while the translated images retain semantic information and anatomical structure. Our method significantly outperforms several competing methods.
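DSC and ASSD, the two evaluation metrics reported above, can be computed from binary masks as follows (a sketch; the voxel-spacing handling is an assumption):

```python
import numpy as np
from scipy import ndimage

def dsc(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2 * (a & b).sum() / (a.sum() + b.sum())

def assd(a, b, spacing=(1.0, 1.0, 1.0)):
    """Average symmetric surface distance between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    surf_a = a ^ ndimage.binary_erosion(a)   # boundary voxels of a
    surf_b = b ^ ndimage.binary_erosion(b)
    d_to_a = ndimage.distance_transform_edt(~surf_a, sampling=spacing)
    d_to_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)
    return (d_to_b[surf_a].mean() + d_to_a[surf_b].mean()) / 2
```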
Affiliation(s)
- Kaida Jiang
- College of Information Science and Technology, Donghua University, Shanghai, China
- Li Quan
- College of Information Science and Technology, Donghua University, Shanghai, China
- Tao Gong
- College of Information Science and Technology, Donghua University, Shanghai, China
21. Chen J, Zhang Z, Xie X, Li Y, Xu T, Ma K, Zheng Y. Beyond Mutual Information: Generative Adversarial Network for Domain Adaptation Using Information Bottleneck Constraint. IEEE Trans Med Imaging 2022;41:595-607. PMID: 34606453. DOI: 10.1109/tmi.2021.3117996.
Abstract
Medical images from multiple centres often suffer from the domain shift problem, which makes deep learning models trained on one domain fail to generalize well to another. One potential solution is the generative adversarial network (GAN), which can translate images between different domains. Nevertheless, existing GAN-based approaches are prone to fail at preserving image-objects in image-to-image (I2I) translation, which reduces their practicality on domain adaptation tasks. In this regard, a novel GAN (namely IB-GAN) is proposed to preserve image-objects during cross-domain I2I adaptation. Specifically, we integrate an information bottleneck constraint into a typical cycle-consistency-based GAN to discard superfluous information (e.g., domain information) and maintain the consistency of disentangled content features for image-object preservation. The proposed IB-GAN is evaluated on three tasks: polyp segmentation using colonoscopic images, segmentation of the optic disc and cup in fundus images, and whole heart segmentation using multi-modal volumes. We show that IB-GAN can generate realistic translated images and remarkably boost the generalization of widely used segmentation networks (e.g., U-Net).
22.
Abstract
Machine learning techniques used in computer-aided medical image analysis usually suffer from the domain shift problem caused by different distributions between source/reference data and target data. As a promising solution, domain adaptation has attracted considerable attention in recent years. The aim of this paper is to survey the recent advances of domain adaptation methods in medical image analysis. We first present the motivation of introducing domain adaptation techniques to tackle domain heterogeneity issues for medical image analysis. Then we provide a review of recent domain adaptation models in various medical image analysis tasks. We categorize the existing methods into shallow and deep models, and each of them is further divided into supervised, semi-supervised and unsupervised methods. We also provide a brief summary of the benchmark medical image datasets that support current domain adaptation research. This survey will enable researchers to gain a better understanding of the current status, challenges and future directions of this energetic research field.
23. Zheng S, Yang X, Wang Y, Ding M, Hou W. Unsupervised Cross-Modality Domain Adaptation Network for CNN-Based X-ray to CT. IEEE J Biomed Health Inform 2021;26:2637-2647. PMID: 34914602. DOI: 10.1109/jbhi.2021.3135890.
Abstract
2D/3D registration that achieves high accuracy and real-time computation is one of the enabling technologies for radiotherapy and image-guided surgery. Recently, convolutional neural networks (CNNs) have been explored to significantly improve the accuracy and efficiency of 2D/3D registration. A pair of intraoperative 2D x-ray images and synthetic data from the preoperative volume are often required to model the nonconvex mapping between registration parameters and image residuals. However, collecting a large clinical dataset with accurate poses for x-ray images can be very challenging or even impractical, while exclusive training on synthetic data frequently causes performance degradation when models are tested on real x-rays. We therefore propose to first train a model on the source domain (i.e., synthetic data) to build the appearance-pose relationship, and then use an unsupervised cross-modality domain adaptation network (UCMDAN) to adapt the model to the target domain (i.e., x-rays) through adversarial learning. We propose to narrow the significant domain gap by alignment in both pixel and feature space: image appearance transformation and multi-aspect domain-invariant feature learning are conducted synergistically. Extensive experiments on CT and CBCT datasets show that the proposed UCMDAN outperforms existing state-of-the-art domain adaptation approaches.
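Feature-space alignment through adversarial learning is commonly implemented with a domain classifier behind a gradient reversal layer; the snippet below shows that standard building block (the paper's exact discriminator design is not reproduced).

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negated, scaled gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None

# feats = backbone(img)
# domain_logits = domain_classifier(GradReverse.apply(feats, 1.0))
# minimizing the domain loss then pushes the backbone toward domain-invariant features
```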
24. Cui Z, Li C, Du Z, Chen N, Wei G, Chen R, Yang L, Shen D, Wang W. Structure-Driven Unsupervised Domain Adaptation for Cross-Modality Cardiac Segmentation. IEEE Trans Med Imaging 2021;40:3604-3616. PMID: 34161240. DOI: 10.1109/tmi.2021.3090432.
Abstract
Performance degradation due to domain shift remains a major challenge in medical image analysis. Unsupervised domain adaptation that transfers knowledge learned from the source domain with ground truth labels to the target domain without any annotation is the mainstream solution to resolve this issue. In this paper, we present a novel unsupervised domain adaptation framework for cross-modality cardiac segmentation, by explicitly capturing a common cardiac structure embedded across different modalities to guide cardiac segmentation. In particular, we first extract a set of 3D landmarks, in a self-supervised manner, to represent the cardiac structure of different modalities. The high-level structure information is then combined with another complementary feature, the Canny edges, to produce accurate cardiac segmentation results both in the source and target domains. We extensively evaluate our method on the MICCAI 2017 MM-WHS dataset for cardiac segmentation. The evaluation, comparison and comprehensive ablation studies demonstrate that our approach achieves satisfactory segmentation results and outperforms state-of-the-art unsupervised domain adaptation methods by a significant margin.
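The complementary Canny-edge feature mentioned above is straightforward to extract per slice; a sketch with scikit-image follows (the sigma value and the channel-stacking use are assumptions).

```python
import numpy as np
from skimage import feature, img_as_float

def edge_prior(slice_2d, sigma=2.0):
    """Canny edge map of one slice, a modality-robust structural cue."""
    return feature.canny(img_as_float(slice_2d), sigma=sigma)

# e.g., stack per-slice edge maps as an extra input channel for the segmenter:
# edges = np.stack([edge_prior(s) for s in volume])
```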
25. Chen H, Shi Y, Bo B, Zhao D, Miao P, Tong S, Wang C. Real-Time Cerebral Vessel Segmentation in Laser Speckle Contrast Image Based on Unsupervised Domain Adaptation. Front Neurosci 2021;15:755198. PMID: 34916898. PMCID: PMC8669333. DOI: 10.3389/fnins.2021.755198.
Abstract
Laser speckle contrast imaging (LSCI) is a full-field, high-spatiotemporal-resolution, and low-cost optical technique for measuring blood flow, which has been successfully used for neurovascular imaging. However, due to the low signal-to-noise ratio and the relatively small vessel sizes, segmenting cerebral vessels in LSCI has always been a technical challenge. Recently, deep learning has shown its advantages in vascular segmentation; nonetheless, ground truth from manual labeling is usually required to train the network, which makes it difficult to implement in practice. We propose a deep learning-based method for real-time cerebral vessel segmentation in LSCI without ground truth labels, which could further be integrated into an intraoperative blood vessel imaging system. Synthetic LSCI images were obtained with a synthesis network from LSCI images and the public labeled Digital Retinal Images for Vessel Extraction (DRIVE) dataset, and were then used to train the segmentation network. Using matching strategies to reduce the size discrepancy between retinal images and laser speckle contrast images, we could further significantly improve image synthesis and segmentation performance. On test LSCI images of rodent cerebral vessels, the proposed method achieved a Dice similarity coefficient of over 75%.
Collapse
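The reported metric is the Dice similarity coefficient; for reference, a minimal NumPy implementation over binary masks:

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary vessel masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return float((2.0 * inter + eps) / (pred.sum() + truth.sum() + eps))
```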
Affiliation(s)
- Heping Chen
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- School of Technology and Health, KTH Royal Institute of Technology, Stockholm, Sweden
- Yan Shi
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Bin Bo
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Denghui Zhao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Peng Miao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Shanbao Tong
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Chunliang Wang
- School of Technology and Health, KTH Royal Institute of Technology, Stockholm, Sweden
Collapse
|
26
|
Ma B, Yang Q, Cui H, Ma J. MEAL: Meta enhanced Entropy-driven Adversarial Learning for Optic Disc and Cup Segmentation. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2021; 2021:3273-3276. [PMID: 34891939 DOI: 10.1109/embc46164.2021.9630517] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Accurate segmentation of the optic disc (OD) and optic cup (OC) can assist the effective and efficient diagnosis of glaucoma. The domain shift caused by cross-domain data, however, affects the performance of a well-trained model on new datasets from a different domain. To overcome this problem, we propose a domain adaptation model for OD and OC segmentation called Meta enhanced Entropy-driven Adversarial Learning (MEAL). Our segmentation network consists of a meta-enhanced block (MEB) to enhance the adaptability of high-level features, and an attention-based multi-feature fusion (AMF) module for attentive integration of multi-level feature representations. For the optimization, an adversarial cost function driven by entropy maps is used to improve the adaptability of the framework. Evaluations and ablation studies on two public fundus image datasets demonstrate the effectiveness of our model and its outstanding performance over other domain adaptation methods.
Collapse
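The entropy map that drives such an adversarial cost can be computed directly from the softmax output; a generic PyTorch sketch (not necessarily the exact MEAL formulation):

```python
import torch
import torch.nn.functional as F

def entropy_map(logits: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Pixel-wise Shannon entropy of softmax predictions; returns shape (N, H, W).

    High-entropy (uncertain) pixels are what an entropy-driven adversarial
    loss exposes to the discriminator.
    """
    p = F.softmax(logits, dim=1)                 # (N, C, H, W)
    return -(p * torch.log(p + eps)).sum(dim=1)
```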
|
27
|
Xing F, Cornish TC, Bennett TD, Ghosh D. Bidirectional Mapping-Based Domain Adaptation for Nucleus Detection in Cross-Modality Microscopy Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:2880-2896. [PMID: 33284750 PMCID: PMC8543886 DOI: 10.1109/tmi.2020.3042789] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Cell or nucleus detection is a fundamental task in microscopy image analysis and has recently achieved state-of-the-art performance by using deep neural networks. However, training supervised deep models such as convolutional neural networks (CNNs) usually requires sufficient annotated image data, which is prohibitively expensive or unavailable in some applications. Additionally, when applying a CNN to new datasets, it is common to annotate individual cells/nuclei in those target datasets for model re-learning, leading to inefficient and low-throughput image analysis. To tackle these problems, we present a bidirectional, adversarial domain adaptation method for nucleus detection on cross-modality microscopy image data. Specifically, the method learns a deep regression model for individual nucleus detection with both source-to-target and target-to-source image translation. In addition, we explicitly extend this unsupervised domain adaptation method to a semi-supervised learning situation and further boost the nucleus detection performance. We evaluate the proposed method on three cross-modality microscopy image datasets, which cover a wide variety of microscopy imaging protocols or modalities, and obtain a significant improvement in nucleus detection compared to reference baseline approaches. In addition, our semi-supervised method is very competitive with recent fully supervised learning models trained with all real target training labels.
Collapse
|
28
|
Wang H, Zhang D, Ding S, Gao Z, Feng J, Wan S. Rib segmentation algorithm for X-ray image based on unpaired sample augmentation and multi-scale network. Neural Comput Appl 2021. [DOI: 10.1007/s00521-021-06546-x] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
|
29
|
Chen H, Jiang Y, Loew M, Ko H. Unsupervised domain adaptation based COVID-19 CT infection segmentation network. APPL INTELL 2021; 52:6340-6353. [PMID: 34764618 PMCID: PMC8421243 DOI: 10.1007/s10489-021-02691-x] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 07/15/2021] [Indexed: 10/31/2022]
Abstract
Automatic segmentation of infection areas in computed tomography (CT) images has proven to be an effective diagnostic approach for COVID-19. However, due to the limited number of pixel-level annotated medical images, accurate segmentation remains a major challenge. In this paper, we propose an unsupervised domain adaptation based segmentation network to improve the segmentation performance of the infection areas in COVID-19 CT images. In particular, we propose to utilize the synthetic data and limited unlabeled real COVID-19 CT images to jointly train the segmentation network. Furthermore, we develop a novel domain adaptation module, which is used to align the two domains and effectively improve the segmentation network's generalization capability to the real domain. Besides, we propose an unsupervised adversarial training scheme, which encourages the segmentation network to learn the domain-invariant feature, so that the robust feature can be used for segmentation. Experimental results demonstrate that our method can achieve state-of-the-art segmentation performance on COVID-19 CT images.
Collapse
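The unsupervised adversarial scheme described here reduces to a discriminator/generator pair of losses over features from the synthetic and real domains; a minimal sketch with assumed tensor names:

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def d_step(d_synth: torch.Tensor, d_real: torch.Tensor) -> torch.Tensor:
    """Discriminator learns to separate synthetic-domain (1) from real-domain (0) features."""
    return bce(d_synth, torch.ones_like(d_synth)) + bce(d_real, torch.zeros_like(d_real))

def g_step(d_real: torch.Tensor) -> torch.Tensor:
    """Segmenter is rewarded when real COVID-19 CT features fool the discriminator."""
    return bce(d_real, torch.ones_like(d_real))
```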
Affiliation(s)
- Han Chen
- School of Electrical Engineering, Korea University, Seoul 02841, South Korea
- Yifan Jiang
- School of Electrical Engineering, Korea University, Seoul 02841, South Korea
- Murray Loew
- Department of Biomedical Engineering, George Washington University, Washington, DC, USA
- Hanseok Ko
- School of Electrical Engineering, Korea University, Seoul 02841, South Korea
Collapse
|
30
|
CAFR-CNN: coarse-to-fine adaptive faster R-CNN for cross-domain joint optic disc and cup segmentation. APPL INTELL 2021. [DOI: 10.1007/s10489-020-02145-w] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/27/2023]
|
31
|
Shiode R, Kabashima M, Hiasa Y, Oka K, Murase T, Sato Y, Otake Y. 2D-3D reconstruction of distal forearm bone from actual X-ray images of the wrist using convolutional neural networks. Sci Rep 2021; 11:15249. [PMID: 34315946 PMCID: PMC8316567 DOI: 10.1038/s41598-021-94634-2] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2020] [Accepted: 05/06/2021] [Indexed: 01/08/2023] Open
Abstract
The purpose of the study was to develop a deep learning network for estimating and constructing highly accurate 3D bone models directly from actual X-ray images and to verify its accuracy. The data used were 173 computed tomography (CT) images and 105 actual X-ray images of a healthy wrist joint. To compensate for the small size of the dataset, digitally reconstructed radiography (DRR) images generated from CT were used as training data instead of actual X-ray images. At test time, DRR-like images were generated from actual X-ray images and fed to the network, making high-accuracy estimation of a 3D bone model from a small dataset possible. The 3D shapes of the radius and ulna were estimated from actual X-ray images with accuracies of 1.05 ± 0.36 and 1.45 ± 0.41 mm, respectively.
Collapse
Affiliation(s)
- Ryoya Shiode
- Department of Orthopaedic Surgery, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita, Osaka, 565-0871, Japan
- Division of Information Science, Nara Institute of Science and Technology, 8916-5 Takayama, Ikoma, Nara, 630-0192, Japan
- Mototaka Kabashima
- Division of Information Science, Nara Institute of Science and Technology, 8916-5 Takayama, Ikoma, Nara, 630-0192, Japan
- Yuta Hiasa
- Division of Information Science, Nara Institute of Science and Technology, 8916-5 Takayama, Ikoma, Nara, 630-0192, Japan
- Kunihiro Oka
- Department of Orthopaedic Surgery, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita, Osaka, 565-0871, Japan
- Tsuyoshi Murase
- Department of Orthopaedic Surgery, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita, Osaka, 565-0871, Japan
- Yoshinobu Sato
- Division of Information Science, Nara Institute of Science and Technology, 8916-5 Takayama, Ikoma, Nara, 630-0192, Japan
- Yoshito Otake
- Division of Information Science, Nara Institute of Science and Technology, 8916-5 Takayama, Ikoma, Nara, 630-0192, Japan
Collapse
|
32
|
Vesal S, Gu M, Kosti R, Maier A, Ravikumar N. Adapt Everywhere: Unsupervised Adaptation of Point-Clouds and Entropy Minimization for Multi-Modal Cardiac Image Segmentation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:1838-1851. [PMID: 33729930 DOI: 10.1109/tmi.2021.3066683] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Deep learning models are sensitive to domain shift. A model trained on images from one domain cannot generalise well when tested on images from a different domain, despite capturing similar anatomical structures, mainly because the data distributions of the two domains differ. Moreover, creating annotations for every new modality is a tedious and time-consuming task, which also suffers from high inter- and intra-observer variability. Unsupervised domain adaptation (UDA) methods intend to reduce the gap between source and target domains by leveraging source-domain labelled data to generate labels for the target domain. However, current state-of-the-art (SOTA) UDA methods demonstrate degraded performance when there is insufficient data in the source and target domains. In this paper, we present a novel UDA method for multi-modal cardiac image segmentation. The proposed method is based on adversarial learning and adapts network features between source and target domains in different spaces. The paper introduces an end-to-end framework that integrates: a) entropy minimization, b) output feature space alignment and c) a novel point-cloud shape adaptation based on the latent features learned by the segmentation model. We validated our method on two cardiac datasets by adapting from the annotated source domain, bSSFP-MRI (balanced steady-state free precession MRI), to the unannotated target domain, LGE-MRI (late gadolinium enhancement MRI), for the multi-sequence dataset; and from MRI (source) to CT (target) for the cross-modality dataset. The results highlight that by enforcing adversarial learning in different parts of the network, the proposed method delivers promising performance compared with other SOTA methods.
Collapse
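As a rough illustration of the point-cloud idea (the paper builds its clouds from latent segmentation features, not the output mask, so this is purely a toy), one can sample foreground coordinates from a predicted mask:

```python
import torch

def mask_to_points(mask: torch.Tensor, n_points: int = 512) -> torch.Tensor:
    """Sample foreground pixel coordinates from a predicted 2D mask as a point cloud."""
    coords = torch.nonzero(mask > 0.5).float()        # (M, 2) row/col coordinates
    idx = torch.randperm(coords.size(0))[:n_points]   # random subsample
    return coords[idx]                                # (min(M, n_points), 2)
```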
|
33
|
Lei H, Liu W, Xie H, Zhao B, Yue G, Lei B. Unsupervised Domain Adaptation Based Image Synthesis and Feature Alignment for Joint Optic Disc and Cup Segmentation. IEEE J Biomed Health Inform 2021; 26:90-102. [PMID: 34061755 DOI: 10.1109/jbhi.2021.3085770] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Due to the discrepancy among devices used for fundus image collection, a well-trained neural network is usually unsuitable for a new dataset. To solve this problem, the unsupervised domain adaptation strategy has attracted a lot of attention. In this paper, we propose an unsupervised domain adaptation method based on image synthesis and feature alignment (ISFA) to segment the optic disc and cup in fundus images. A GAN-based image synthesis (IS) mechanism, along with the boundary information of the optic disc and cup, is utilized to generate target-like query images, which serve as an intermediate latent space between source-domain and target-domain images to alleviate the domain shift problem. Specifically, we use content and style feature alignment (CSFA) to ensure feature consistency among source-domain images, target-like query images and target-domain images. Adversarial learning is used to extract domain-invariant features for output-level feature alignment (OLFA). To enhance the representation ability of domain-invariant boundary structure information, we introduce the edge attention module (EAM) for low-level feature maps. Eventually, we train our proposed method on the training set of the REFUGE challenge dataset and test it on the Drishti-GS and RIM-ONE_r3 datasets. On the Drishti-GS dataset, our method achieves an improvement of about 3% in Dice on optic cup segmentation over the next best method. We comprehensively discuss the robustness of our method for small-dataset domain adaptation. The experimental results also demonstrate the effectiveness of our method. Our code is available at https://github.com/thinkobj/ISFA.
Collapse
|
34
|
Fu Y, Lei Y, Wang T, Curran WJ, Liu T, Yang X. A review of deep learning based methods for medical image multi-organ segmentation. Phys Med 2021; 85:107-122. [PMID: 33992856 PMCID: PMC8217246 DOI: 10.1016/j.ejmp.2021.05.003] [Citation(s) in RCA: 77] [Impact Index Per Article: 25.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/27/2020] [Revised: 03/12/2021] [Accepted: 05/03/2021] [Indexed: 12/12/2022] Open
Abstract
Deep learning has revolutionized image processing and achieved state-of-the-art performance in many medical image segmentation tasks. Many deep learning-based methods have been published to segment different parts of the body for different medical applications. It is necessary to summarize the current state of development of deep learning in the field of medical image segmentation. In this paper, we aim to provide a comprehensive review with a focus on multi-organ image segmentation, which is crucial for radiotherapy, where the tumor and organs-at-risk need to be contoured for treatment planning. We grouped the surveyed methods into two broad categories: 'pixel-wise classification' and 'end-to-end segmentation'. Each category was divided into subgroups according to their network design. For each type, we listed the surveyed works, highlighted important contributions and identified specific challenges. Following the detailed review, we discussed the achievements, shortcomings and future potential of each category. To enable direct comparison, we listed the performance of the surveyed works that used thoracic and head-and-neck benchmark datasets.
Collapse
Affiliation(s)
- Yabo Fu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
Collapse
|
35
|
Albahli S, Rauf HT, Algosaibi A, Balas VE. AI-driven deep CNN approach for multi-label pathology classification using chest X-Rays. PeerJ Comput Sci 2021; 7:e495. [PMID: 33977135 PMCID: PMC8064140 DOI: 10.7717/peerj-cs.495] [Citation(s) in RCA: 26] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2021] [Accepted: 03/27/2021] [Indexed: 02/05/2023]
Abstract
Artificial intelligence (AI) has played a significant role in image analysis and feature extraction, applied to detect and diagnose a wide range of chest-related diseases. Although several researchers have used current state-of-the-art approaches and have produced impressive chest-related clinical outcomes, specific techniques may not contribute many advantages if one type of disease is detected without the rest being identified. Previous attempts to identify multiple chest-related diseases were ineffective due to insufficient and imbalanced data. This research contributes to the healthcare industry and the research community by proposing synthetic data augmentation for three deep Convolutional Neural Network (CNN) architectures for the detection of 14 chest-related diseases. The employed models are DenseNet121, InceptionResNetV2, and ResNet152V2; after training and validation, an average ROC-AUC score of 0.80 was obtained, competitive with previous models trained for multi-class classification to detect anomalies in X-ray images. This research illustrates how the proposed model applies state-of-the-art deep neural networks to classify 14 chest-related diseases with better accuracy.
Collapse
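The reported 0.80 is a ROC-AUC averaged over the 14 findings; with scikit-learn this is a one-liner, shown here on stand-in arrays rather than the paper's data:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=(200, 14))   # stand-in binary labels for 14 findings
y_score = rng.random((200, 14))               # stand-in sigmoid outputs

print(roc_auc_score(y_true, y_score, average="macro"))  # per-class AUC, then averaged
```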
Affiliation(s)
- Saleh Albahli
- Department of Information Technology, College of Computer Science, Qassim University, Buraydah, Saudi Arabia
- Hafiz Tayyab Rauf
- Centre for Smart Systems, AI and Cybersecurity, Staffordshire University, Stoke-on-Trent, United Kingdom
- Valentina Emilia Balas
- Department of Automation and Applied Informatics, Aurel Vlaicu University of Arad, Arad, Romania
Collapse
|
36
|
Liu D, Zhang D, Song Y, Zhang F, O'Donnell L, Huang H, Chen M, Cai W. PDAM: A Panoptic-Level Feature Alignment Framework for Unsupervised Domain Adaptive Instance Segmentation in Microscopy Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:154-165. [PMID: 32915732 DOI: 10.1109/tmi.2020.3023466] [Citation(s) in RCA: 24] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/27/2023]
Abstract
In this work, we present an unsupervised domain adaptation (UDA) method, named Panoptic Domain Adaptive Mask R-CNN (PDAM), for unsupervised instance segmentation in microscopy images. Since methods designed specifically for UDA instance segmentation are currently lacking, we first design a Domain Adaptive Mask R-CNN (DAM) as the baseline, with cross-domain feature alignment at the image and instance levels. In addition to the image- and instance-level domain discrepancy, domain bias also exists at the semantic level in the contextual information. We therefore design a semantic segmentation branch with a domain discriminator to bridge the domain gap at the contextual level. By integrating the semantic- and instance-level feature adaptation, our method aligns the cross-domain features at the panoptic level. Third, we propose a task re-weighting mechanism to assign trade-off weights to the detection and segmentation loss functions. The task re-weighting mechanism mitigates domain bias by alleviating the task learning for some iterations when the features contain source-specific factors. Furthermore, we design a feature similarity maximization mechanism to facilitate instance-level feature adaptation from the perspective of representational learning. Different from typical feature alignment methods, our feature similarity maximization mechanism separates the domain-invariant and domain-specific features by enlarging their feature distribution dependency. Experimental results on three UDA instance segmentation scenarios with five datasets demonstrate the effectiveness of our proposed PDAM method, which outperforms state-of-the-art UDA methods by a large margin.
Collapse
|
37
|
IOSUDA: an unsupervised domain adaptation with input and output space alignment for joint optic disc and cup segmentation. APPL INTELL 2020. [DOI: 10.1007/s10489-020-01956-1] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
|
38
|
Zhang P, Zhong Y, Deng Y, Tang X, Li X. Drr4covid: Learning Automated COVID-19 Infection Segmentation From Digitally Reconstructed Radiographs. IEEE ACCESS : PRACTICAL INNOVATIONS, OPEN SOLUTIONS 2020; 8:207736-207757. [PMID: 34812368 PMCID: PMC8545269 DOI: 10.1109/access.2020.3038279] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/28/2020] [Accepted: 11/10/2020] [Indexed: 05/07/2023]
Abstract
Automated infection measurement and COVID-19 diagnosis based on Chest X-ray (CXR) imaging is important for faster examination, where infection segmentation is an essential step for assessment and quantification. However, due to the heterogeneity of X-ray imaging and the difficulty of annotating infected regions precisely, learning automated infection segmentation on CXRs remains a challenging task. We propose a novel approach, called DRR4Covid, to learn COVID-19 infection segmentation on CXRs from digitally reconstructed radiographs (DRRs). DRR4Covid consists of an infection-aware DRR generator, a segmentation network, and a domain adaptation module. Given a labeled Computed Tomography scan, the infection-aware DRR generator can produce infection-aware DRRs with pixel-level annotations of infected regions for training the segmentation network. The domain adaptation module is designed to enable the segmentation network trained on DRRs to generalize to CXRs. Statistical analyses of the experimental results indicate that our infection-aware DRRs are significantly better than standard DRRs in learning COVID-19 infection segmentation (p < 0.05) and that the domain adaptation module can improve the infection segmentation performance on CXRs significantly (p < 0.05). Without using any annotations of CXRs, our network has achieved a classification score of (Accuracy: 0.949, AUC: 0.987, F1-score: 0.947) and a segmentation score of (Accuracy: 0.956, AUC: 0.980, F1-score: 0.955) on a test set with 558 normal cases and 558 positive cases. In addition, by adjusting the strength of radiological signs of COVID-19 infection in infection-aware DRRs, we estimate the detection limit of X-ray imaging in detecting COVID-19 infection. The estimated detection limit, measured by the percent volume of the lung that is infected by COVID-19, is 19.43% ± 16.29%, and the estimated lower bound of infected voxel contribution rate for significant radiological signs of COVID-19 infection is 20.0%. Our codes are made publicly available at https://github.com/PengyiZhang/DRR4Covid.
Collapse
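A DRR is, at its simplest, a line-integral projection of a CT volume. The toy parallel-beam version below conveys only the idea; the paper's infection-aware generator additionally injects labeled infection patterns and models real projection geometry:

```python
import numpy as np

def toy_drr(ct_volume: np.ndarray, axis: int = 1) -> np.ndarray:
    """Parallel-beam toy DRR: integrate attenuation along one axis, then apply
    Beer-Lambert-style contrast. Real DRR generators model cone-beam geometry."""
    path_integral = np.clip(ct_volume, 0, None).sum(axis=axis).astype(np.float64)
    return 1.0 - np.exp(-path_integral / (path_integral.max() + 1e-8))
```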
Affiliation(s)
- Pengyi Zhang
- School of Life Science, Beijing Institute of Technology, Beijing 100081, China
- Key Laboratory of Convergence Medical Engineering System and Healthcare Technology, Ministry of Industry and Information Technology, Beijing 100081, China
- Yunxin Zhong
- School of Life Science, Beijing Institute of Technology, Beijing 100081, China
- Key Laboratory of Convergence Medical Engineering System and Healthcare Technology, Ministry of Industry and Information Technology, Beijing 100081, China
- Yulin Deng
- School of Life Science, Beijing Institute of Technology, Beijing 100081, China
- Key Laboratory of Convergence Medical Engineering System and Healthcare Technology, Ministry of Industry and Information Technology, Beijing 100081, China
- Xiaoying Tang
- School of Life Science, Beijing Institute of Technology, Beijing 100081, China
- Key Laboratory of Convergence Medical Engineering System and Healthcare Technology, Ministry of Industry and Information Technology, Beijing 100081, China
- Xiaoqiong Li
- School of Life Science, Beijing Institute of Technology, Beijing 100081, China
- Key Laboratory of Convergence Medical Engineering System and Healthcare Technology, Ministry of Industry and Information Technology, Beijing 100081, China
Collapse
|
39
|
Mahmood F, Borders D, Chen RJ, Mckay GN, Salimian KJ, Baras A, Durr NJ. Deep Adversarial Training for Multi-Organ Nuclei Segmentation in Histopathology Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:3257-3267. [PMID: 31283474 PMCID: PMC8588951 DOI: 10.1109/tmi.2019.2927182] [Citation(s) in RCA: 119] [Impact Index Per Article: 29.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/08/2023]
Abstract
Nuclei segmentation is a fundamental task for various computational pathology applications including nuclei morphology analysis, cell type classification, and cancer grading. Deep learning has emerged as a powerful approach to segmenting nuclei, but the accuracy of convolutional neural networks (CNNs) depends on the volume and quality of labeled histopathology data available for training. In particular, conventional CNN-based approaches lack structured prediction capabilities, which are required to distinguish overlapping and clumped nuclei. Here, we present an approach to nuclei segmentation that overcomes these challenges by utilizing a conditional generative adversarial network (cGAN) trained with synthetic and real data. We generate a large dataset of H&E training images with perfect nuclei segmentation labels using an unpaired GAN framework. This synthetic data, along with real histopathology data from six different organs, is used to train a conditional GAN with spectral normalization and gradient penalty for nuclei segmentation. This adversarial regression framework enforces higher-order spatial consistency compared with conventional CNN models. We demonstrate that this nuclei segmentation approach generalizes across different organs, sites, patients and disease states, and outperforms conventional approaches, especially in isolating individual and overlapping nuclei.
Collapse
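One common realization of the gradient penalty named in the abstract is the WGAN-GP form, which penalizes the discriminator's gradient norm on interpolated samples; whether the paper uses exactly this variant is an assumption:

```python
import torch

def gradient_penalty(disc, real: torch.Tensor, fake: torch.Tensor) -> torch.Tensor:
    """Penalize deviation of the discriminator's gradient norm from 1 on interpolates."""
    alpha = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    mixed = (alpha * real + (1.0 - alpha) * fake).requires_grad_(True)
    scores = disc(mixed)
    grads, = torch.autograd.grad(scores, mixed,
                                 grad_outputs=torch.ones_like(scores),
                                 create_graph=True)
    return ((grads.flatten(1).norm(2, dim=1) - 1.0) ** 2).mean()
```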
|
40
|
Li H, Han H, Li Z, Wang L, Wu Z, Lu J, Zhou SK. High-Resolution Chest X-Ray Bone Suppression Using Unpaired CT Structural Priors. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:3053-3063. [PMID: 32275586 DOI: 10.1109/tmi.2020.2986242] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
There is clinical evidence that suppressing the bone structures in chest X-rays (CXRs) improves diagnostic value, either for radiologists or for computer-aided diagnosis. However, bone-free CXRs are not always accessible. We hereby propose a coarse-to-fine CXR bone suppression approach that uses structural priors derived from unpaired computed tomography (CT) images. In the low-resolution stage, we use the digitally reconstructed radiograph (DRR) image computed from CT as a bridge to connect CT and CXR. We then perform CXR bone decomposition by leveraging the DRR bone decomposition model learned from unpaired CTs and domain adaptation between CXR and DRR. To further mitigate the domain differences between CXRs and DRRs and speed up learning convergence, we perform all the above operations in the Laplacian of Gaussian (LoG) domain. After obtaining the bone decomposition result in the DRR, we upsample it to high resolution, based on which the bone region in the original high-resolution CXR is cropped and processed to produce a high-resolution bone decomposition result. Finally, the produced bone image is subtracted from the original high-resolution CXR to obtain the bone suppression result. We conduct experiments and clinical evaluations on two benchmark CXR databases to show that (i) the proposed method outperforms state-of-the-art unsupervised CXR bone suppression approaches; (ii) CXRs with bone suppression help radiologists reduce their false-negative rate for lung diseases from 15% to 8%; and (iii) state-of-the-art disease classification performance is achieved by learning a deep network that takes the original CXR and its bone-suppressed image as inputs.
Collapse
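Mapping images into the LoG domain is a standard filtering step; a minimal SciPy sketch (the sigma value is an assumption, not the paper's setting):

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def to_log(image: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """Map a 2D image into the Laplacian-of-Gaussian (LoG) domain."""
    return gaussian_laplace(image.astype(np.float64), sigma=sigma)
```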
|
41
|
Wilson G, Cook DJ. A Survey of Unsupervised Deep Domain Adaptation. ACM T INTEL SYST TEC 2020; 11:1-46. [PMID: 34336374 PMCID: PMC8323662 DOI: 10.1145/3400066] [Citation(s) in RCA: 126] [Impact Index Per Article: 31.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2019] [Accepted: 05/01/2020] [Indexed: 10/23/2022]
Abstract
Deep learning has produced state-of-the-art results for a variety of tasks. While such approaches for supervised learning have performed well, they assume that training and testing data are drawn from the same distribution, which may not always be the case. To address this challenge, single-source unsupervised domain adaptation can handle situations where a network is trained on labeled data from a source domain and unlabeled data from a related but different target domain, with the goal of performing well at test time on the target domain. Many single-source and typically homogeneous unsupervised deep domain adaptation approaches have thus been developed, combining the powerful, hierarchical representations from deep learning with domain adaptation to reduce reliance on potentially costly target data labels. This survey compares these approaches by examining alternative methods, their unique and common elements, results, and theoretical insights. We follow this with a look at application areas and open research directions.
Collapse
|
42
|
Farhat H, Sakr GE, Kilany R. Deep learning applications in pulmonary medical imaging: recent updates and insights on COVID-19. MACHINE VISION AND APPLICATIONS 2020; 31:53. [PMID: 32834523 PMCID: PMC7386599 DOI: 10.1007/s00138-020-01101-5] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/18/2020] [Revised: 06/21/2020] [Accepted: 07/07/2020] [Indexed: 05/07/2023]
Abstract
Shortly after deep learning algorithms were applied to image analysis, and more importantly to medical imaging, their applications increased significantly to become a trend. Likewise, deep learning (DL) applications on pulmonary medical images have achieved remarkable advances leading to promising clinical trials. Yet the coronavirus pandemic may be the real trigger that opens the route for fast integration of DL in hospitals and medical centers. This paper reviews the development of deep learning applications in medical image analysis targeting pulmonary imaging and gives insights into contributions to COVID-19. It covers more than 160 contributions and surveys in this field, all issued between February 2017 and May 2020 inclusive, highlighting various deep learning tasks such as classification, segmentation, and detection, as well as different pulmonary pathologies like airway diseases, lung cancer, COVID-19 and other infections. It summarizes and discusses the current state-of-the-art approaches in this research domain, highlighting the challenges, especially in the current COVID-19 pandemic situation.
Collapse
Affiliation(s)
- Hanan Farhat
- Saint Joseph University of Beirut, Mar Roukos, Beirut, Lebanon
- George E. Sakr
- Saint Joseph University of Beirut, Mar Roukos, Beirut, Lebanon
- Rima Kilany
- Saint Joseph University of Beirut, Mar Roukos, Beirut, Lebanon
Collapse
|
43
|
Chen C, Dou Q, Chen H, Qin J, Heng PA. Unsupervised Bidirectional Cross-Modality Adaptation via Deeply Synergistic Image and Feature Alignment for Medical Image Segmentation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:2494-2505. [PMID: 32054572 DOI: 10.1109/tmi.2020.2972701] [Citation(s) in RCA: 132] [Impact Index Per Article: 33.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Unsupervised domain adaptation has increasingly gained interest in medical image computing, aiming to tackle the performance degradation of deep neural networks when deployed to unseen data with heterogeneous characteristics. In this work, we present a novel unsupervised domain adaptation framework, named Synergistic Image and Feature Alignment (SIFA), to effectively adapt a segmentation network to an unlabeled target domain. Our proposed SIFA conducts synergistic alignment of domains from both the image and feature perspectives. In particular, we simultaneously transform the appearance of images across domains and enhance the domain invariance of the extracted features by leveraging adversarial learning in multiple aspects and with a deeply supervised mechanism. The feature encoder is shared between both adaptive perspectives to leverage their mutual benefits via end-to-end learning. We have extensively evaluated our method on cardiac substructure segmentation and abdominal multi-organ segmentation for bidirectional cross-modality adaptation between MRI and CT images. Experimental results on the two tasks demonstrate that our SIFA method is effective in improving segmentation performance on unlabeled target images, and outperforms the state-of-the-art domain adaptation approaches by a large margin.
Collapse
|
45
|
Kügler D, Sehring J, Stefanov A, Stenin I, Kristin J, Klenzner T, Schipper J, Mukhopadhyay A. i3PosNet: instrument pose estimation from X-ray in temporal bone surgery. Int J Comput Assist Radiol Surg 2020; 15:1137-1145. [PMID: 32440956 PMCID: PMC7316684 DOI: 10.1007/s11548-020-02157-4] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/21/2019] [Accepted: 04/03/2020] [Indexed: 11/03/2022]
Abstract
PURPOSE Accurate estimation of the position and orientation (pose) of surgical instruments is crucial for delicate minimally invasive temporal bone surgery. Current techniques either lack accuracy or suffer from line-of-sight constraints (conventional tracking systems), or expose the patient to prohibitive ionizing radiation (intra-operative CT). A possible solution is to capture the instrument with a C-arm at irregular intervals and recover the pose from the image. METHODS i3PosNet infers the position and orientation of instruments from images using a pose estimation network. The framework considers localized patches and outputs pseudo-landmarks, from which the pose is reconstructed by geometric considerations. RESULTS We show that i3PosNet reaches errors of [Formula: see text] mm. It outperforms conventional image registration-based approaches, reducing average and maximum errors by at least two thirds. i3PosNet trained on synthetic images generalizes to real X-rays without any further adaptation. CONCLUSION The translation of deep learning-based methods to surgical applications is difficult, because large representative datasets for training and testing are not available. This work empirically shows sub-millimeter pose estimation trained solely on synthetic training data.
Collapse
Affiliation(s)
- David Kügler
- Department of Computer Science, Technische Universität Darmstadt, Darmstadt, Germany
- German Center for Neurodegenerative Diseases (DZNE) e.V., Bonn, Germany
- Jannik Sehring
- Department of Computer Science, Technische Universität Darmstadt, Darmstadt, Germany
- Andrei Stefanov
- Department of Computer Science, Technische Universität Darmstadt, Darmstadt, Germany
- Igor Stenin
- ENT Clinic, University Düsseldorf, Düsseldorf, Germany
- Julia Kristin
- ENT Clinic, University Düsseldorf, Düsseldorf, Germany
- Jörg Schipper
- ENT Clinic, University Düsseldorf, Düsseldorf, Germany
- Anirban Mukhopadhyay
- Department of Computer Science, Technische Universität Darmstadt, Darmstadt, Germany
Collapse
|
46
|
Li Y, Han G, Wu X, Li Z, Zhao K, Zhang Z, Liu Z, Liang C. Normalization of multicenter CT radiomics by a generative adversarial network method. Phys Med Biol 2020; 66. [PMID: 32209747 DOI: 10.1088/1361-6560/ab8319] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2019] [Accepted: 03/25/2020] [Indexed: 12/15/2022]
Abstract
PURPOSE To reduce the variability of radiomics features caused by computed tomography (CT) imaging protocols by using a generative adversarial network (GAN) method. MATERIAL AND METHODS In this study, we defined a set of images acquired with a certain imaging protocol as a domain, and a total of 4 domains (A, B, C, and T [target]) from 3 different scanners were included. In dataset #1, 60 patients were collected for each domain. Datasets #2 and #3 included 40 slices of spleen for each of the domains. In dataset #4, slices of 3 colorectal cancer groups (n = 28, 38, and 32) were separately retrieved from 3 different scanners, and each group contained short-term and long-term survivors. 77 features were extracted for evaluation by comparing feature distributions. First, we trained the GAN model on dataset #1 to learn how to normalize images from domains A, B, and C to T. Next, by comparing feature distributions between normalized images of the different domains, we identified the appropriate model and assessed it on datasets #2 and #3, respectively. Finally, to investigate whether our proposed method could facilitate multicenter radiomics analysis, we built a lasso classifier to distinguish short-term from long-term survivors based on a certain group in dataset #4, and validated it on the other two groups, forming a cross-validation between groups in dataset #4. RESULTS After normalization, the percentage of aligned features between domains A vs T, B vs T, and C vs T increased from 10.4%, 18.2%, and 50.1% to 93.5%, 89.6%, and 77.9%, respectively. In the cross-validation results, the average improvement of the area under the receiver operating characteristic curve reached 11% (3%-32%). CONCLUSION Our proposed GAN-based normalization method can reduce the variability of radiomics features caused by different CT imaging protocols and facilitate multicenter radiomics analysis.
Collapse
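A "lasso classifier" for a binary survival label is commonly realized as L1-penalized logistic regression; a minimal scikit-learn sketch on stand-in data, with hyperparameters assumed rather than taken from the paper:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X, y = rng.random((60, 77)), rng.integers(0, 2, 60)  # stand-in: 77 radiomics features

clf = make_pipeline(StandardScaler(),
                    LogisticRegression(penalty="l1", solver="liblinear", C=1.0))
clf.fit(X, y)  # sparse coefficients select a subset of the 77 features
```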
Affiliation(s)
- Yajun Li
- South China University of Technology, Guangzhou, Guangdong, China
- Guoqiang Han
- College of Electronic and Information Engineering, South China University of Technology, Guangzhou, China
- Xiaomei Wu
- South China University of Technology, Guangzhou, Guangdong, China
- Zhenhui Li
- Yunnan Cancer Hospital, Kunming, Yunnan, China
- Ke Zhao
- South China University of Technology, Guangzhou, Guangdong, China
- Zaiyi Liu
- Department of Radiology, Guangdong General Hospital, Guangdong Academy of Medical Sciences, Guangzhou, Guangdong, China
- Changhong Liang
- Radiology, Guangdong General Hospital, Guangzhou 510080, China
Collapse
|
47
|
Combining Multi-Sequence and Synthetic Images for Improved Segmentation of Late Gadolinium Enhancement Cardiac MRI. STATISTICAL ATLASES AND COMPUTATIONAL MODELS OF THE HEART. MULTI-SEQUENCE CMR SEGMENTATION, CRT-EPIGGY AND LV FULL QUANTIFICATION CHALLENGES 2020. [DOI: 10.1007/978-3-030-39074-7_31] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/01/2023]
|
48
|
Wang S, Yu L, Yang X, Fu CW, Heng PA. Patch-Based Output Space Adversarial Learning for Joint Optic Disc and Cup Segmentation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:2485-2495. [PMID: 30794170 DOI: 10.1109/tmi.2019.2899910] [Citation(s) in RCA: 112] [Impact Index Per Article: 22.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/26/2023]
Abstract
Glaucoma is a leading cause of irreversible blindness. Accurate segmentation of the optic disc (OD) and optic cup (OC) from fundus images is beneficial to glaucoma screening and diagnosis. Recently, convolutional neural networks have demonstrated promising progress in joint OD and OC segmentation. However, affected by the domain shift among different datasets, deep networks are severely hindered in generalizing across different scanners and institutions. In this paper, we present a novel patch-based output space adversarial learning framework (pOSAL) to jointly and robustly segment the OD and OC from different fundus image datasets. We first devise a lightweight and efficient segmentation network as a backbone. Considering the specific morphology of the OD and OC, a novel morphology-aware segmentation loss is proposed to guide the network to generate accurate and smooth segmentations. Our pOSAL framework then exploits unsupervised domain adaptation to address the domain shift challenge by encouraging segmentations in the target domain to be similar to those in the source domain. Since a whole-segmentation-based adversarial loss is insufficient to drive the network to capture segmentation details, we further design pOSAL in a patch-based fashion to enable fine-grained discrimination on local segmentation details. We extensively evaluate our pOSAL framework and demonstrate its effectiveness in improving segmentation performance on three public retinal fundus image datasets, i.e., Drishti-GS, RIM-ONE-r3, and REFUGE. Furthermore, our pOSAL framework achieved first place in the OD and OC segmentation tasks of the MICCAI 2018 Retinal Fundus Glaucoma Challenge.
Collapse
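Patch-based discrimination over the output space can be realized with a PatchGAN-style fully convolutional discriminator applied to the segmentation probability maps; the channel widths below are assumptions, not the pOSAL architecture:

```python
import torch.nn as nn

# Each spatial location of the final feature map scores one local patch
# of the (OD, OC) probability maps as source-like or target-like.
patch_disc = nn.Sequential(
    nn.Conv2d(2, 64, 4, stride=2, padding=1),    # input: 2 channels (OD and OC maps)
    nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(64, 128, 4, stride=2, padding=1),
    nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(128, 1, 4, stride=1, padding=1),   # one logit per local patch
)
```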
|
49
|
Yang J, Dvornek NC, Zhang F, Zhuang J, Chapiro J, Lin M, Duncan JS. Domain-Agnostic Learning with Anatomy-Consistent Embedding for Cross-Modality Liver Segmentation. ... IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS. IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION 2019; 2019:10.1109/iccvw.2019.00043. [PMID: 34676308 PMCID: PMC8528125 DOI: 10.1109/iccvw.2019.00043] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/11/2022]
Abstract
Domain Adaptation (DA) has the potential to greatly help the generalization of deep learning models. However, the current literature usually assumes that knowledge is transferred from the source domain to a specific known target domain. Domain Agnostic Learning (DAL) proposes the new task of transferring knowledge from the source domain to data from multiple heterogeneous target domains. In this work, we propose the Domain-Agnostic Learning framework with Anatomy-Consistent Embedding (DALACE), which works on both domain-transfer and task-transfer to learn a disentangled representation, aiming not only to be invariant to different modalities but also to preserve anatomical structures for the DA and DAL tasks in cross-modality liver segmentation. We validated and compared our model with state-of-the-art methods, including CycleGAN, Task Driven Generative Adversarial Network (TD-GAN), and Domain Adaptation via Disentangled Representations (DADR). For the DA task, our DALACE model outperformed CycleGAN, TD-GAN, and DADR with a DSC of 0.847 compared to 0.721, 0.793 and 0.806. For the DAL task, our model improved performance with a DSC of 0.794, up from 0.522, 0.719 and 0.742 by CycleGAN, TD-GAN, and DADR. Further, we visualized the success of disentanglement, which adds human interpretability to the learned representations. Through ablation analysis, we showed the concrete benefits of disentanglement for downstream tasks and the role of supervision in obtaining a better disentangled representation: the proposed Domain-Agnostic Module (DAM) enforces segmentation consistency to be invariant to domains, while the proposed Anatomy-Preserving Module (APM) preserves anatomical information.
Collapse
Affiliation(s)
- Junlin Yang
- Department of Biomedical Engineering, Yale University
- Nicha C Dvornek
- Department of Radiology & Biomedical Imaging, Yale School of Medicine
- Fan Zhang
- Department of Biomedical Engineering, Yale University
- Julius Chapiro
- Department of Radiology & Biomedical Imaging, Yale School of Medicine
- MingDe Lin
- Department of Radiology & Biomedical Imaging, Yale School of Medicine
- James S Duncan
- Department of Biomedical Engineering, Yale University
- Department of Electrical Engineering, Yale University
- Department of Radiology & Biomedical Imaging, Yale School of Medicine
- Department of Statistics & Data Science, Yale University
Collapse
|
50
|
Unberath M, Zaech JN, Gao C, Bier B, Goldmann F, Lee SC, Fotouhi J, Taylor R, Armand M, Navab N. Enabling machine learning in X-ray-based procedures via realistic simulation of image formation. Int J Comput Assist Radiol Surg 2019; 14:1517-1528. [PMID: 31187399 PMCID: PMC7297499 DOI: 10.1007/s11548-019-02011-2] [Citation(s) in RCA: 31] [Impact Index Per Article: 6.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2019] [Accepted: 06/03/2019] [Indexed: 12/19/2022]
Abstract
PURPOSE Machine learning-based approaches now outperform competing methods in most disciplines relevant to diagnostic radiology. Image-guided procedures, however, have not yet benefited substantially from the advent of deep learning, in particular because images for procedural guidance are not archived and are thus unavailable for learning, and even if they were available, annotation would be a severe challenge due to the vast amounts of data. In silico simulation of X-ray images from 3D CT is an interesting alternative to using true clinical radiographs, since labeling is comparably easy and potentially readily available. METHODS We extend our framework for fast and realistic simulation of fluoroscopy from high-resolution CT, called DeepDRR, with tool modeling capabilities. The framework is publicly available, open source, and tightly integrated with the software platforms native to deep learning, i.e., Python, PyTorch, and PyCUDA. DeepDRR relies on machine learning for material decomposition and scatter estimation in 3D and 2D, respectively, but uses analytic forward projection and noise injection to keep computation times acceptable. On two X-ray image analysis tasks, namely (1) anatomical landmark detection and (2) segmentation and localization of robot end-effectors, we demonstrate that convolutional neural networks (ConvNets) trained on DeepDRRs generalize well to real data without re-training or domain adaptation. To this end, we use the exact same training protocol to train ConvNets on naïve DRRs and on DeepDRRs and compare their performance on data of cadaveric specimens acquired using a clinical C-arm X-ray system. RESULTS Our findings are consistent across both tasks. All ConvNets performed similarly well when evaluated on the respective synthetic testing set. However, when applied to real radiographs of cadaveric anatomy, ConvNets trained on DeepDRRs significantly outperformed ConvNets trained on naïve DRRs ([Formula: see text]). CONCLUSION Our findings for both tasks are positive and promising. Combined with complementary approaches, such as image style transfer, the proposed framework for fast and realistic simulation of fluoroscopy from CT contributes to promoting the implementation of machine learning in X-ray-guided procedures. This paradigm shift has the potential to revolutionize intra-operative image analysis and simplify surgical workflows.
Collapse
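Noise injection on an analytically projected image can be as simple as Poisson resampling of simulated photon counts; DeepDRR's actual noise model is more detailed, so treat this as a toy under the stated assumption of a [0, 1] intensity image:

```python
import numpy as np

def inject_quantum_noise(drr: np.ndarray, photons_per_pixel: float = 1e4) -> np.ndarray:
    """Poisson-resample a [0, 1] intensity image to mimic quantum noise."""
    rng = np.random.default_rng(0)
    counts = rng.poisson(np.clip(drr, 0.0, 1.0) * photons_per_pixel)
    return counts.astype(np.float64) / photons_per_pixel
```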
Affiliation(s)
- Mathias Unberath
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Laboratory for Computational Sensing + Robotics, Johns Hopkins University, Baltimore, MD, USA
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA
- Jan-Nico Zaech
- Laboratory for Computational Sensing + Robotics, Johns Hopkins University, Baltimore, MD, USA
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA
- Cong Gao
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Laboratory for Computational Sensing + Robotics, Johns Hopkins University, Baltimore, MD, USA
- Bastian Bier
- Laboratory for Computational Sensing + Robotics, Johns Hopkins University, Baltimore, MD, USA
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA
- Florian Goldmann
- Laboratory for Computational Sensing + Robotics, Johns Hopkins University, Baltimore, MD, USA
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA
- Sing Chun Lee
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Laboratory for Computational Sensing + Robotics, Johns Hopkins University, Baltimore, MD, USA
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA
- Javad Fotouhi
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Laboratory for Computational Sensing + Robotics, Johns Hopkins University, Baltimore, MD, USA
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA
- Russell Taylor
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Laboratory for Computational Sensing + Robotics, Johns Hopkins University, Baltimore, MD, USA
- Mehran Armand
- Laboratory for Computational Sensing + Robotics, Johns Hopkins University, Baltimore, MD, USA
- Johns Hopkins University Applied Physics Laboratory, Laurel, MD, USA
- Nassir Navab
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Laboratory for Computational Sensing + Robotics, Johns Hopkins University, Baltimore, MD, USA
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA
Collapse
|