1
Shiri I, Amini M, Yousefirizi F, Vafaei Sadr A, Hajianfar G, Salimi Y, Mansouri Z, Jenabi E, Maghsudi M, Mainta I, Becker M, Rahmim A, Zaidi H. Information fusion for fully automated segmentation of head and neck tumors from PET and CT images. Med Phys 2024; 51:319-333. [PMID: 37475591] [DOI: 10.1002/mp.16615]
Abstract
BACKGROUND: PET/CT images combining anatomic and metabolic data provide complementary information that can improve clinical task performance. PET image segmentation algorithms exploiting the available multi-modal information are still lacking.
PURPOSE: Our study aimed to assess the performance of PET and CT image fusion for gross tumor volume (GTV) segmentation of head and neck cancers (HNCs) utilizing conventional, deep learning (DL), and output-level voting-based fusions.
METHODS: The current study is based on a total of 328 histologically confirmed HNCs from six different centers. The images were automatically cropped to a 200 × 200 head and neck region box, and CT and PET images were normalized for further processing. Eighteen conventional image-level fusions were implemented. In addition, a modified U2-Net architecture was used as the baseline DL fusion model. Three different input-, layer-, and decision-level information fusions were used. Simultaneous truth and performance level estimation (STAPLE) and majority voting were employed to merge different segmentation outputs (from PET and from image-level and network-level fusions), that is, output-level information fusion (voting-based fusion). Different networks were trained in a 2D manner with a batch size of 64. Twenty percent of the dataset, stratified by center (20% in each center), was used for final result reporting. Different standard segmentation metrics and conventional PET metrics, such as SUV, were calculated.
RESULTS: Among single modalities, PET had a reasonable performance with a Dice score of 0.77 ± 0.09, while CT did not perform acceptably and reached a Dice score of only 0.38 ± 0.22. Conventional fusion algorithms obtained Dice scores in the range of 0.76-0.81, with guided-filter-based context enhancement (GFCE) at the low end, and anisotropic diffusion and Karhunen-Loeve transform fusion (ADF), multi-resolution singular value decomposition (MSVD), and multi-level image decomposition based on latent low-rank representation (MDLatLRR) at the high end. All DL fusion models achieved Dice scores of 0.80. Output-level voting-based models outperformed all other models, achieving superior results with a Dice score of 0.84 for Majority_ImgFus, Majority_All, and Majority_Fast. A mean error of almost zero was achieved for all fusions using SUVpeak, SUVmean, and SUVmedian.
CONCLUSION: PET/CT information fusion adds significant value to segmentation tasks, considerably outperforming PET-only and CT-only methods. In addition, both conventional image-level and DL fusions achieve competitive results. Meanwhile, output-level fusion using majority voting over several algorithms yields statistically significant improvements in the segmentation of HNC.
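As an illustration of the output-level fusion and the evaluation metric described above, the following Python sketch implements per-voxel majority voting over binary masks and the Dice score. It is not the authors' code; mask shapes and the random example data are assumptions.

```python
# Illustrative sketch only (not the published pipeline): majority voting over
# binary segmentation masks and the Dice similarity coefficient.
import numpy as np

def majority_vote(masks):
    """Fuse binary masks of identical shape by per-voxel majority voting."""
    stack = np.stack(masks).astype(np.uint8)
    return (2 * stack.sum(axis=0) > len(masks)).astype(np.uint8)

def dice(pred, ref, eps=1e-8):
    """Dice score between two binary masks."""
    inter = np.logical_and(pred, ref).sum()
    return 2.0 * inter / (pred.sum() + ref.sum() + eps)

# Hypothetical example: three segmentation outputs fused into one mask.
rng = np.random.default_rng(0)
outputs = [rng.integers(0, 2, size=(200, 200)) for _ in range(3)]
fused = majority_vote(outputs)
print(dice(fused, outputs[0]))
```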
Affiliation(s)
- Isaac Shiri: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Mehdi Amini: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Fereshteh Yousefirizi: Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, British Columbia, Canada
- Alireza Vafaei Sadr: Institute of Pathology, RWTH Aachen University Hospital, Aachen, Germany; Department of Public Health Sciences, College of Medicine, The Pennsylvania State University, Hershey, USA
- Ghasem Hajianfar: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Yazdan Salimi: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Zahra Mansouri: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Elnaz Jenabi: Research Center for Nuclear Medicine, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Mehdi Maghsudi: Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Sciences, Tehran, Iran
- Ismini Mainta: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Minerva Becker: Service of Radiology, Geneva University Hospital, Geneva, Switzerland
- Arman Rahmim: Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, British Columbia, Canada; Department of Radiology and Physics, University of British Columbia, Vancouver, Canada
- Habib Zaidi: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland; Geneva University Neurocenter, Geneva University, Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
2
Bhure U, Cieciera M, Lehnick D, Del Sol Pérez Lago M, Grünig H, Lima T, Roos JE, Strobel K. Incorporation of CAD (computer-aided detection) with thin-slice lung CT in routine 18F-FDG PET/CT imaging read-out protocol for detection of lung nodules. Eur J Hybrid Imaging 2023; 7:17. [PMID: 37718372] [PMCID: PMC10505603] [DOI: 10.1186/s41824-023-00177-2]
Abstract
OBJECTIVE: To evaluate the detection rate and performance of 18F-FDG PET alone (PET), the combination of PET and low-dose thick-slice CT (PET/lCT), PET and diagnostic thin-slice CT (PET/dCT), and additional computer-aided detection (PET/dCT/CAD) for lung nodules (LN)/metastases in tumor patients. Inter-reader agreement and the time required for the different techniques were also assessed.
METHODS: In 100 tumor patients (56 male, 44 female; age range: 22-93 years, mean age: 60 years), 18F-FDG PET images, low-dose CT with shallow breathing (5 mm slice thickness), and diagnostic thin-slice CT (1 mm slice thickness) in full inspiration were retrospectively evaluated by three readers with variable experience (junior, mid-level, and senior) for the presence of lung nodules/metastases and additionally analyzed with CAD. The time taken for each analysis and the number of nodules detected were recorded. Sensitivity, specificity, positive and negative predictive values, accuracy, and receiver operating characteristic (ROC) analysis were calculated for each technique. Histopathology and/or imaging follow-up served as the reference standard for the diagnosis of metastases.
RESULTS: On average, the three readers detected 40 LN in 17 patients with PET only, 121 LN in 37 patients with lCT, 283 LN in 60 patients with dCT, and 282 LN in 53 patients with CAD. On average, CAD detected 49 additional LN missed by the three readers without CAD, whereas CAD overall missed 53 LN. There was very good inter-reader agreement regarding the diagnosis of metastases for all four techniques (kappa: 0.84-0.93). The average time required for the evaluation of LN with PET, lCT, dCT, and CAD was 25, 31, 60, and 40 s, respectively; CAD assistance led to an average 33% reduction in the time required to evaluate lung nodules compared to dCT. The time-saving effect was greatest for the least experienced reader. Regarding the diagnosis of metastases, the combined sensitivity/specificity of all readers was 47.8%/96.2% for PET, 80.0%/81.9% for PET/lCT, 100%/56.7% for PET/dCT, and 95.6%/64.3% for PET/CAD. No significant difference was observed in the ROC AUC (area under the curve) between the imaging methods.
CONCLUSION: Implementation of CAD for the detection of lung nodules/metastases in routine 18F-FDG PET/CT read-out is feasible. The combination of diagnostic thin-slice CT and CAD significantly increases the detection rate of lung nodules in tumor patients compared to the standard PET/CT read-out. PET combined with low-dose CT showed the best balance between sensitivity and specificity regarding the per-patient diagnosis of metastases. CAD reduces the time required for lung nodule/metastasis detection, especially for less experienced readers.
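For reference, the per-patient performance figures quoted above follow from the standard confusion-matrix definitions; the short Python sketch below computes them from hypothetical counts (the counts are illustrative, not the study's data).

```python
# Illustrative only: per-patient detection metrics from a 2x2 confusion table.
def detection_metrics(tp, fn, tn, fp):
    """Return sensitivity, specificity, PPV, NPV, and accuracy."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fn + tn + fp),
    }

# Hypothetical counts (not the study's data), e.g. for one read-out technique.
print(detection_metrics(tp=44, fn=2, tn=33, fp=21))
```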
Affiliation(s)
- Ujwal Bhure: Department of Nuclear Medicine and Radiology, Cantonal Hospital Lucerne, Lucerne, Switzerland
- Matthäus Cieciera: Department of Nuclear Medicine and Radiology, Cantonal Hospital Lucerne, Lucerne, Switzerland
- Dirk Lehnick: Faculty of Health Sciences and Medicine, University of Lucerne, Frohburgstrasse 3, 6002, Lucerne, Switzerland; Clinical Trial Unit Central Switzerland, University of Lucerne, 6002, Lucerne, Switzerland
- Hannes Grünig: Department of Nuclear Medicine and Radiology, Cantonal Hospital Lucerne, Lucerne, Switzerland
- Thiago Lima: Department of Nuclear Medicine and Radiology, Cantonal Hospital Lucerne, Lucerne, Switzerland
- Justus E Roos: Department of Nuclear Medicine and Radiology, Cantonal Hospital Lucerne, Lucerne, Switzerland
- Klaus Strobel: Department of Nuclear Medicine and Radiology, Cantonal Hospital Lucerne, Lucerne, Switzerland; Division of Nuclear Medicine, Department of Nuclear Medicine and Radiology, Cantonal Hospital Lucerne, 6000, Lucerne 16, Switzerland
3

4
Huang J, Mou T, O’Regan K, O’Sullivan F. Spatial Auto-Regressive Analysis of Correlation in 3-D PET With Application to Model-Based Simulation of Data. IEEE Trans Med Imaging 2020; 39:964-974. [PMID: 31478845] [PMCID: PMC7241306] [DOI: 10.1109/tmi.2019.2938411]
Abstract
When a scanner is installed and begins to be used operationally, its actual performance may deviate somewhat from the predictions made at the design stage. Thus, it is recommended that routine quality assurance (QA) measurements be used to provide an operational understanding of scanning properties. While QA data are primarily used to evaluate sensitivity and bias patterns, there is a possibility to also make use of such data sets for a more refined understanding of the 3-D scanning properties. Building on some recent work on analysis of the distributional characteristics of iteratively reconstructed PET data, we construct an auto-regression model for analysis of the 3-D spatial auto-covariance structure of iteratively reconstructed data, after normalization. Appropriate likelihood-based statistical techniques for estimation of the auto-regression model coefficients are described. The fitted model leads to a simple process for approximate simulation of scanner performance, one that is readily implemented in an R script. The analysis provides a practical mechanism for evaluating the operational error characteristics of iteratively reconstructed PET images. Simulation studies are used for validation. The approach is illustrated on QA data from an operational clinical scanner and numerical phantom data. We also demonstrate the potential for use of these techniques, as a form of model-based bootstrapping, to provide assessments of measurement uncertainties in variables derived from clinical FDG-PET scans. This is illustrated using data from a clinical scan in a lung cancer patient, after a 3-minute acquisition has been re-binned into three consecutive 1-minute time-frames. An uncertainty measure for the tumor SUVmax value is obtained. The methodology is seen to be practical and could be a useful support for quantitative decision making based on PET data.
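A rough Python sketch of the general idea follows; the published method is an R implementation modelling the full 3-D auto-covariance, whereas this simplified example fits a separable first-order auto-regression per axis and uses it to simulate correlated noise, with all shapes and data assumed.

```python
# Simplified illustration (not the authors' model): fit a first-order spatial
# auto-regression coefficient along each axis of a normalized 3-D residual
# volume, then use the coefficients to simulate correlated noise.
import numpy as np

def fit_ar1(volume, axis):
    """Least-squares AR(1) coefficient between neighbouring voxels along one axis."""
    x = np.moveaxis(volume, axis, 0)
    prev, curr = x[:-1].ravel(), x[1:].ravel()
    return float(np.dot(prev, curr) / np.dot(prev, prev))

def simulate_ar1(shape, coeffs, rng):
    """Generate Gaussian noise and add the per-axis AR structure recursively."""
    vol = rng.standard_normal(shape)
    for axis, rho in enumerate(coeffs):
        vol = np.moveaxis(vol, axis, 0)
        for i in range(1, vol.shape[0]):
            vol[i] = rho * vol[i - 1] + np.sqrt(1.0 - rho ** 2) * vol[i]
        vol = np.moveaxis(vol, 0, axis)
    return vol

rng = np.random.default_rng(1)
qa = rng.standard_normal((32, 32, 16))      # stand-in for normalized QA residuals
coeffs = [fit_ar1(qa, a) for a in range(3)]
sim = simulate_ar1(qa.shape, coeffs, rng)
```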
Affiliation(s)
- Jian Huang: Department of Statistics, University College Cork, Ireland
- Tian Mou: Department of Statistics, University College Cork, Ireland
- Kevin O’Regan: Department of Radiology, Cork University Hospital, Ireland
5
Kumar A, Fulham M, Feng D, Kim J. Co-Learning Feature Fusion Maps from PET-CT Images of Lung Cancer. IEEE Trans Med Imaging 2019; 39:204-217. [PMID: 31217099] [DOI: 10.1109/tmi.2019.2923601]
Abstract
The analysis of multi-modality positron emission tomography and computed tomography (PET-CT) images for computer-aided diagnosis applications (e.g., detection and segmentation) requires combining the sensitivity of PET to detect abnormal regions with anatomical localization from CT. Current methods for PET-CT image analysis either process the modalities separately or fuse information from each modality based on knowledge about the image analysis task. These methods generally do not consider the spatially varying visual characteristics that encode different information across the different modalities, which have different priorities at different locations. For example, high abnormal PET uptake in the lungs is more meaningful for tumor detection than physiological PET uptake in the heart. Our aim is to improve fusion of the complementary information in multi-modality PET-CT with a new supervised convolutional neural network (CNN) that learns to fuse complementary information for multi-modality medical image analysis. Our CNN first encodes modality-specific features and then uses them to derive a spatially varying fusion map that quantifies the relative importance of each modality's features across different spatial locations. These fusion maps are then multiplied with the modality-specific feature maps to obtain a representation of the complementary multi-modality information at different locations, which can then be used for image analysis. We evaluated the ability of our CNN to detect and segment multiple regions (lungs, mediastinum, tumors) with different fusion requirements using a dataset of PET-CT images of lung cancer. We compared our method to baseline techniques for multi-modality image fusion (fused inputs (FS), multi-branch (MB) techniques, and multichannel (MC) techniques) and segmentation. Our findings show that our CNN had a significantly higher foreground detection accuracy (99.29%, p < 0.05) than the fusion baselines (FS: 99.00%, MB: 99.08%, MC: 98.92%) and a significantly higher Dice score (63.85%) than recent PET-CT tumor segmentation methods.
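The sketch below illustrates the core idea of a learned, spatially varying fusion map in PyTorch; channel counts, layer choices, and names are assumptions and do not reproduce the published architecture.

```python
# Minimal PyTorch sketch of a spatially varying fusion map (illustrative only).
import torch
import torch.nn as nn

class FusionMapBlock(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.pet_enc = nn.Conv2d(1, channels, 3, padding=1)  # modality-specific encoders
        self.ct_enc = nn.Conv2d(1, channels, 3, padding=1)
        self.fuse = nn.Conv2d(2 * channels, 2, 1)             # two maps: one per modality

    def forward(self, pet, ct):
        f_pet = torch.relu(self.pet_enc(pet))
        f_ct = torch.relu(self.ct_enc(ct))
        # Per-location importance of each modality, normalized across modalities.
        weights = torch.softmax(self.fuse(torch.cat([f_pet, f_ct], dim=1)), dim=1)
        return weights[:, :1] * f_pet + weights[:, 1:] * f_ct

fused = FusionMapBlock()(torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64))
```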
6
Wang Y, Zhou L, Yu B, Wang L, Zu C, Lalush DS, Lin W, Wu X, Zhou J, Shen D. 3D Auto-Context-Based Locality Adaptive Multi-Modality GANs for PET Synthesis. IEEE Trans Med Imaging 2019; 38:1328-1339. [PMID: 30507527] [PMCID: PMC6541547] [DOI: 10.1109/tmi.2018.2884053]
Abstract
Positron emission tomography (PET) has been substantially used recently. To minimize the potential health risk caused by the tracer radiation inherent to PET scans, it is of great interest to synthesize the high-quality PET image from the low-dose one to reduce the radiation exposure. In this paper, we propose a 3D auto-context-based locality adaptive multi-modality generative adversarial networks model (LA-GANs) to synthesize the high-quality FDG PET image from the low-dose one with the accompanying MRI images that provide anatomical information. Our work has four contributions. First, different from the traditional methods that treat each image modality as an input channel and apply the same kernel to convolve the whole image, we argue that the contributions of different modalities could vary at different image locations, and therefore a unified kernel for a whole image is not optimal. To address this issue, we propose a locality adaptive strategy for multi-modality fusion. Second, we utilize a 1 × 1 × 1 kernel to learn this locality adaptive fusion so that the number of additional parameters incurred by our method is kept minimum. Third, the proposed locality adaptive fusion mechanism is learned jointly with the PET image synthesis in a 3D conditional GANs model, which generates high-quality PET images by employing large-sized image patches and hierarchical features. Fourth, we apply the auto-context strategy to our scheme and propose an auto-context LA-GANs model to further refine the quality of synthesized images. Experimental results show that our method outperforms the traditional multi-modality fusion methods used in deep networks, as well as the state-of-the-art PET estimation approaches.
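The locality-adaptive mixing step can be sketched in a few lines of PyTorch, assuming one low-dose PET and one MRI channel; this illustrates only the 1 × 1 × 1 fusion idea, not the full LA-GANs model.

```python
# Hedged sketch of locality-adaptive fusion: a 1x1x1 convolution predicts
# per-voxel mixing weights for two 3-D input volumes (illustrative only).
import torch
import torch.nn as nn

class LocalityAdaptiveFusion(nn.Module):
    def __init__(self):
        super().__init__()
        self.mix = nn.Conv3d(2, 2, kernel_size=1)  # 1x1x1 kernel keeps extra parameters minimal

    def forward(self, low_dose_pet, mri):
        x = torch.cat([low_dose_pet, mri], dim=1)
        w = torch.softmax(self.mix(x), dim=1)       # per-voxel weight for each modality
        return w[:, :1] * low_dose_pet + w[:, 1:] * mri

fused = LocalityAdaptiveFusion()(torch.randn(1, 1, 16, 64, 64),
                                 torch.randn(1, 1, 16, 64, 64))
```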
Affiliation(s)
- Yan Wang: School of Computer Science, Sichuan University, China
- Luping Zhou: School of Electrical and Information Engineering, University of Sydney, Australia
- Biting Yu: School of Computing and Information Technology, University of Wollongong, Australia
- Lei Wang: School of Computing and Information Technology, University of Wollongong, Australia
- Chen Zu: School of Computing and Information Technology, University of Wollongong, Australia
- David S. Lalush: Department of Biomedical Engineering, University of North Carolina at Chapel Hill and North Carolina State University, USA
- Weili Lin: Department of Radiology and BRIC, University of North Carolina at Chapel Hill, USA
- Xi Wu: School of Computer Science, Chengdu University of Information Technology, China
- Jiliu Zhou: School of Computer Science, Chengdu University of Information Technology, China
- Dinggang Shen: IDEA Lab, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, USA; Department of Brain and Cognitive Engineering, Korea University, Republic of Korea
7
Automatic localization of normal active organs in 3D PET scans. Comput Med Imaging Graph 2018; 70:111-118. [DOI: 10.1016/j.compmedimag.2018.09.008]
8
Du J, Li W, Xiao B. Fusion of anatomical and functional images using parallel saliency features. Inf Sci (N Y) 2018. [DOI: 10.1016/j.ins.2017.12.008]
9

10
Teramoto A, Fujita H, Yamamuro O, Tamaki T. Automated detection of pulmonary nodules in PET/CT images: Ensemble false-positive reduction using a convolutional neural network technique. Med Phys 2016; 43:2821-2827. [DOI: 10.1118/1.4948498]
11
Zhang F, Song Y, Cai W, Liu S, Liu S, Pujol S, Kikinis R, Xia Y, Fulham MJ, Feng DD, Alzheimer's Disease Neuroimaging Initiative. Pairwise Latent Semantic Association for Similarity Computation in Medical Imaging. IEEE Trans Biomed Eng 2015; 63:1058-1069. [PMID: 26372117] [DOI: 10.1109/tbme.2015.2478028]
Abstract
Retrieving medical images that present similar diseases is an active research area for diagnostics and therapy. However, it can be problematic given the visual variations between anatomical structures. In this paper, we propose a new feature extraction method for similarity computation in medical imaging. Instead of the low-level visual appearance, we design a CCA-PairLDA feature representation method to capture the similarity between images with high-level semantics. First, we extract the PairLDA topics to represent an image as a mixture of latent semantic topics in an image pair context. Second, we generate a CCA-correlation model to represent the semantic association between an image pair for similarity computation. While PairLDA adjusts the latent topics for all image pairs, CCA-correlation helps to associate an individual image pair. In this way, the semantic descriptions of an image pair are closely correlated, and naturally correspond to similarity computation between images. We evaluated our method on two public medical imaging datasets for image retrieval and showed improved performance.
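As an illustration of the CCA step only (the PairLDA topic modelling is not reproduced), the scikit-learn sketch below correlates two sets of assumed topic vectors; all dimensions and data are hypothetical.

```python
# Illustrative sketch: canonical correlation between two images' topic vectors,
# standing in for the CCA stage of CCA-PairLDA (shapes and data are assumed).
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(2)
topics_a = rng.random((100, 20))   # hypothetical per-pair topic mixtures, image A
topics_b = rng.random((100, 20))   # hypothetical per-pair topic mixtures, image B

cca = CCA(n_components=5)
u, v = cca.fit_transform(topics_a, topics_b)
# Correlation of the projected pairs can serve as a semantic similarity score.
similarity = [np.corrcoef(u[:, k], v[:, k])[0, 1] for k in range(5)]
print(similarity)
```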
Affiliation(s)
- Fan Zhang: Biomedical and Multimedia Information Technology (BMIT) Research Group, School of Information Technologies, University of Sydney, Sydney, N.S.W., Australia
- Yang Song: Biomedical and Multimedia Information Technology (BMIT) Research Group, School of Information Technologies, University of Sydney
- Weidong Cai: Biomedical and Multimedia Information Technology (BMIT) Research Group, School of Information Technologies, University of Sydney
- Sidong Liu: Biomedical and Multimedia Information Technology (BMIT) Research Group, School of Information Technologies, University of Sydney
- Siqi Liu: Biomedical and Multimedia Information Technology (BMIT) Research Group, School of Information Technologies, University of Sydney
- Sonia Pujol: Surgical Planning Lab, Brigham & Women's Hospital, Harvard Medical School
- Ron Kikinis: Surgical Planning Lab, Brigham & Women's Hospital, Harvard Medical School
- Yong Xia: Shaanxi Key Lab of Speech and Image Information Processing, School of Computer Science and Technology, Northwestern Polytechnical University
- Michael J Fulham: Department of PET and Nuclear Medicine, Royal Prince Alfred Hospital
- David Dagan Feng: Biomedical and Multimedia Information Technology (BMIT) Research Group, School of Information Technologies, University of Sydney
12
Song Y, Cai W, Huang H, Zhou Y, Feng DD, Fulham MJ, Chen M. Large Margin Local Estimate With Applications to Medical Image Classification. IEEE Trans Med Imaging 2015; 34:1362-1377. [PMID: 25616009] [DOI: 10.1109/tmi.2015.2393954]
Abstract
Medical images usually exhibit large intra-class variation and inter-class ambiguity in the feature space, which could affect classification accuracy. To tackle this issue, we propose a new Large Margin Local Estimate (LMLE) classification model with sub-categorization based sparse representation. We first sub-categorize the reference sets of different classes into multiple clusters, to reduce feature variation within each subcategory compared to the entire reference set. Local estimates are generated for the test image using sparse representation with reference subcategories as the dictionaries. The similarity between the test image and each class is then computed by fusing the distances with the local estimates in a learning-based large margin aggregation construct to alleviate the problem of inter-class ambiguity. The derived similarities are finally used to determine the class label. We demonstrate that our LMLE model is generally applicable to different imaging modalities, and applied it to three tasks: interstitial lung disease (ILD) classification on high-resolution computed tomography (HRCT) images, phenotype binary classification and continuous regression on brain magnetic resonance (MR) imaging. Our experimental results show statistically significant performance improvements over existing popular classifiers.
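A hedged sketch of the local-estimate idea follows, using scikit-learn's Lasso for the sparse coding step; dictionary shapes, class names, and the nearest-estimate decision rule are illustrative assumptions and omit the paper's large-margin aggregation.

```python
# Rough sketch under assumed shapes: reconstruct a test sample from a class
# sub-dictionary via sparse coding, then classify by reconstruction distance.
import numpy as np
from sklearn.linear_model import Lasso

def local_estimate(test_vec, dictionary, alpha=0.05):
    """Sparse local estimate of the test vector from a subcategory's columns."""
    model = Lasso(alpha=alpha, positive=True, max_iter=5000)
    model.fit(dictionary, test_vec)
    return dictionary @ model.coef_

rng = np.random.default_rng(3)
test = rng.random(64)
class_dicts = {"ILD_A": rng.random((64, 30)), "ILD_B": rng.random((64, 30))}
dists = {c: np.linalg.norm(test - local_estimate(test, D)) for c, D in class_dicts.items()}
print(min(dists, key=dists.get))   # predicted class = smallest reconstruction distance
```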
13
Song Y, Cai W, Huang H, Zhou Y, Wang Y, Feng DD. Locality-constrained Subcluster Representation Ensemble for lung image classification. Med Image Anal 2015; 22:102-113. [PMID: 25839422] [DOI: 10.1016/j.media.2015.03.003]
Abstract
In this paper, we propose a new Locality-constrained Subcluster Representation Ensemble (LSRE) model, to classify high-resolution computed tomography (HRCT) images of interstitial lung diseases (ILDs). Medical images normally exhibit large intra-class variation and inter-class ambiguity in the feature space. Modelling of feature space separation between different classes is thus problematic and this affects the classification performance. Our LSRE model tackles this issue in an ensemble classification construct. The image set is first partitioned into subclusters based on spectral clustering with approximation-based affinity matrix. Basis representations of the test image are then generated with sparse approximation from the subclusters. These basis representations are finally fused with approximation- and distribution-based weights to classify the test image. Our experimental results on a large HRCT database show good performance improvement over existing popular classifiers.
Affiliation(s)
- Yang Song: Biomedical and Multimedia Information Technology (BMIT) Research Group, School of IT, University of Sydney, NSW 2006, Australia
- Weidong Cai: Biomedical and Multimedia Information Technology (BMIT) Research Group, School of IT, University of Sydney, NSW 2006, Australia
- Heng Huang: Department of Computer Science and Engineering, University of Texas, Arlington, TX 76019, USA
- Yun Zhou: Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
- Yue Wang: Bradley Department of Electrical and Computer Engineering, Virginia Polytechnic Institute and State University, Arlington, VA 22203, USA
- David Dagan Feng: Biomedical and Multimedia Information Technology (BMIT) Research Group, School of IT, University of Sydney, NSW 2006, Australia
14