1
Badesha AS, Frood R, Bailey MA, Coughlin PM, Scarsbrook AF. A Scoping Review of Machine-Learning Derived Radiomic Analysis of CT and PET Imaging to Investigate Atherosclerotic Cardiovascular Disease. Tomography 2024; 10:1455-1487. [PMID: 39330754] [PMCID: PMC11435603] [DOI: 10.3390/tomography10090108] Open Access
Abstract
BACKGROUND Cardiovascular disease affects the carotid arteries, coronary arteries, aorta and the peripheral arteries. Radiomics involves the extraction of quantitative data from imaging features that are imperceptible to the eye. Radiomic analysis in cardiovascular disease has largely focused on CT and MRI modalities. This scoping review aims to summarise the existing literature on radiomic analysis techniques in cardiovascular disease. METHODS The MEDLINE and Embase databases were searched for eligible studies evaluating radiomic techniques for atherosclerotic disease in living human subjects, derived from CT, MRI or PET imaging. Data on study population, imaging characteristics and radiomics methodology were extracted. RESULTS Twenty-nine studies comprising 5753 patients (3752 males) were identified; 78.7% of patients were from coronary artery studies. Twenty-seven studies employed CT imaging (19 CT carotid angiography and 6 CT coronary angiography (CTCA)), and two studies used PET/CT. Manual segmentation was most frequently undertaken. Processing techniques included voxel discretisation, voxel resampling and filtration. Various shape, first-order, second-order and higher-order radiomic features were extracted. Logistic regression was the most commonly used machine-learning method. CONCLUSION Most published evidence was feasibility/proof-of-concept work. There was significant heterogeneity in image acquisition, segmentation, processing and analysis between studies. Standardised imaging acquisition protocols, adherence to published reporting guidelines and economic evaluation are needed.
Affiliation(s)
- Arshpreet Singh Badesha
- Department of Radiology, St. James's University Hospital, Leeds Teaching Hospitals NHS Trust, Leeds LS9 7TF, UK
- Russell Frood
- Department of Radiology, St. James's University Hospital, Leeds Teaching Hospitals NHS Trust, Leeds LS9 7TF, UK
- Faculty of Medicine and Health, University of Leeds, Leeds LS2 9TJ, UK
- Marc A Bailey
- Faculty of Medicine and Health, University of Leeds, Leeds LS2 9TJ, UK
- The Leeds Vascular Institute, Leeds General Infirmary, Leeds Teaching Hospitals NHS Trust, Leeds LS1 3EX, UK
- Patrick M Coughlin
- The Leeds Vascular Institute, Leeds General Infirmary, Leeds Teaching Hospitals NHS Trust, Leeds LS1 3EX, UK
- Andrew F Scarsbrook
- Department of Radiology, St. James's University Hospital, Leeds Teaching Hospitals NHS Trust, Leeds LS9 7TF, UK
- Faculty of Medicine and Health, University of Leeds, Leeds LS2 9TJ, UK
2
Zangpo D, Uehara K, Kondo K, Kato M, Yoshimiya M, Nakatome M, Iino M. Estimating age at death by Hausdorff distance analyses of the fourth lumbar vertebral bodies using 3D postmortem CT images. Forensic Sci Med Pathol 2024; 20:472-479. [PMID: 37058209] [DOI: 10.1007/s12024-023-00620-7]
Abstract
The existing methods for determining adult age from human skeletons are mostly qualitative, but a shift toward quantifying age-related skeletal morphology on a continuous scale is emerging. This study describes an intuitive variable-extraction technique and quantifies skeletal morphology as continuous data to understand its aging pattern. A total of 200 postmortem CT images from deceased individuals aged 25-99 years (130 males, 70 females) who underwent forensic death investigations were used. The 3D volume of the fourth lumbar vertebral body was segmented using the open-source software ITK-SNAP, then smoothed and post-processed in MeshLab. To measure the extent of 3D shape deformity due to aging, Hausdorff distance (HD) analysis was performed; the maximum Hausdorff distance (maxHD) was chosen as the metric and examined for its correlation with age at death. A strong, statistically significant positive correlation (P < 0.001) between maxHD and age at death was observed in both sexes (Spearman's rho = 0.742 in males; 0.729 in females). Simple linear regression yielded standard errors of estimate of 12.5 years for males and 13.1 years for females. The study demonstrated that age-related vertebral morphology can be described using the HD method, and it encourages further studies with larger sample sizes and other population backgrounds to validate the methodology.
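For context, the Hausdorff distance used in this study has a compact definition. The sketch below is illustrative only, operating on generic 3D point sets rather than the authors' segmented vertebral surface meshes:

```python
import numpy as np

def directed_hausdorff(a, b):
    """Largest distance from any point in a to its nearest neighbour in b."""
    # a: (N, 3), b: (M, 3) arrays of surface points
    pairwise = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M)
    return pairwise.min(axis=1).max()

def hausdorff(a, b):
    """Symmetric Hausdorff distance: max of the two directed distances."""
    return max(directed_hausdorff(a, b), directed_hausdorff(b, a))
```

In practice, `scipy.spatial.distance.directed_hausdorff` provides an efficient implementation; the brute-force pairwise matrix above is O(N·M) in memory and suited only to small point sets.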
Affiliation(s)
- Dawa Zangpo
- Division of Forensic Medicine, Graduate School of Medicine, Tottori University, 86 Nishi-Cho, Yonago, 683-8503, Japan
- Department of Forensic Medicine and Toxicology, Jigme Dorji Wangchuck National Referral Hospital, 11001, Thimphu, Bhutan
- Kazutake Uehara
- Department of Mechanical Engineering, National Institute of Technology, Yonago College, Yonago, 683-8502, Japan
- Katsuya Kondo
- Department of Electrical and Electronic Engineering, Faculty of Engineering, Tottori University, Tottori, 680-8552, Japan
- Momone Kato
- Division of Forensic Medicine, Graduate School of Medicine, Tottori University, 86 Nishi-Cho, Yonago, 683-8503, Japan
- Motoo Yoshimiya
- Division of Forensic Medicine, Graduate School of Medicine, Tottori University, 86 Nishi-Cho, Yonago, 683-8503, Japan
- Masato Nakatome
- Division of Forensic Medicine, Graduate School of Medicine, Tottori University, 86 Nishi-Cho, Yonago, 683-8503, Japan
- Morio Iino
- Division of Forensic Medicine, Graduate School of Medicine, Tottori University, 86 Nishi-Cho, Yonago, 683-8503, Japan
3
Najem E, Marin T, Zhuo Y, Lahoud RM, Tian F, Beddok A, Rozenblum L, Xing F, Moteabbed M, Lim R, Liu X, Woo J, Lostetter SJ, Lamane A, Chen YLE, Ma C, El Fakhri G. The role of 18F-FDG PET in minimizing variability in gross tumor volume delineation of soft tissue sarcomas. Radiother Oncol 2024; 194:110186. [PMID: 38412906] [PMCID: PMC11042980] [DOI: 10.1016/j.radonc.2024.110186]
Abstract
BACKGROUND Accurate gross tumor volume (GTV) delineation is a critical step in radiation therapy treatment planning. However, it is reader-dependent and thus susceptible to intra- and inter-reader variability. GTV delineation of soft tissue sarcoma (STS) often relies on CT and MR images. PURPOSE This study investigates the potential role of 18F-FDG PET in reducing intra- and inter-reader variability, thereby improving the reproducibility of GTV delineation in STS without incurring additional costs or radiation exposure. MATERIALS AND METHODS Three readers performed independent GTV delineation for 61 patients with STS, first using CT and MR images and then using CT, MR, and 18F-FDG PET images. Each reader performed a total of six delineation trials, three per imaging modality group. The Dice Similarity Coefficient (DSC) and Hausdorff distance (HD) were used to assess both intra- and inter-reader variability, with generated simultaneous truth and performance level estimation (STAPLE) GTVs as ground truth. Statistical analysis was performed using a Wilcoxon signed-rank test. RESULTS There was a statistically significant decrease in both intra- and inter-reader variability in GTV delineation using CT, MR, and 18F-FDG PET images vs. CT and MR images alone. This was reflected in an increased DSC score and a decreased HD for GTVs drawn from CT, MR, and 18F-FDG PET images vs. GTVs drawn from CT and MR, for all readers and across all three trials. CONCLUSION Incorporating 18F-FDG PET alongside CT and MR images decreased intra- and inter-reader variability and thereby increased the reproducibility of GTV delineation in STS.
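The Dice Similarity Coefficient used here to quantify reader agreement is defined as 2|A∩B| / (|A| + |B|). A minimal sketch over binary masks, for illustration only (the study computed DSC against STAPLE consensus volumes):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # convention: two empty masks agree perfectly
    return 2.0 * np.logical_and(a, b).sum() / denom
```

DSC ranges from 0 (no overlap) to 1 (identical masks), which is why higher DSC here indicates lower delineation variability.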
Affiliation(s)
- Elie Najem
- Gordon Center for Medical Imaging, Radiology Department, Massachusetts General Hospital - Harvard Medical School, 125 Nashua St., 25 Shattuck St., Boston, MA 02114, USA
- Thibault Marin
- Yale PET Center, Dept. of Radiology and Biomedical Imaging, Yale University, 801 Howard Avenue, New Haven, CT 06520, USA
- Yue Zhuo
- Yale PET Center, Dept. of Radiology and Biomedical Imaging, Yale University, 801 Howard Avenue, New Haven, CT 06520, USA
- Rita Maria Lahoud
- Gordon Center for Medical Imaging, Radiology Department, Massachusetts General Hospital - Harvard Medical School, 125 Nashua St., 25 Shattuck St., Boston, MA 02114, USA
- Fei Tian
- Gordon Center for Medical Imaging, Radiology Department, Massachusetts General Hospital - Harvard Medical School, 125 Nashua St., 25 Shattuck St., Boston, MA 02114, USA
- Arnaud Beddok
- Gordon Center for Medical Imaging, Radiology Department, Massachusetts General Hospital - Harvard Medical School, 125 Nashua St., 25 Shattuck St., Boston, MA 02114, USA
- Laura Rozenblum
- Gordon Center for Medical Imaging, Radiology Department, Massachusetts General Hospital - Harvard Medical School, 125 Nashua St., 25 Shattuck St., Boston, MA 02114, USA
- Fangxu Xing
- Gordon Center for Medical Imaging, Radiology Department, Massachusetts General Hospital - Harvard Medical School, 125 Nashua St., 25 Shattuck St., Boston, MA 02114, USA
- Maryam Moteabbed
- Gordon Center for Medical Imaging, Radiology Department, Massachusetts General Hospital - Harvard Medical School, 125 Nashua St., 25 Shattuck St., Boston, MA 02114, USA; Radiation Oncology Department, Massachusetts General Hospital, 55 Fruit St., Boston, MA 02114, USA
- Ruth Lim
- Yale PET Center, Dept. of Radiology and Biomedical Imaging, Yale University, 801 Howard Avenue, New Haven, CT 06520, USA
- Xiaofeng Liu
- Yale PET Center, Dept. of Radiology and Biomedical Imaging, Yale University, 801 Howard Avenue, New Haven, CT 06520, USA
- Jonghye Woo
- Gordon Center for Medical Imaging, Radiology Department, Massachusetts General Hospital - Harvard Medical School, 125 Nashua St., 25 Shattuck St., Boston, MA 02114, USA
- Stephen John Lostetter
- Gordon Center for Medical Imaging, Radiology Department, Massachusetts General Hospital - Harvard Medical School, 125 Nashua St., 25 Shattuck St., Boston, MA 02114, USA
- Abdallah Lamane
- Gordon Center for Medical Imaging, Radiology Department, Massachusetts General Hospital - Harvard Medical School, 125 Nashua St., 25 Shattuck St., Boston, MA 02114, USA
- Yen-Lin Evelyn Chen
- Gordon Center for Medical Imaging, Radiology Department, Massachusetts General Hospital - Harvard Medical School, 125 Nashua St., 25 Shattuck St., Boston, MA 02114, USA; Radiation Oncology Department, Massachusetts General Hospital, 55 Fruit St., Boston, MA 02114, USA
- Chao Ma
- Yale PET Center, Dept. of Radiology and Biomedical Imaging, Yale University, 801 Howard Avenue, New Haven, CT 06520, USA
- Georges El Fakhri
- Yale PET Center, Dept. of Radiology and Biomedical Imaging, Yale University, 801 Howard Avenue, New Haven, CT 06520, USA
4
Lyuksemburg V, Abou-Hanna J, Marshall JS, Bramlet MT, Waltz AL, Pieta Keller SM, Dwyer A, Orcutt ST. Virtual Reality for Preoperative Planning in Complex Surgical Oncology: A Single-Center Experience. J Surg Res 2023; 291:546-556. [PMID: 37540972] [DOI: 10.1016/j.jss.2023.07.001]
Abstract
INTRODUCTION Virtual reality models (VRM) are three-dimensional (3D) simulations of two-dimensional (2D) images, creating a more accurate mental representation of patient-specific anatomy. METHODS Patients who underwent complex oncologic resections between April 2018 and April 2019, and whose operations differed from preoperative plans, were identified retrospectively. VRM were built from preoperative 2D images to assess the feasibility of creating models with this technology, and preoperative plans based on 2D imaging versus VRM were compared to the final operations performed. Once VRM-based preoperative planning was deemed feasible, individuals undergoing complex oncologic resections whose operative plans were difficult to define preoperatively were enrolled prospectively from July 2019 to December 2021. Preoperative plans made from 2D imaging and from VRM by both the operating surgeon and a consulting surgeon were compared to the operation performed, and confidence in each operative plan was measured. RESULTS Twenty patients were identified (seven retrospective, 13 prospective) with tumors of the liver, pancreas, retroperitoneum, stomach, and soft tissue. Retrospectively, a VRM could not be created for one patient because of a poor-quality 2D image; the remaining 86% were successfully created and examined. VRM more clearly defined the extent of resection in 50% of successful cases. Prospectively, all VRM were successfully created. Concordance of the operative plan with VRM was higher than with 2D imaging (92% vs. 54% for the operating surgeon; 69% vs. 23% for the consulting surgeon). Confidence in the operative plan also increased with VRM compared to 2D imaging (by 15% and 8% for the operating and consulting surgeons, respectively). CONCLUSIONS Virtual reality modeling is feasible and may improve preoperative planning compared to 2D imaging. Further investigation is warranted.
Affiliation(s)
- Vadim Lyuksemburg
- Department of Surgery, University of Illinois College of Medicine at Peoria, Peoria, Illinois
- Jameil Abou-Hanna
- Department of Surgery, University of Illinois College of Medicine at Peoria, Peoria, Illinois
- J Stephen Marshall
- Department of Surgery, University of Illinois College of Medicine at Peoria, Peoria, Illinois
- Matthew T Bramlet
- Department of Pediatrics, University of Illinois College of Medicine at Peoria, Peoria, Illinois
- Alexa L Waltz
- Jump Trading Simulation & Education Center, OSF HealthCare, Peoria, Illinois
- Anthony Dwyer
- Department of Surgery, University of Illinois College of Medicine at Peoria, Peoria, Illinois
- Sonia T Orcutt
- Department of Surgery, University of Illinois College of Medicine at Peoria, Peoria, Illinois
5
Wahid KA, Lin D, Sahin O, Cislo M, Nelms BE, He R, Naser MA, Duke S, Sherer MV, Christodouleas JP, Mohamed ASR, Murphy JD, Fuller CD, Gillespie EF. Large scale crowdsourced radiotherapy segmentations across a variety of cancer anatomic sites. Sci Data 2023; 10:161. [PMID: 36949088] [PMCID: PMC10033824] [DOI: 10.1038/s41597-023-02062-w] Open Access
Abstract
Clinician-generated segmentations of tumor and healthy-tissue regions of interest (ROIs) on medical images are crucial for radiotherapy. However, interobserver segmentation variability has long been considered a significant obstacle to the delivery of consistent, high-quality radiotherapy dose, which has prompted the increasing development of automated segmentation approaches. Extant segmentation datasets, though, typically provide segmentations generated by only a limited number of annotators with varying, and often unspecified, levels of expertise. In this data descriptor, numerous clinician annotators manually generated segmentations for ROIs on computed tomography images across a variety of cancer sites (breast, sarcoma, head and neck, gynecologic, gastrointestinal; one patient per cancer site) for the Contouring Collaborative for Consensus in Radiation Oncology challenge. In total, over 200 annotators (experts and non-experts) contributed using a standardized annotation platform (ProKnow). We then converted the Digital Imaging and Communications in Medicine data into Neuroimaging Informatics Technology Initiative format with standardized nomenclature for ease of use, and generated consensus segmentations for experts and non-experts using the Simultaneous Truth and Performance Level Estimation (STAPLE) method. These standardized, structured, and easily accessible data are a valuable resource for systematically studying variability in segmentation applications.
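STAPLE itself iteratively weights each annotator by estimated sensitivity and specificity. As a deliberately simplified stand-in (not the STAPLE algorithm), a per-voxel majority vote illustrates the basic idea of fusing many annotators' masks into one consensus segmentation:

```python
import numpy as np

def majority_vote(masks):
    """Consensus binary mask: a voxel is foreground when at least half
    of the annotators marked it. masks: iterable of equally shaped binary arrays."""
    stack = np.stack([np.asarray(m, dtype=float) for m in masks])
    return (stack.mean(axis=0) >= 0.5).astype(np.uint8)
```

Unlike this sketch, STAPLE down-weights unreliable annotators, which matters precisely in the expert vs. non-expert setting this dataset was built to study.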
Affiliation(s)
- Kareem A Wahid
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Diana Lin
- Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Onur Sahin
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Michael Cislo
- Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Renjie He
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Mohammed A Naser
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Simon Duke
- Department of Radiation Oncology, Cambridge University Hospitals, Cambridge, UK
- Michael V Sherer
- Department of Radiation Medicine and Applied Sciences, University of California San Diego, La Jolla, CA, USA
- John P Christodouleas
- Department of Radiation Oncology, The University of Pennsylvania Cancer Center, Philadelphia, PA, USA
- Elekta, Atlanta, GA, USA
- Abdallah S R Mohamed
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- James D Murphy
- Department of Radiation Medicine and Applied Sciences, University of California San Diego, La Jolla, CA, USA
- Clifton D Fuller
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Erin F Gillespie
- Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Fred Hutchinson Cancer Center, Seattle, WA, USA
6
Zhu Y, Chen L, Lu W, Gong Y, Wang X. The application of the nnU-Net-based automatic segmentation model in assisting carotid artery stenosis and carotid atherosclerotic plaque evaluation. Front Physiol 2022; 13:1057800. [PMID: 36561211] [PMCID: PMC9763590] [DOI: 10.3389/fphys.2022.1057800] Open Access
Abstract
Objective: nnU-Net ("no new U-Net") is a recently developed deep-learning neural network whose advantages in medical image segmentation have attracted attention. This study aimed to investigate the value of an nnU-Net-based model for computed tomography angiography (CTA) imaging in assisting the evaluation of carotid artery stenosis (CAS) and atherosclerotic plaque. Methods: This retrospective study enrolled 93 patients with suspected CAS who underwent head and neck CTA examination, randomly divided into a training set (N = 70) and a validation set (N = 23) in a 3:1 ratio. Radiologist-annotated images in the training set were used to develop the nnU-Net model, which was subsequently tested in the validation set. Results: In the training set, the nnU-Net already performed well for CAS diagnosis and atherosclerotic plaque segmentation, and its utility was confirmed in the validation set: the Dice similarity coefficients of the nnU-Net model for segmenting background, blood vessels, calcified plaques, and dark spots reached 0.975, 0.974, 0.795, and 0.498, respectively. The nnU-Net model also showed good consistency with physicians in assessing CAS (kappa = 0.893), stenosis degree (kappa = 0.930), the number of calcified (kappa = 0.922), non-calcified (kappa = 0.768), and mixed plaques (kappa = 0.793), and the maximum thickness of calcified plaque (intraclass correlation coefficient = 0.972). In addition, evaluation time was shorter for the nnU-Net model than for the physicians (27.3 ± 4.4 s vs. 296.8 ± 81.1 s, p < 0.001). Conclusion: The nnU-Net-based automatic segmentation model shows good accuracy, reliability, and efficiency in assisting CTA-based evaluation of CAS and carotid atherosclerotic plaques.
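Cohen's kappa, used above to measure model-physician consistency, corrects raw agreement for chance: κ = (p_o − p_e) / (1 − p_e), where p_o is observed agreement and p_e the agreement expected from each rater's label frequencies. A minimal two-rater sketch, for illustration only:

```python
def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa between two raters' categorical labels."""
    n = len(ratings_a)
    labels = set(ratings_a) | set(ratings_b)
    # Observed agreement: fraction of items the raters label identically.
    p_obs = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement: product of each rater's marginal label frequencies.
    p_exp = sum((ratings_a.count(l) / n) * (ratings_b.count(l) / n)
                for l in labels)
    if p_exp == 1.0:
        return 1.0  # degenerate case: both raters always use one identical label
    return (p_obs - p_exp) / (1 - p_exp)
```

`sklearn.metrics.cohen_kappa_score` implements the same statistic (with optional weighting for ordinal categories such as stenosis grades).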
Affiliation(s)
- Ying Zhu
- First Clinical Medical College, Soochow University, Suzhou, China
- Liwei Chen
- Department of Radiology, School of Medicine, Tongren Hospital, Shanghai Jiao Tong University, Shanghai, China
- Wenjie Lu
- Department of Radiology, School of Medicine, Tongren Hospital, Shanghai Jiao Tong University, Shanghai, China
- Yongjun Gong (corresponding author)
- Department of Radiology, School of Medicine, Tongren Hospital, Shanghai Jiao Tong University, Shanghai, China
- Ximing Wang (corresponding author)
- Department of Radiology, The First Affiliated Hospital of Soochow University, Suzhou, China
7
Diao Z, Jiang H, Han XH, Yao YD, Shi T. EFNet: evidence fusion network for tumor segmentation from PET-CT volumes. Phys Med Biol 2021; 66. [PMID: 34555816] [DOI: 10.1088/1361-6560/ac299a]
Abstract
Precise delineation of the target tumor from positron emission tomography-computed tomography (PET-CT) is a key step in clinical practice and radiation therapy. PET-CT co-segmentation uses the complementary information of the two modalities to reduce the uncertainty of single-modality segmentation and thereby obtain more accurate results. Current PET-CT segmentation methods based on fully convolutional networks (FCNs) mainly adopt image fusion or feature fusion; these fusion strategies do not account for the uncertainty of multi-modal segmentation, and complex feature fusion consumes substantial computing resources, especially for 3D volumes. In this work, we analyze PET-CT co-segmentation from the perspective of uncertainty and propose the evidence fusion network (EFNet). Via a proposed evidence loss, the network outputs a PET result and a CT result that each carry uncertainty, treated as PET evidence and CT evidence; evidence fusion then reduces the uncertainty of the single-modality evidence, and the final segmentation is obtained from the fused PET and CT evidence. EFNet uses a basic 3D U-Net as its backbone with only simple unidirectional feature fusion, and it can train and predict PET evidence and CT evidence separately, without parallel training of two branch networks. In experiments on soft-tissue sarcoma and lymphoma datasets, the proposed method improves Dice by 8% and 5%, respectively, over 3D U-Net, and by 7% and 2%, respectively, over a complex feature-fusion method. Our results show that, in FCN-based PET-CT segmentation, outputting uncertainty evidence and applying evidence fusion can both simplify the network and improve segmentation results.
Affiliation(s)
- Zhaoshuo Diao
- Software College, Northeastern University, Shenyang 110819, People's Republic of China
- Huiyan Jiang
- Software College, Northeastern University, Shenyang 110819, People's Republic of China
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang 110819, People's Republic of China
- Xian-Hua Han
- Graduate School of Sciences and Technology for Innovation, Yamaguchi University, Yamaguchi-shi 7538511, Japan
- Yu-Dong Yao
- Department of Electrical and Computer Engineering, Stevens Institute of Technology, Hoboken NJ 07030, United States of America
- Tianyu Shi
- Software College, Northeastern University, Shenyang 110819, People's Republic of China