51
Wildeboer RR, van Sloun RJG, Wijkstra H, Mischi M. Artificial intelligence in multiparametric prostate cancer imaging with focus on deep-learning methods. Comput Methods Programs Biomed 2020;189:105316. PMID: 31951873. DOI: 10.1016/j.cmpb.2020.105316.
Abstract
Prostate cancer is today the most typical example of a pathology whose diagnosis requires multiparametric imaging, a strategy in which multiple imaging techniques are combined to reach acceptable diagnostic performance. However, reviewing, weighing, and coupling multiple images not only places an additional burden on the radiologist, it also complicates the reviewing process. Prostate cancer imaging has therefore been an important target for the development of computer-aided diagnostic (CAD) tools. In this survey, we discuss the advances in CAD for prostate cancer over the last decades, with special attention to the deep-learning techniques designed in the last few years. Moreover, we elaborate on and compare the methods employed to deliver the CAD output to the operator for further medical decision making.
Affiliation(s)
- Rogier R Wildeboer
- Lab of Biomedical Diagnostics, Department of Electrical Engineering, Eindhoven University of Technology, De Zaale, 5600 MB, Eindhoven, the Netherlands
- Ruud J G van Sloun
- Lab of Biomedical Diagnostics, Department of Electrical Engineering, Eindhoven University of Technology, De Zaale, 5600 MB, Eindhoven, the Netherlands
- Hessel Wijkstra
- Lab of Biomedical Diagnostics, Department of Electrical Engineering, Eindhoven University of Technology, De Zaale, 5600 MB, Eindhoven, the Netherlands; Department of Urology, Amsterdam University Medical Centers, University of Amsterdam, Meibergdreef 9, 1105 AZ, Amsterdam, the Netherlands
- Massimo Mischi
- Lab of Biomedical Diagnostics, Department of Electrical Engineering, Eindhoven University of Technology, De Zaale, 5600 MB, Eindhoven, the Netherlands
52
Bardis MD, Houshyar R, Chang PD, Ushinsky A, Glavis-Bloom J, Chahine C, Bui TL, Rupasinghe M, Filippi CG, Chow DS. Applications of artificial intelligence to prostate multiparametric MRI (mpMRI): current and emerging trends. Cancers (Basel) 2020;12:1204. PMID: 32403240. PMCID: PMC7281682. DOI: 10.3390/cancers12051204.
Abstract
Prostate carcinoma is one of the most prevalent cancers worldwide. Multiparametric magnetic resonance imaging (mpMRI) is a non-invasive tool that can improve prostate lesion detection, classification, and volume quantification. Machine learning (ML), a branch of artificial intelligence, can rapidly and accurately analyze mpMRI images. ML could provide better standardization and consistency in identifying prostate lesions and enhance prostate carcinoma management. This review summarizes ML applications to prostate mpMRI and focuses on prostate organ segmentation, lesion detection and segmentation, and lesion characterization. A literature search was conducted to find studies that have applied ML methods to prostate mpMRI. To date, prostate organ segmentation and volume approximation have been well executed using various ML techniques. Prostate lesion detection and segmentation are much more challenging tasks for ML and were attempted in several studies. They largely remain unsolved problems due to data scarcity and the limitations of current ML algorithms. By contrast, prostate lesion characterization has been successfully completed in several studies because of better data availability. Overall, ML is well situated to become a tool that enhances radiologists' accuracy and speed.
Affiliation(s)
- Michelle D. Bardis
- Department of Radiology, University of California, Irvine, Orange, CA 92868-3201, USA
- Roozbeh Houshyar
- Department of Radiology, University of California, Irvine, Orange, CA 92868-3201, USA
- Peter D. Chang
- Department of Radiology, University of California, Irvine, Orange, CA 92868-3201, USA
- Alexander Ushinsky
- Mallinckrodt Institute of Radiology, Washington University Saint Louis, St. Louis, MO 63110, USA
- Justin Glavis-Bloom
- Department of Radiology, University of California, Irvine, Orange, CA 92868-3201, USA
- Chantal Chahine
- Department of Radiology, University of California, Irvine, Orange, CA 92868-3201, USA
- Thanh-Lan Bui
- Department of Radiology, University of California, Irvine, Orange, CA 92868-3201, USA
- Mark Rupasinghe
- Department of Radiology, University of California, Irvine, Orange, CA 92868-3201, USA
- Daniel S. Chow
- Department of Radiology, University of California, Irvine, Orange, CA 92868-3201, USA
53
Her EJ, Haworth A, Rowshanfarzad P, Ebert MA. Progress towards patient-specific, spatially-continuous radiobiological dose prescription and planning in prostate cancer IMRT: an overview. Cancers (Basel) 2020;12:854. PMID: 32244821. PMCID: PMC7226478. DOI: 10.3390/cancers12040854.
Abstract
Advances in imaging have enabled the identification of prostate cancer foci, with an initial application to focal dose escalation in which subvolumes are created using image-intensity thresholds. Through quantitative imaging techniques, correlations between image parameters and tumour characteristics have been identified. Mathematical functions are typically used to relate image parameters to prescription dose to improve the clinical relevance of the resulting dose distribution. However, these relationships have remained speculative or unvalidated. In contrast, the use of radiobiological models during treatment planning optimisation, termed biological optimisation, has the advantage of directly considering the biological effect of the resulting dose distribution. This has led to an increased interest in the accurate derivation of radiobiological parameters from quantitative imaging to inform the models. This article reviews the progress in treatment planning using image-informed tumour biology, from focal dose escalation to the current trend of individualised biological treatment planning using image-derived radiobiological parameters, with the focus on prostate intensity-modulated radiotherapy (IMRT).
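The abstract does not name the specific radiobiological models; a common choice in biological optimisation of the kind it describes is the linear-quadratic (LQ) cell-kill model combined with a Poisson tumour control probability (TCP). The sketch below is illustrative only: the parameter values (alpha, alpha/beta ratio, clonogen numbers, doses) are placeholders, not values from the paper.

```python
import numpy as np

def tcp_poisson(dose_per_voxel, n0_per_voxel, alpha=0.15, alpha_beta=3.0,
                dose_per_fraction=2.0):
    """Poisson TCP under the linear-quadratic model.

    Surviving fraction per voxel is exp(-(alpha*D + beta*d*D)), with
    beta = alpha / (alpha/beta ratio) and d the dose per fraction.
    TCP is the probability that no clonogen survives in any voxel:
    exp(-sum over voxels of N0 * surviving fraction).
    """
    beta = alpha / alpha_beta
    D = np.asarray(dose_per_voxel, dtype=float)
    surviving = np.exp(-(alpha * D + beta * dose_per_fraction * D))
    expected_survivors = np.asarray(n0_per_voxel, dtype=float) * surviving
    return float(np.exp(-expected_survivors.sum()))

# Uniform 78 Gy to 100 voxels with 1e4 clonogens each (illustrative numbers).
tcp_uniform = tcp_poisson(np.full(100, 78.0), np.full(100, 1e4))

# Image-informed planning: if imaging suggests 10 voxels carry a much higher
# clonogen density, boosting dose there raises TCP relative to a uniform plan.
n0_hetero = np.where(np.arange(100) < 10, 1e6, 1e4)
tcp_low = tcp_poisson(np.full(100, 78.0), n0_hetero)
tcp_boost = tcp_poisson(np.where(np.arange(100) < 10, 86.0, 78.0), n0_hetero)
```

The point of the sketch is the direction of the effect, not the numbers: concentrating dose where the model expects more clonogens increases the predicted control probability.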
Affiliation(s)
- Emily Jungmin Her
- Department of Physics, University of Western Australia, Crawley, WA 6009, Australia
- Annette Haworth
- Institute of Medical Physics, University of Sydney, Camperdown, NSW 2050, Australia
- Pejman Rowshanfarzad
- Department of Physics, University of Western Australia, Crawley, WA 6009, Australia
- Martin A. Ebert
- Department of Physics, University of Western Australia, Crawley, WA 6009, Australia
- Department of Radiation Oncology, Sir Charles Gairdner Hospital, Nedlands, WA 6009, Australia
- 5D Clinics, Claremont, WA 6010, Australia
54
Yang X, Lin Y, Wang Z, Li X, Cheng KT. Bi-modality medical image synthesis using semi-supervised sequential generative adversarial networks. IEEE J Biomed Health Inform 2020;24:855-865. DOI: 10.1109/jbhi.2019.2922986.
55
Segmentation and visualization of left atrium through a unified deep learning framework. Int J Comput Assist Radiol Surg 2020;15:589-600. DOI: 10.1007/s11548-020-02128-9.
56
Wang Z, Lin Y, Cheng KT, Yang X. Semi-supervised mp-MRI data synthesis with StitchLayer and auxiliary distance maximization. Med Image Anal 2020;59:101565. DOI: 10.1016/j.media.2019.101565.
57
Gurav SB, Kulhalli KV, Desai VV. Prostate cancer detection using histopathology images and classification using improved RideNN. Biomed Eng Appl Basis Commun 2019. DOI: 10.4015/s101623721950042x.
Abstract
Prostate cancer is reported to be among the most common cancers in men, underscoring the need for reliable detection methods, for which the required morphology is extracted from histopathology images. The Gleason grading system remains the standard for grading prostate cancer, but pathologists suffer from minute inter- and intra-observer variations. Thus, an automatic method for segmenting and classifying prostate cancer is modeled in this paper. The significance of the developed method is that segmentation and classification are gland-oriented, using Color Space (CS) transformation and the Salp Swarm Optimization Algorithm-based Rider Neural Network (SSA-RideNN). The gland region is considered the morphology for cancer detection, from which the most significant regions are extracted as features using multiple-kernel scale-invariant feature transform (MK-SIFT). Here, the RideNN classifier is trained optimally using the proposed Salp-Rider Algorithm (SRA), which integrates the Salp Swarm Optimization Algorithm (SSA) and the Rider Optimization Algorithm (ROA). Experiments on histopathology images, analyzed in terms of sensitivity, accuracy, and specificity, show that the proposed prostate cancer detection method achieved maximal accuracy, sensitivity, and specificity of 0.8966, 0.8919, and 0.8596, respectively.
Affiliation(s)
- Shashidhar B. Gurav
- Sharad Institute of Technology, College of Engineering, Ichalkaranji, Kolhapur 416121, Maharashtra, India
- Kshama V. Kulhalli
- D Y Patil College of Engineering and Technology, Kasaba Bawada, Kolhapur 416006, Maharashtra, India
- Veena V. Desai
- Department of Computer Science and Engineering, KLS Gogte Institute of Technology, Udyambag, Belagavi 590008, Karnataka, India
58
USE-Net: incorporating Squeeze-and-Excitation blocks into U-Net for prostate zonal segmentation of multi-institutional MRI datasets. Neurocomputing 2019. DOI: 10.1016/j.neucom.2019.07.006.
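The Squeeze-and-Excitation (SE) blocks that this work inserts into U-Net recalibrate feature maps channel-wise: global average pooling ("squeeze"), a two-layer bottleneck ending in a sigmoid ("excitation"), then per-channel rescaling. A minimal numpy sketch of one SE block follows; the weights are random placeholders here, whereas in a real network they are learned.

```python
import numpy as np

rng = np.random.default_rng(0)

def se_block(x, w1, w2):
    """Squeeze-and-Excitation over a (C, H, W) feature map."""
    z = x.mean(axis=(1, 2))               # squeeze: global average pool -> (C,)
    s = np.maximum(w1 @ z, 0.0)           # bottleneck FC + ReLU -> (C // r,)
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))   # FC + sigmoid -> per-channel gates in (0, 1)
    return x * s[:, None, None]           # excitation: rescale each channel

C, H, W, r = 16, 8, 8, 4                  # r is the bottleneck reduction ratio
x = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C)) * 0.1   # learned in practice; random for the sketch
w2 = rng.standard_normal((C, C // r)) * 0.1
y = se_block(x, w1, w2)
```

Because each gate lies in (0, 1), the block can only attenuate channels, never amplify them; the network learns which channels to keep.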
59
Cao R, Mohammadian Bajgiran A, Afshari Mirak S, Shakeri S, Zhong X, Enzmann D, Raman S, Sung K. Joint prostate cancer detection and Gleason score prediction in mp-MRI via FocalNet. IEEE Trans Med Imaging 2019;38:2496-2506. PMID: 30835218. DOI: 10.1109/tmi.2019.2901928.
Abstract
Multi-parametric MRI (mp-MRI) is considered the best non-invasive imaging modality for diagnosing prostate cancer (PCa). However, mp-MRI for PCa diagnosis is currently limited by qualitative or semi-quantitative interpretation criteria, leading to inter-reader variability and a suboptimal ability to assess lesion aggressiveness. Convolutional neural networks (CNNs) are a powerful method to automatically learn discriminative features for various tasks, including cancer detection. We propose a novel multi-class CNN, FocalNet, to jointly detect PCa lesions and predict their aggressiveness using the Gleason score (GS). FocalNet characterizes lesion aggressiveness and fully utilizes distinctive knowledge from mp-MRI. We collected a prostate mp-MRI dataset from 417 patients who underwent 3T mp-MRI exams prior to robotic-assisted laparoscopic prostatectomy. FocalNet was trained and evaluated in this large study cohort with fivefold cross-validation. In the free-response receiver operating characteristic (FROC) analysis for lesion detection, FocalNet achieved 89.7% and 87.9% sensitivity for index lesions and clinically significant lesions, respectively, at one false positive per patient. For GS classification, evaluated by receiver operating characteristic (ROC) analysis, FocalNet achieved an area under the curve of 0.81 and 0.79 for the classification of clinically significant PCa (GS ≥ 3 + 4) and PCa with GS ≥ 4 + 3, respectively. Compared with the prospective performance of radiologists using the current diagnostic guideline, FocalNet demonstrated comparable detection sensitivity for index lesions and clinically significant lesions, only 3.4% and 1.5% lower than highly experienced radiologists, without statistical significance.
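The FROC operating point quoted above (sensitivity at one false positive per patient) can be computed from per-lesion detection flags and per-patient false-positive counts at a given confidence threshold. The sketch below is schematic, with made-up toy data, and is not the paper's evaluation code.

```python
import numpy as np

def froc_point(lesion_scores, lesion_hit, fp_scores, n_patients, threshold):
    """Sensitivity and false positives per patient at one confidence threshold.

    lesion_scores: detection confidence matched to each true lesion
                   (0 if the lesion was never detected).
    lesion_hit:    whether the matched detection actually overlaps the lesion.
    fp_scores:     confidences of unmatched (false-positive) detections.
    """
    lesion_scores = np.asarray(lesion_scores, float)
    lesion_hit = np.asarray(lesion_hit, bool)
    fp_scores = np.asarray(fp_scores, float)
    detected = lesion_hit & (lesion_scores >= threshold)
    sensitivity = detected.mean()                        # fraction of lesions found
    fps_per_patient = (fp_scores >= threshold).sum() / n_patients
    return sensitivity, fps_per_patient

# Toy example: 8 lesions across 4 patients, 6 false-positive detections.
sens, fppp = froc_point(
    lesion_scores=[0.9, 0.8, 0.7, 0.6, 0.95, 0.3, 0.0, 0.85],
    lesion_hit=[1, 1, 1, 1, 1, 1, 0, 1],
    fp_scores=[0.75, 0.55, 0.4, 0.35, 0.2, 0.1],
    n_patients=4,
    threshold=0.5,
)
```

Sweeping the threshold traces out the full FROC curve; a reported operating point such as "sensitivity at 1 FP/patient" picks the threshold where `fppp` equals one.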
60
Alkadi R, Taher F, El-baz A, Werghi N. A deep learning-based approach for the detection and localization of prostate cancer in T2 magnetic resonance images. J Digit Imaging 2019;32:793-807. PMID: 30506124. PMCID: PMC6737129. DOI: 10.1007/s10278-018-0160-1.
Abstract
We address the problem of prostate lesion detection, localization, and segmentation in T2W magnetic resonance (MR) images. We train a deep convolutional encoder-decoder architecture to simultaneously segment the prostate, its anatomical structure, and the malignant lesions. To incorporate the 3D contextual spatial information provided by the MRI series, we propose a novel 3D sliding window approach, which preserves the 2D domain complexity while exploiting 3D information. Experiments on data from 19 patients made publicly available by the Initiative for Collaborative Computer Vision Benchmarking (I2CVB) show that our approach outperforms traditional pattern recognition and machine learning approaches by a significant margin. In particular, for the task of cancer detection and localization, the system achieves an average AUC of 0.995, an accuracy of 0.894, and a recall of 0.928. The proposed mono-modal deep learning-based system performs comparably to other multi-modal MR-based systems, and it could improve the performance of a radiologist in prostate cancer diagnosis and treatment planning.
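The abstract does not detail the window implementation; one common way to exploit 3D context while keeping 2D computational complexity, in the spirit of the sliding-window idea above, is to feed each slice together with its neighbours as extra input channels. This is a hedged sketch of that general technique, not the paper's exact method.

```python
import numpy as np

def sliding_slab_view(volume, half_width=1):
    """Turn a (Z, H, W) MRI volume into per-slice slabs of shape
    (Z, 2*half_width + 1, H, W): each slice plus its neighbours stacked
    as channels, with edge slices padded by replication."""
    z = volume.shape[0]
    padded = np.pad(volume, ((half_width, half_width), (0, 0), (0, 0)),
                    mode="edge")
    return np.stack(
        [padded[i:i + 2 * half_width + 1] for i in range(z)], axis=0
    )

vol = np.arange(5 * 4 * 4, dtype=float).reshape(5, 4, 4)   # tiny toy volume
slabs = sliding_slab_view(vol, half_width=1)
# A 2D encoder-decoder can now consume slabs[i] (3 channels) per slice,
# seeing above/below context without any 3D convolutions.
```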
Affiliation(s)
- Ruba Alkadi
- Khalifa University of Science and Technology, PO Box 127788, Abu Dhabi, United Arab Emirates
- Fatma Taher
- Khalifa University of Science and Technology, PO Box 127788, Abu Dhabi, United Arab Emirates
- Ayman El-baz
- University of Louisville, Louisville, KY 40292, USA
- Naoufel Werghi
- Khalifa University of Science and Technology, PO Box 127788, Abu Dhabi, United Arab Emirates
61
Xu H, Baxter JSH, Akin O, Cantor-Rivera D. Prostate cancer detection using residual networks. Int J Comput Assist Radiol Surg 2019;14:1647-1650. PMID: 30972686. PMCID: PMC7472465. DOI: 10.1007/s11548-019-01967-5.
Abstract
Purpose: To automatically identify regions where prostate cancer is suspected on multi-parametric magnetic resonance images (mp-MRI). Methods: A residual network was implemented based on segmentations from an expert radiologist on T2-weighted, apparent diffusion coefficient map, and high b-value diffusion-weighted images. mp-MRIs from 346 patients were used in this study. Results: The residual network achieved a hit-or-miss accuracy of 93% for lesion detection, with an average Jaccard score of 71% comparing the agreement between network and radiologist segmentations. Conclusion: This paper demonstrated the ability of residual networks to learn features for prostate lesion segmentation.
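The Jaccard score used above to compare network and radiologist segmentations is the intersection-over-union of the two binary masks; a minimal sketch:

```python
import numpy as np

def jaccard(mask_a, mask_b):
    """Intersection-over-union of two binary segmentation masks."""
    a = np.asarray(mask_a, bool)
    b = np.asarray(mask_b, bool)
    union = np.logical_or(a, b).sum()
    if union == 0:          # both masks empty: define as perfect agreement
        return 1.0
    return np.logical_and(a, b).sum() / union

pred = np.zeros((8, 8), int); pred[2:6, 2:6] = 1   # 16-pixel square
ref  = np.zeros((8, 8), int); ref[3:7, 3:7]  = 1   # same square, shifted by one
j = jaccard(pred, ref)   # overlap 9 pixels, union 23 pixels
```

A score of 71% therefore means the overlapping region covers 71% of the combined area of the two segmentations.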
Affiliation(s)
- Helen Xu
- Ezra AI Canada, Unit 310, 545 King St. West, Toronto, Canada
- Oguz Akin
- Memorial Sloan Kettering Cancer Center, New York, NY, USA
62
Tang P, Wang X, Shi B, Bai X, Liu W, Tu Z. Deep FisherNet for image classification. IEEE Trans Neural Netw Learn Syst 2019;30:2244-2250. PMID: 30403638. DOI: 10.1109/tnnls.2018.2874657.
Abstract
Despite the great success of convolutional neural networks (CNNs) for the image classification task on datasets such as CIFAR and ImageNet, CNN representation power is still somewhat limited in dealing with images that have large variation in size and clutter, where the Fisher vector (FV) has been shown to be an effective encoding strategy. FV encodes an image by aggregating local descriptors with a universal generative Gaussian mixture model (GMM). FV, however, has limited learning capability, and its parameters are mostly fixed after the codebook is constructed. To combine the best of the two worlds, we propose in this brief a neural network structure in which the FV layer is part of a differentiable, end-to-end trainable system; we name our network FisherNet, and it is learnable using backpropagation. The proposed FisherNet combines CNN training and FV encoding in a single end-to-end structure. We observe a clear advantage of FisherNet over the plain CNN and the standard FV in terms of both classification accuracy and computational efficiency on the challenging PASCAL Visual Object Classes (VOC) object classification and emotion image classification tasks.
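The FV encoding that FisherNet makes differentiable aggregates local descriptors against a GMM. Below is a simplified first- and second-order Fisher vector in numpy, with a diagonal-covariance GMM whose parameters are fixed, illustrative placeholders; the paper's learnable version backpropagates through these same statistics rather than fixing the codebook.

```python
import numpy as np

def fisher_vector(descriptors, means, sigmas, priors):
    """First- and second-order Fisher vector of local descriptors (N, D)
    under a diagonal-covariance GMM with K components."""
    x = descriptors[:, None, :]                    # (N, 1, D)
    mu, sg = means[None], sigmas[None]             # (1, K, D)
    # Posterior responsibilities gamma(n, k) under the Gaussian mixture.
    log_p = -0.5 * (((x - mu) / sg) ** 2 + 2 * np.log(sg)
                    + np.log(2 * np.pi)).sum(-1)
    log_p += np.log(priors)[None]
    log_p -= log_p.max(axis=1, keepdims=True)      # numerical stability
    gamma = np.exp(log_p)
    gamma /= gamma.sum(axis=1, keepdims=True)      # (N, K)
    n = descriptors.shape[0]
    # Gradient statistics w.r.t. means (u) and standard deviations (v).
    u = (gamma[..., None] * (x - mu) / sg).sum(0) / (n * np.sqrt(priors)[:, None])
    v = (gamma[..., None] * (((x - mu) / sg) ** 2 - 1)).sum(0) \
        / (n * np.sqrt(2 * priors)[:, None])
    fv = np.concatenate([u.ravel(), v.ravel()])    # length 2*K*D
    return fv / (np.linalg.norm(fv) + 1e-12)       # L2 normalisation

rng = np.random.default_rng(1)
desc = rng.standard_normal((50, 4))                # 50 local descriptors, dim 4
means = rng.standard_normal((3, 4))                # K = 3 components
sigmas = np.ones((3, 4))
priors = np.full(3, 1 / 3)
fv = fisher_vector(desc, means, sigmas, priors)
```

Every step is a differentiable function of the GMM parameters, which is what allows the encoding to sit inside an end-to-end trainable network.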
63
Mlynarski P, Delingette H, Criminisi A, Ayache N. Deep learning with mixed supervision for brain tumor segmentation. J Med Imaging (Bellingham) 2019;6:034002. PMID: 31423456. PMCID: PMC6689144. DOI: 10.1117/1.jmi.6.3.034002.
Abstract
Most of the current state-of-the-art methods for tumor segmentation are based on machine learning models trained on manually segmented images. This type of training data is particularly costly, as manual delineation of tumors is not only time-consuming but also requires medical expertise. On the other hand, images with a provided global label (indicating presence or absence of a tumor) are less informative but can be obtained at a substantially lower cost. We propose to use both types of training data (fully annotated and weakly annotated) to train a deep learning model for segmentation. The idea of our approach is to extend segmentation networks with an additional branch performing image-level classification. The model is jointly trained for the segmentation and classification tasks to exploit the information contained in weakly annotated images while preventing the network from learning features that are irrelevant for the segmentation task. We evaluate our method on the challenging task of brain tumor segmentation in magnetic resonance images from the Brain Tumor Segmentation 2018 Challenge. We show that the proposed approach provides a significant improvement in segmentation performance compared to standard supervised learning. The observed improvement is proportional to the ratio between weakly annotated and fully annotated images available for training.
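The joint training objective described above can be sketched as a weighted sum of a pixel-wise segmentation loss on fully annotated images and an image-level classification loss on weakly annotated ones. The loss choice (binary cross-entropy) and the weight `alpha` below are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def bce(p, y, eps=1e-7):
    """Binary cross-entropy between predictions p and targets y."""
    p = np.clip(np.asarray(p, float), eps, 1 - eps)
    y = np.asarray(y, float)
    return float(-(y * np.log(p) + (1 - y) * np.log(1 - p)).mean())

def mixed_supervision_loss(seg_pred, seg_mask, cls_pred, cls_label, alpha=0.5):
    """Joint loss: pixel-wise BCE for the segmentation branch on a fully
    annotated image, plus image-level BCE for the classification branch
    on a weakly annotated image (tumour-present/absent label only)."""
    seg_loss = bce(seg_pred, seg_mask)
    cls_loss = bce([cls_pred], [float(cls_label)])
    return seg_loss + alpha * cls_loss

# Confident, correct segmentation; the classification head is right in one
# case and wrong in the other, so only the joint loss tells them apart.
seg_pred = np.full((4, 4), 0.9)
seg_mask = np.ones((4, 4))
loss_good = mixed_supervision_loss(seg_pred, seg_mask, cls_pred=0.9, cls_label=1)
loss_bad = mixed_supervision_loss(seg_pred, seg_mask, cls_pred=0.1, cls_label=1)
```

Because the two branches share an encoder, gradients from the cheap image-level labels still shape the features used for segmentation.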
Affiliation(s)
- Pawel Mlynarski
- Université Côte d’Azur, Inria, Epione Research Team, Sophia Antipolis, France
- Hervé Delingette
- Université Côte d’Azur, Inria, Epione Research Team, Sophia Antipolis, France
- Nicholas Ayache
- Université Côte d’Azur, Inria, Epione Research Team, Sophia Antipolis, France
64
Abraham B, Nair MS. Computer-aided grading of prostate cancer from MRI images using convolutional neural networks. J Intell Fuzzy Syst 2019. DOI: 10.3233/jifs-169913.
Affiliation(s)
- Bejoy Abraham
- Department of Computer Science, University of Kerala, Kariavattom, Thiruvananthapuram 695581, Kerala, India
- Department of Computer Science and Engineering, College of Engineering Perumon, Kollam 691601, Kerala, India
- Madhu S. Nair
- Department of Computer Science, Cochin University of Science and Technology, Kochi 682022, Kerala, India
65
Abraham B, Nair MS. Automated grading of prostate cancer using convolutional neural network and ordinal class classifier. Inform Med Unlocked 2019. DOI: 10.1016/j.imu.2019.100256.
66
Zhao R, Zhang R, Tang T, Feng X, Li J, Liu Y, Zhu R, Wang G, Li K, Zhou W, Yang Y, Wang Y, Ba Y, Zhang J, Liu Y, Zhou F. TriZ-a rotation-tolerant image feature and its application in endoscope-based disease diagnosis. Comput Biol Med 2018;99:182-190. PMID: 29936284. DOI: 10.1016/j.compbiomed.2018.06.006.
Abstract
Endoscopy is becoming one of the most widely used technologies for screening gastric diseases, and it relies heavily on the experience of clinical endoscopists. Location, shape, and size are the typical patterns endoscopists use to make diagnosis decisions; contrasting texture patterns also suggest potential lesions. This study designed a novel rotation-tolerant image feature, TriZ, and demonstrated its effectiveness both for rotation invariance and for the detection of three gastric lesion types, i.e., gastric polyp, gastric ulcer, and gastritis. TriZ achieved 87.0% accuracy in the four-class classification problem of the three gastric lesion types plus healthy controls, averaged over twenty random runs of 10-fold cross-validation. Because biomedical imaging technologies may capture lesion sites from different angles, the rotation-symmetric image feature extraction algorithm TriZ may facilitate biomedical-image-based disease diagnosis modeling. Compared with the 378,434 features of the HOG algorithm, TriZ achieved better accuracy using only 126 image features.
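TriZ's exact construction is not given in the abstract, so the following is only a generic illustration of what "rotation-tolerant" means for an image feature, not TriZ itself: a descriptor built from mean intensities over concentric rings about the patch centre, which a rotation of the patch leaves (essentially) unchanged because rotation only permutes pixels within each ring.

```python
import numpy as np

def ring_feature(patch, n_rings=4):
    """Mean intensity in concentric rings about the patch centre: a simple
    rotation-tolerant descriptor (rotation permutes pixels within rings)."""
    h, w = patch.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2, (w - 1) / 2
    r = np.hypot(yy - cy, xx - cx)                 # radius of each pixel
    edges = np.linspace(0, r.max() + 1e-9, n_rings + 1)
    return np.array([patch[(r >= lo) & (r < hi)].mean()
                     for lo, hi in zip(edges[:-1], edges[1:])])

rng = np.random.default_rng(2)
patch = rng.random((9, 9))
f0 = ring_feature(patch)
f90 = ring_feature(np.rot90(patch))   # 90-degree rotation: same ring means
```

A 126-dimensional descriptor of this general kind is far more compact than the 378,434 HOG features the abstract compares against, which is the practical appeal of rotation-symmetric constructions.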
Affiliation(s)
- Ruixue Zhao
- College of Computer Science and Technology, Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, Jilin, 130012, China
- Ruochi Zhang
- College of Computer Science and Technology, Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, Jilin, 130012, China
- Tongyu Tang
- First Hospital, Jilin University, Changchun, Jilin, 130012, China
- Xin Feng
- College of Computer Science and Technology, Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, Jilin, 130012, China
- Jialiang Li
- College of Software, Jilin University, Changchun, Jilin, 130012, China
- Yue Liu
- College of Communication Engineering, Jilin University, Changchun, Jilin, 130012, China
- Renxiang Zhu
- College of Computer Science and Technology, Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, Jilin, 130012, China
- Guangze Wang
- College of Computer Science and Technology, Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, Jilin, 130012, China
- Kangning Li
- College of Computer Science and Technology, Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, Jilin, 130012, China
- Wenyang Zhou
- College of Computer Science and Technology, Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, Jilin, 130012, China
- Yunfei Yang
- College of Software, Jilin University, Changchun, Jilin, 130012, China
- Yuzhao Wang
- College of Software, Jilin University, Changchun, Jilin, 130012, China
- Yuanjie Ba
- College of Computer Science and Technology, Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, Jilin, 130012, China
- Jiaojiao Zhang
- College of Software, Jilin University, Changchun, Jilin, 130012, China
- Yang Liu
- College of Computer Science and Technology, Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, Jilin, 130012, China
- Fengfeng Zhou
- College of Computer Science and Technology, Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, Jilin, 130012, China