51. Yang X, Lin Y, Wang Z, Li X, Cheng KT. Bi-Modality Medical Image Synthesis Using Semi-Supervised Sequential Generative Adversarial Networks. IEEE J Biomed Health Inform 2020;24:855-865. doi:10.1109/JBHI.2019.2922986

52. Segmentation and visualization of left atrium through a unified deep learning framework. Int J Comput Assist Radiol Surg 2020;15:589-600. doi:10.1007/s11548-020-02128-9

53. Wang Z, Lin Y, Cheng KT, Yang X. Semi-supervised mp-MRI data synthesis with StitchLayer and auxiliary distance maximization. Med Image Anal 2020;59:101565. doi:10.1016/j.media.2019.101565

54. Gurav SB, Kulhalli KV, Desai VV. Prostate cancer detection using histopathology images and classification using improved RideNN. Biomedical Engineering: Applications, Basis and Communications 2019. doi:10.4015/S101623721950042X
Abstract
Prostate cancer is reported to be among the most common cancers in men, which underlines the need for automated detection methods in which the required morphology is extracted from histopathology images. The Gleason grading system remains the standard for grading prostate cancer, but grading by pathologists is subject to minute inter- and intra-observer variations. Thus, an automatic method for segmenting and classifying prostate cancer is modeled in this paper. The significance of the developed method is that segmentation and classification are gland-oriented, using Color Space (CS) transformation and the Salp Swarm Optimization Algorithm-based Rider Neural Network (SSA-RideNN). The gland region is taken as the morphology for cancer detection, from which the most significant regions are extracted as features using the multiple-kernel scale-invariant feature transform (MK-SIFT). Here, the RideNN classifier is trained optimally using the proposed Salp-Rider Algorithm (SRA), an integration of the Salp Swarm Optimization Algorithm (SSA) and the Rider Optimization Algorithm (ROA). Experiments on histopathology images, analyzed in terms of sensitivity, accuracy, and specificity, show that the proposed prostate cancer detection method achieved a maximal accuracy, sensitivity, and specificity of 0.8966, 0.8919, and 0.8596, respectively.
Affiliation(s)
- Shashidhar B. Gurav, Sharad Institute of Technology, College of Engineering, Ichal Karanji, Kolhapur 416121, Maharashtra, India
- Kshama V. Kulhalli, D Y Patil College of Engineering and Technology, Kasaba Bawada, Kolhapur 416006, Maharashtra, India
- Veena V. Desai, Department of Computer Science and Engineering, KLS Gogte Institute of Technology, Udyambag, Belagavi 590008, Karnataka, India

55. USE-Net: Incorporating Squeeze-and-Excitation blocks into U-Net for prostate zonal segmentation of multi-institutional MRI datasets. Neurocomputing 2019. doi:10.1016/j.neucom.2019.07.006

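Entry 55 carries no abstract, but the squeeze-and-excitation (SE) mechanism it incorporates into U-Net has a compact definition: globally average-pool each channel into a descriptor, pass the descriptors through a small two-layer gate (reduction, ReLU, expansion, sigmoid), and rescale each channel by its gate value. A minimal pure-Python sketch with toy shapes, assumed weight layouts, and no claim to match the paper's implementation:

```python
import math

def se_block(feature_maps, w1, w2):
    """Squeeze-and-Excitation: reweight channels by learned gates.

    feature_maps: list of C channels, each an HxW list of lists.
    w1: C//r rows of C weights (reduction fully connected layer).
    w2: C rows of C//r weights (expansion fully connected layer).
    """
    # Squeeze: global average pooling per channel -> C descriptors
    z = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
         for ch in feature_maps]
    # Excitation: FC -> ReLU -> FC -> sigmoid
    h = [max(0.0, sum(w * zi for w, zi in zip(row, z))) for row in w1]
    s = [1.0 / (1.0 + math.exp(-sum(w * hi for w, hi in zip(row, h))))
         for row in w2]
    # Scale: multiply every pixel of each channel by its gate
    return [[[v * s[c] for v in row] for row in ch]
            for c, ch in enumerate(feature_maps)]
```

With zero-initialized expansion weights every gate is sigmoid(0) = 0.5, so all channels are uniformly halved; training moves the gates away from this neutral point.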
56. Cao R, Mohammadian Bajgiran A, Afshari Mirak S, Shakeri S, Zhong X, Enzmann D, Raman S, Sung K. Joint Prostate Cancer Detection and Gleason Score Prediction in mp-MRI via FocalNet. IEEE Trans Med Imaging 2019;38:2496-2506. PMID: 30835218. doi:10.1109/TMI.2019.2901928
Abstract
Multi-parametric MRI (mp-MRI) is considered the best non-invasive imaging modality for diagnosing prostate cancer (PCa). However, mp-MRI for PCa diagnosis is currently limited by qualitative or semi-quantitative interpretation criteria, leading to inter-reader variability and a suboptimal ability to assess lesion aggressiveness. Convolutional neural networks (CNNs) are a powerful method to automatically learn discriminative features for various tasks, including cancer detection. We propose a novel multi-class CNN, FocalNet, to jointly detect PCa lesions and predict their aggressiveness using the Gleason score (GS). FocalNet characterizes lesion aggressiveness and fully utilizes distinctive knowledge from mp-MRI. We collected a prostate mp-MRI dataset from 417 patients who underwent 3T mp-MRI exams prior to robotic-assisted laparoscopic prostatectomy. FocalNet was trained and evaluated in this large study cohort with fivefold cross-validation. In the free-response receiver operating characteristic (FROC) analysis for lesion detection, FocalNet achieved 89.7% and 87.9% sensitivity for index lesions and clinically significant lesions, respectively, at one false positive per patient. For GS classification, evaluated by receiver operating characteristic (ROC) analysis, FocalNet achieved areas under the curve of 0.81 and 0.79 for the classification of clinically significant PCa (GS ≥ 3 + 4) and PCa with GS ≥ 4 + 3, respectively. Compared with the prospective performance of radiologists using the current diagnostic guideline, FocalNet demonstrated comparable detection sensitivity for index lesions and clinically significant lesions, only 3.4% and 1.5% lower than that of highly experienced radiologists, without statistical significance.
57. Alkadi R, Taher F, El-Baz A, Werghi N. A Deep Learning-Based Approach for the Detection and Localization of Prostate Cancer in T2 Magnetic Resonance Images. J Digit Imaging 2019;32:793-807. PMID: 30506124; PMCID: PMC6737129. doi:10.1007/s10278-018-0160-1
Abstract
We address the problem of prostate lesion detection, localization, and segmentation in T2W magnetic resonance (MR) images. We train a deep convolutional encoder-decoder architecture to simultaneously segment the prostate, its anatomical structure, and the malignant lesions. To incorporate the 3D contextual spatial information provided by the MRI series, we propose a novel 3D sliding window approach, which preserves the 2D domain complexity while exploiting 3D information. Experiments on data from 19 patients made publicly available by the Initiative for Collaborative Computer Vision Benchmarking (I2CVB) show that our approach outperforms traditional pattern recognition and machine learning approaches by a significant margin. In particular, for the task of cancer detection and localization, the system achieves an average AUC of 0.995, an accuracy of 0.894, and a recall of 0.928. The proposed mono-modal deep learning-based system performs comparably to other multi-modal MR-based systems and could improve the performance of a radiologist in prostate cancer diagnosis and treatment planning.
Affiliation(s)
- Ruba Alkadi, Khalifa University of Science and Technology, PO Box 127788, Abu Dhabi, United Arab Emirates
- Fatma Taher, Khalifa University of Science and Technology, PO Box 127788, Abu Dhabi, United Arab Emirates
- Ayman El-Baz, University of Louisville, Louisville, KY 40292, USA
- Naoufel Werghi, Khalifa University of Science and Technology, PO Box 127788, Abu Dhabi, United Arab Emirates

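The "3D sliding window" idea in the abstract above — keeping 2D network complexity while exploiting 3D context — amounts to grouping each slice of the MRI series with its neighbors so a 2D network receives the stack as extra input channels. An illustrative sketch, not the authors' code (the window half-width k is an assumed parameter):

```python
def sliding_windows(volume, k=1):
    """Group each slice with its k neighbors on each side.

    volume: ordered list of 2D slices (the MRI series).
    Returns one (2k+1)-slice stack per valid center slice, so a 2D
    network can consume 3D context as additional input channels.
    Border slices without k neighbors on both sides are skipped.
    """
    n = len(volume)
    return [volume[i - k:i + k + 1] for i in range(k, n - k)]
```

For a 5-slice series and k=1 this yields three 3-slice stacks, centered on slices 1, 2, and 3.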
58. Xu H, Baxter JSH, Akin O, Cantor-Rivera D. Prostate cancer detection using residual networks. Int J Comput Assist Radiol Surg 2019;14:1647-1650. PMID: 30972686; PMCID: PMC7472465. doi:10.1007/s11548-019-01967-5
Abstract
PURPOSE: To automatically identify regions where prostate cancer is suspected on multi-parametric magnetic resonance images (mp-MRI). METHODS: A residual network was implemented based on segmentations from an expert radiologist on T2-weighted, apparent diffusion coefficient map, and high b-value diffusion-weighted images. Mp-MRIs from 346 patients were used in this study. RESULTS: The residual network achieved a hit-or-miss accuracy of 93% for lesion detection, with an average Jaccard score of 71% comparing the agreement between network and radiologist segmentations. CONCLUSION: This paper demonstrated the ability of residual networks to learn features for prostate lesion segmentation.
Affiliation(s)
- Helen Xu, Ezra AI Canada, Unit 310, 545 King St. West, Toronto, Canada
- Oguz Akin, Memorial Sloan Kettering Cancer Center, New York, NY, USA

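The residual connection at the core of the networks used in this entry is simply y = F(x) + x: the identity shortcut lets gradients bypass the learned transform, which is what makes very deep networks trainable. A minimal sketch (the transform F stands in for the learned convolutional mapping):

```python
def residual_block(x, transform):
    """Core residual computation: y = F(x) + x, elementwise.

    x: list of floats (a flattened feature vector for illustration).
    transform: callable implementing the learned mapping F.
    """
    fx = transform(x)
    # Identity shortcut: add the input back onto the transformed output
    return [a + b for a, b in zip(fx, x)]
```

For example, with F(x) = 2x, the block maps [1.0, 2.0] to [3.0, 6.0]; with F ≡ 0 it reduces to the identity, the degenerate case residual learning makes easy to represent.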
59. Tang P, Wang X, Shi B, Bai X, Liu W, Tu Z. Deep FisherNet for Image Classification. IEEE Trans Neural Netw Learn Syst 2019;30:2244-2250. PMID: 30403638. doi:10.1109/TNNLS.2018.2874657
Abstract
Despite the great success of convolutional neural networks (CNNs) for the image classification task on datasets such as CIFAR and ImageNet, a CNN's representation power is still somewhat limited in dealing with images that have a large variation in size and clutter, where the Fisher vector (FV) has been shown to be an effective encoding strategy. FV encodes an image by aggregating local descriptors with a universal generative Gaussian mixture model (GMM). FV, however, has limited learning capability, and its parameters are mostly fixed after constructing the codebook. To combine the best of the two worlds, we propose a neural network structure with an FV layer as part of a differentiable, end-to-end trainable system; we name our network FisherNet, which is learnable using backpropagation. Our proposed FisherNet combines CNN training and FV encoding in a single end-to-end structure. We observe a clear advantage of FisherNet over a plain CNN and standard FV in terms of both classification accuracy and computational efficiency on the challenging PASCAL Visual Object Classes object classification and emotion image classification tasks.
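The FV encoding that FisherNet turns into a trainable layer aggregates local descriptors into gradient statistics with respect to a GMM's parameters. A simplified sketch for a single-component, diagonal-covariance Gaussian (unit mixture weight and uniform posteriors assumed; the full FV sums over K weighted components with soft assignments):

```python
import math

def fisher_vector(descriptors, mu, sigma):
    """Simplified Fisher vector for a 1-component diagonal GMM.

    descriptors: list of D-dimensional descriptor lists.
    mu, sigma: D-dimensional mean and standard deviation lists.
    Returns the concatenated gradients w.r.t. mean and std dev.
    """
    n, d = len(descriptors), len(mu)
    g_mu = [0.0] * d
    g_sigma = [0.0] * d
    for x in descriptors:
        for j in range(d):
            u = (x[j] - mu[j]) / sigma[j]     # standardized residual
            g_mu[j] += u                       # gradient w.r.t. mean
            g_sigma[j] += u * u - 1.0          # gradient w.r.t. std dev
    # Normalize: 1/N for the mean part, 1/(N*sqrt(2)) for the std part
    return ([v / n for v in g_mu] +
            [v / (n * math.sqrt(2)) for v in g_sigma])
```

Descriptors distributed symmetrically about the mean with unit spread produce a zero vector, which is the intuition behind FV: the encoding measures how the descriptor set deviates from the generative model.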
60. Mlynarski P, Delingette H, Criminisi A, Ayache N. Deep learning with mixed supervision for brain tumor segmentation. J Med Imaging (Bellingham) 2019;6:034002. PMID: 31423456; PMCID: PMC6689144. doi:10.1117/1.JMI.6.3.034002
Abstract
Most current state-of-the-art methods for tumor segmentation are based on machine learning models trained on manually segmented images. This type of training data is particularly costly, as manual delineation of tumors is not only time-consuming but also requires medical expertise. On the other hand, images with a provided global label (indicating the presence or absence of a tumor) are less informative but can be obtained at a substantially lower cost. We propose to use both types of training data (fully annotated and weakly annotated) to train a deep learning model for segmentation. The idea of our approach is to extend segmentation networks with an additional branch performing image-level classification. The model is jointly trained for the segmentation and classification tasks to exploit the information contained in weakly annotated images while preventing the network from learning features that are irrelevant for the segmentation task. We evaluate our method on the challenging task of brain tumor segmentation in magnetic resonance images from the Brain Tumor Segmentation 2018 Challenge. We show that the proposed approach provides a significant improvement in segmentation performance compared to standard supervised learning. The observed improvement is proportional to the ratio between weakly annotated and fully annotated images available for training.
Affiliation(s)
- Pawel Mlynarski, Université Côte d’Azur, Inria, Epione Research Team, Sophia Antipolis, France
- Hervé Delingette, Université Côte d’Azur, Inria, Epione Research Team, Sophia Antipolis, France
- Nicholas Ayache, Université Côte d’Azur, Inria, Epione Research Team, Sophia Antipolis, France

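The joint objective described in the abstract above — a pixel-wise segmentation loss on fully annotated images plus an image-level classification loss on weakly annotated ones — can be sketched as a weighted sum over a mixed batch. A minimal illustration, not the paper's exact formulation (the dict field names and the weight lam are assumptions):

```python
import math

def bce(p, y, eps=1e-7):
    """Binary cross-entropy for a single prediction p against label y."""
    p = min(max(p, eps), 1.0 - eps)  # clamp to avoid log(0)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def mixed_supervision_loss(batch, lam=0.5):
    """Joint loss over a batch mixing both annotation types.

    batch: list of dicts. Fully annotated items carry per-pixel
    predictions and masks ('seg_pred', 'seg_mask'); weakly annotated
    items carry only an image-level prediction and label
    ('cls_pred', 'cls_label'). lam weights the classification branch.
    """
    total = 0.0
    for item in batch:
        if "seg_mask" in item:
            # Fully annotated: average pixel-wise cross-entropy
            pairs = zip(item["seg_pred"], item["seg_mask"])
            total += sum(bce(p, y) for p, y in pairs) / len(item["seg_mask"])
        else:
            # Weakly annotated: image-level loss only
            total += lam * bce(item["cls_pred"], item["cls_label"])
    return total / len(batch)
```

A batch in which both items are predicted perfectly yields a loss near zero; pushing lam toward zero recovers plain supervised segmentation training.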
61. Abraham B, Nair MS. Computer-aided grading of prostate cancer from MRI images using Convolutional Neural Networks. J Intell Fuzzy Syst 2019. doi:10.3233/JIFS-169913
Affiliation(s)
- Bejoy Abraham, Department of Computer Science, University of Kerala, Kariavattom, Thiruvananthapuram 695581, Kerala, India; Department of Computer Science and Engineering, College of Engineering Perumon, Kollam 691601, Kerala, India
- Madhu S. Nair, Department of Computer Science, Cochin University of Science and Technology, Kochi 682022, Kerala, India

62. Abraham B, Nair MS. Automated grading of prostate cancer using convolutional neural network and ordinal class classifier. Inform Med Unlocked 2019. doi:10.1016/j.imu.2019.100256

63. Zhao R, Zhang R, Tang T, Feng X, Li J, Liu Y, Zhu R, Wang G, Li K, Zhou W, Yang Y, Wang Y, Ba Y, Zhang J, Liu Y, Zhou F. TriZ: a rotation-tolerant image feature and its application in endoscope-based disease diagnosis. Comput Biol Med 2018;99:182-190. PMID: 29936284. doi:10.1016/j.compbiomed.2018.06.006
Abstract
Endoscopy is one of the most widely used technologies for screening gastric diseases, and it relies heavily on the experience of clinical endoscopists. Location, shape, and size are the typical patterns endoscopists use to make diagnostic decisions, and contrasting texture patterns also suggest potential lesions. This study designed a novel rotation-tolerant image feature, TriZ, and demonstrated its effectiveness in both rotation invariance and the detection of three gastric lesion types, i.e., gastric polyp, gastric ulcer, and gastritis. TriZ achieved 87.0% accuracy in the four-class classification problem of the three gastric lesion types plus healthy controls, averaged over twenty random runs of 10-fold cross-validation. Because biomedical imaging technologies may capture lesion sites from different angles, the symmetric image feature extraction algorithm TriZ may facilitate biomedical image-based disease diagnosis modeling. Compared with the 378,434 features of the HOG algorithm, TriZ achieved better accuracy using only 126 image features.
Affiliation(s)
- Ruixue Zhao, Ruochi Zhang, Xin Feng, Renxiang Zhu, Guangze Wang, Kangning Li, Wenyang Zhou, Yuanjie Ba, Yang Liu, Fengfeng Zhou: College of Computer Science and Technology, Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, Jilin, 130012, China
- Tongyu Tang: First Hospital, Jilin University, Changchun, Jilin, 130012, China
- Jialiang Li, Yunfei Yang, Yuzhao Wang, Jiaojiao Zhang: College of Software, Jilin University, Changchun, Jilin, 130012, China
- Yue Liu: College of Communication Engineering, Jilin University, Changchun, Jilin, 130012, China