751
Gardezi SJS, Elazab A, Lei B, Wang T. Breast Cancer Detection and Diagnosis Using Mammographic Data: Systematic Review. J Med Internet Res 2019; 21:e14464. PMID: 31350843; PMCID: PMC6688437; DOI: 10.2196/14464.
Abstract
BACKGROUND Machine learning (ML) has become a vital part of medical imaging research. ML methods have evolved from manually seeded inputs to automatic initializations, and their steadily improving learning ability has produced more intelligent and self-reliant computer-aided diagnosis (CAD) systems. Increasingly automated methods are emerging that learn deep feature representations. These deeper, more expressive approaches, commonly known as deep learning (DL), have significantly improved the diagnostic capabilities of CAD systems. OBJECTIVE This review surveys both traditional ML and DL literature with particular application to breast cancer diagnosis, and provides a brief overview of several well-known DL networks. METHODS We present an overview of ML and DL techniques with particular application to breast cancer. Specifically, we searched the PubMed, Google Scholar, MEDLINE, ScienceDirect, Springer, and Web of Science databases and retrieved DL studies from the past 5 years that used multiview mammogram datasets. RESULTS The analysis of traditional ML reveals limited use of these methods, whereas DL methods show great potential for implementation in clinical analysis and for improving the diagnostic capability of existing CAD systems. CONCLUSIONS The literature indicates that heterogeneous breast densities make masses more challenging to detect and classify than calcifications. Traditional ML methods remain confined to particular density types or datasets. Although DL methods show promising improvements in breast cancer diagnosis, issues of data scarcity and computational cost remain; these have been mitigated to a significant extent by data augmentation and by the improved computational efficiency of DL algorithms.
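As the conclusions note, data augmentation is one of the main remedies for data scarcity in mammography DL. Below is a minimal, hypothetical torchvision sketch of label-preserving augmentations for grayscale mammogram patches; the specific transforms and parameters are illustrative assumptions, not taken from the review.

```python
# A minimal sketch (not from the review) of data augmentation for grayscale
# mammogram patches: geometric and intensity transforms that preserve labels.
from PIL import Image
import torchvision.transforms as T

augment = T.Compose([
    T.RandomHorizontalFlip(p=0.5),               # laterality is not class-defining
    T.RandomRotation(degrees=15),                # small rotations preserve pathology
    T.RandomResizedCrop(224, scale=(0.8, 1.0)),  # jitter the field of view
    T.ColorJitter(brightness=0.2, contrast=0.2), # mild intensity variation
    T.ToTensor(),
])

patch = Image.new("L", (512, 512))   # stand-in for a mammogram patch
tensor = augment(patch)              # a fresh random view on every call
```

Applying such a pipeline on every epoch yields a different view of the same case, effectively enlarging a small mammography dataset.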
Affiliation(s)
- Syed Jamal Safdar Gardezi, Ahmed Elazab, Baiying Lei, and Tianfu Wang: National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
752
Peng H, Dong D, Fang MJ, Li L, Tang LL, Chen L, Li WF, Mao YP, Fan W, Liu LZ, Tian L, Lin AH, Sun Y, Tian J, Ma J. Prognostic Value of Deep Learning PET/CT-Based Radiomics: Potential Role for Future Individual Induction Chemotherapy in Advanced Nasopharyngeal Carcinoma. Clin Cancer Res 2019; 25:4271-4279. PMID: 30975664; DOI: 10.1158/1078-0432.ccr-18-3065.
Abstract
PURPOSE We aimed to evaluate the value of deep learning on positron emission tomography with computed tomography (PET/CT)-based radiomics for individual induction chemotherapy (IC) in advanced nasopharyngeal carcinoma (NPC). EXPERIMENTAL DESIGN We constructed radiomics signatures and a nomogram for predicting disease-free survival (DFS) based on features extracted from PET and CT images in a training set (n = 470), and then validated them on a test set (n = 237). Harrell's concordance index (C-index) and time-independent receiver operating characteristic (ROC) analysis were applied to evaluate the discriminatory ability of the radiomics nomogram and to compare the radiomics signatures with plasma Epstein-Barr virus (EBV) DNA. RESULTS A total of 18 features were selected to construct the CT-based and PET-based signatures, which were significantly associated with DFS (P < 0.001). Using these signatures, we proposed a radiomics nomogram with a C-index of 0.754 [95% confidence interval (95% CI), 0.709-0.800] in the training set and 0.722 (95% CI, 0.652-0.792) in the test set. Accordingly, 206 (29.1%) patients were stratified into a high-risk group and the other 501 (70.9%) into a low-risk group by the radiomics nomogram; the corresponding 5-year DFS rates were 50.1% and 87.6%, respectively (P < 0.0001). High-risk patients could benefit from IC, whereas low-risk patients could not. Moreover, the radiomics nomogram performed significantly better than the EBV DNA-based model (C-index: 0.754 vs. 0.675 in the training set and 0.722 vs. 0.671 in the test set) in risk stratification and in guiding IC. CONCLUSIONS Deep learning PET/CT-based radiomics could serve as a reliable and powerful tool for prognosis prediction and may act as a potential indicator for individual IC in advanced NPC.
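For readers unfamiliar with the metric, the following is a small NumPy sketch (independent of the authors' code) of Harrell's concordance index used above to grade the nomogram.

```python
# A minimal NumPy sketch of Harrell's C-index: the fraction of comparable
# patient pairs whose predicted risk ordering matches the observed survival
# ordering (ties in predicted risk count 0.5).
import numpy as np

def harrell_c_index(time, event, risk):
    time, event, risk = map(np.asarray, (time, event, risk))
    concordant, comparable = 0.0, 0
    for i in range(len(time)):
        if not event[i]:          # a pair is comparable only when the earlier
            continue              # time corresponds to an observed event
        for j in range(len(time)):
            if time[i] < time[j]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable

# Toy check: perfectly ordered risks give a C-index of 1.0.
print(harrell_c_index([2, 5, 7, 9], [1, 1, 0, 1], [0.9, 0.7, 0.4, 0.1]))
```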
Affiliation(s)
- Hao Peng, Ling-Long Tang, Lei Chen, Wen-Fei Li, Yan-Ping Mao, Ying Sun, and Jun Ma: Department of Radiation Oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong, P. R. China
- Di Dong and Meng-Jie Fang: CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, P. R. China; University of Chinese Academy of Sciences, Beijing, P. R. China
- Lu Li: Department of Radiation Oncology, Nanfang Hospital, Southern Medical University, Guangzhou, P. R. China; Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, Guangdong, P. R. China
- Wei Fan: Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, Guangdong, P. R. China
- Li-Zhi Liu and Li Tian: Imaging Diagnosis and Interventional Center, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, Guangdong, P. R. China
- Ai-Hua Lin: Department of Medical Statistics and Epidemiology, School of Public Health, Sun Yat-sen University, P. R. China
- Jie Tian: CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, P. R. China; Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Medicine, Beihang University, Beijing, P. R. China; Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi'an, Shaanxi, P. R. China
753
Liu F, Yadav P, Baschnagel AM, McMillan AB. MR-based treatment planning in radiation therapy using a deep learning approach. J Appl Clin Med Phys 2019; 20:105-114. PMID: 30861275; PMCID: PMC6414148; DOI: 10.1002/acm2.12554.
Abstract
Purpose To develop and evaluate the feasibility of a deep learning approach for MR-based treatment planning (deepMTP) in brain tumor radiation therapy. Methods and materials A treatment planning pipeline was constructed using a deep learning approach to generate continuously valued pseudo-CT images from MR images. A deep convolutional neural network was designed to identify tissue features in volumetric head MR images, trained with co-registered kVCT images. A set of 40 retrospective 3D T1-weighted head images was used to train the model, which was then evaluated in 10 clinical cases with brain metastases by comparing treatment plans based on deep-learning-generated pseudo-CT images with plans based on an acquired planning kVCT. Paired-sample Wilcoxon signed rank tests were used to compare dosimetric parameters of plans made with deepMTP pseudo-CT images against the kVCT-based clinical treatment plan (CTTP). Results deepMTP provides an accurate pseudo CT, with Dice coefficients of 0.95 ± 0.01 for air, 0.94 ± 0.02 for soft tissue, and 0.85 ± 0.02 for bone, and a mean absolute error of 75 ± 23 HU compared with acquired kVCTs. The absolute percentage differences of dosimetric parameters between deepMTP and CTTP were 0.24% ± 0.46% for planning target volume (PTV) volume, 1.39% ± 1.31% for maximum dose, and 0.27% ± 0.79% for the PTV receiving 95% of the prescribed dose (V95). Furthermore, no significant difference was found between deepMTP and CTTP for PTV volume (P = 0.50), maximum dose (P = 0.83), or V95 (P = 0.19). Conclusions We have developed an automated approach (deepMTP) that generates a continuously valued pseudo CT from a single high-resolution 3D MR image and evaluated it in partial brain tumor treatment planning. deepMTP provided dose distributions with no significant difference relative to kVCT-based standard volumetric modulated arc therapy plans.
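The two headline image-quality metrics, per-tissue Dice overlap and mean absolute error in Hounsfield units, are easy to state in code. The sketch below uses synthetic volumes and an assumed bone threshold; it is not the authors' implementation.

```python
# A small sketch of pseudo-CT quality metrics: Dice overlap per tissue class
# and mean absolute error in Hounsfield units (HU). Thresholds are assumptions.
import numpy as np

def dice(mask_a, mask_b):
    """Dice overlap of two boolean masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def mae_hu(pseudo_ct, kvct):
    """Mean absolute HU error between pseudo CT and acquired kVCT."""
    return np.abs(pseudo_ct.astype(float) - kvct.astype(float)).mean()

# Synthetic example; a bone class might be thresholded around 150 HU.
pseudo = np.random.normal(0, 100, (32, 64, 64))
kvct = pseudo + np.random.normal(0, 20, pseudo.shape)
print(dice(pseudo > 150, kvct > 150), mae_hu(pseudo, kvct))
```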
Affiliation(s)
- Fang Liu and Alan B McMillan: Department of Radiology, School of Medicine and Public Health, University of Wisconsin, Madison, WI, USA
- Poonam Yadav and Andrew M Baschnagel: Department of Human Oncology, School of Medicine and Public Health, University of Wisconsin, Madison, WI, USA
754
Brain tumor classification for MR images using transfer learning and fine-tuning. Comput Med Imaging Graph 2019; 75:34-46. DOI: 10.1016/j.compmedimag.2019.05.001.
755
Thakur A, Thapar D, Rajan P, Nigam A. Deep metric learning for bioacoustic classification: Overcoming training data scarcity using dynamic triplet loss. J Acoust Soc Am 2019; 146:534. PMID: 31370640; DOI: 10.1121/1.5118245.
Abstract
Bioacoustic classification often suffers from the lack of labeled data. This hinders the effective utilization of state-of-the-art deep learning models in bioacoustics. To overcome this problem, the authors propose a deep metric learning-based framework that provides effective classification, even when only a small number of per-class training examples are available. The proposed framework utilizes a multiscale convolutional neural network and the proposed dynamic variant of the triplet loss to learn a transformation space where intra-class separation is minimized and inter-class separation is maximized by a dynamically increasing margin. The process of learning this transformation is known as deep metric learning. The triplet loss analyzes three examples (referred to as a triplet) at a time to perform deep metric learning. The number of possible triplets increases cubically with the dataset size, making triplet loss more suitable than the cross-entropy loss in data-scarce conditions. Experiments on three different publicly available datasets show that the proposed framework performs better than existing bioacoustic classification methods. Experimental results also demonstrate the superiority of dynamic triplet loss over cross-entropy loss in data-scarce conditions. Furthermore, unlike existing bioacoustic classification methods, the proposed framework has been extended to provide open-set classification.
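The central mechanism, a triplet loss whose margin grows as training proceeds, can be sketched in PyTorch as follows. The linear margin schedule is an illustrative assumption, not the authors' exact formulation.

```python
# A hedged PyTorch sketch of a "dynamic" triplet loss: the margin increases
# with the training epoch, pushing inter-class separation wider over time.
import torch
import torch.nn.functional as F

def dynamic_triplet_loss(anchor, positive, negative, epoch,
                         base_margin=0.2, growth=0.05, max_margin=1.0):
    margin = min(base_margin + growth * epoch, max_margin)  # assumed schedule
    d_pos = F.pairwise_distance(anchor, positive)   # intra-class distance
    d_neg = F.pairwise_distance(anchor, negative)   # inter-class distance
    return F.relu(d_pos - d_neg + margin).mean()

# Toy usage: 8 triplets of 64-d embeddings from a (multiscale) CNN.
a, p, n = (torch.randn(8, 64) for _ in range(3))
print(dynamic_triplet_loss(a, p, n, epoch=10))
```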
Affiliation(s)
- Anshul Thakur, Daksh Thapar, Padmanabhan Rajan, and Aditya Nigam: School of Computing and Electrical Engineering, IIT Mandi, Mandi, Himachal Pradesh-175005, India
756
Shahid AH, Singh M. Computational intelligence techniques for medical diagnosis and prognosis: Problems and current developments. Biocybern Biomed Eng 2019. DOI: 10.1016/j.bbe.2019.05.010.
757
Agarwal R, Diaz O, Lladó X, Yap MH, Martí R. Automatic mass detection in mammograms using deep convolutional neural networks. J Med Imaging (Bellingham) 2019; 6:031409. PMID: 35834317; PMCID: PMC6381602; DOI: 10.1117/1.jmi.6.3.031409.
Abstract
With recent advances in deep learning, the use of convolutional neural networks (CNNs) in medical imaging has become very promising. The aim of our paper is to propose a patch-based CNN method for automated mass detection in full-field digital mammograms (FFDM). In addition to evaluating CNNs pretrained on the ImageNet dataset, we investigate the use of transfer learning for a particular domain adaptation: the CNN is first trained on a large public database of digitized mammograms (the CBIS-DDSM dataset), and the model is then transferred and tested on a smaller database of digital mammograms (the INbreast dataset). We evaluate three widely used CNNs (VGG16, ResNet50, InceptionV3) and show that InceptionV3 obtains the best performance for classifying mass and nonmass breast regions on CBIS-DDSM. We further show the benefit of domain adaptation between the CBIS-DDSM (digitized) and INbreast (digital) datasets using the InceptionV3 CNN. Mass detection evaluation follows a fivefold cross-validation strategy using free-response operating characteristic curves. Results show that transfer learning from CBIS-DDSM obtains substantially higher performance, with a best true positive rate (TPR) of 0.98 ± 0.02 at 1.67 false positives per image (FPI), compared with transfer learning from ImageNet, with a TPR of 0.91 ± 0.07 at 2.1 FPI. In addition, the proposed framework improves upon mass detection results reported in the literature on the INbreast database in terms of both TPR and FPI.
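The two-stage transfer described above (ImageNet weights, then CBIS-DDSM, then INbreast) might look like the following PyTorch sketch; the two-class head and the training details are assumptions, not the authors' code.

```python
# A minimal sketch of staged transfer learning with InceptionV3 in torchvision.
import torch.nn as nn
from torchvision import models

model = models.inception_v3(weights=models.Inception_V3_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)                      # mass vs. non-mass
model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, 2)  # auxiliary head too

# Stage 1: fine-tune on CBIS-DDSM patches (training loop omitted).
# Stage 2: continue training the same weights on INbreast patches,
# typically with a lower learning rate for the pretrained layers.
```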
Affiliation(s)
- Richa Agarwal, Oliver Diaz, Xavier Lladó, and Robert Martí: University of Girona, VICOROB, Computer Vision and Robotics Institute, Girona, Spain
- Moi Hoon Yap: Manchester Metropolitan University, School of Computing, Mathematics and Digital Technology, Manchester, United Kingdom
758
Sanders JW, Fletcher JR, Frank SJ, Liu HL, Johnson JM, Zhou Z, Chen HSM, Venkatesan AM, Kudchadker RJ, Pagel MD, Ma J. Deep learning application engine (DLAE): Development and integration of deep learning algorithms in medical imaging. SoftwareX 2019; 10:100347. PMID: 34113706; PMCID: PMC8188855; DOI: 10.1016/j.softx.2019.100347.
Abstract
Herein we introduce a deep learning (DL) application engine (DLAE) system concept, present potential uses of it, and describe pathways for its integration in clinical workflows. An open-source software application was developed to provide a code-free approach to DL for medical imaging applications. DLAE supports several DL techniques used in medical imaging, including convolutional neural networks, fully convolutional networks, generative adversarial networks, and bounding box detectors. Several example applications using clinical images were developed and tested to demonstrate the capabilities of DLAE. Additionally, a model deployment example was demonstrated in which DLAE was used to integrate two trained models into a commercial clinical software package.
Affiliation(s)
- Jeremiah W. Sanders, Ho-Ling Liu, and Jingfei Ma: Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd., Unit 1472, Houston, TX 77030, USA; Medical Physics Graduate Program, MD Anderson Cancer Center UTHealth Graduate School of Biomedical Sciences, 1515 Holcombe Blvd., Unit 1472, Houston, TX 77030, USA
- Justin R. Fletcher: Odyssey Systems Consulting, LLC, 550 Lipoa Parkway, Kihei, Maui, HI, USA
- Steven J. Frank: Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd., Unit 1422, Houston, TX 77030, USA
- Jason M. Johnson: Department of Diagnostic Radiology, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd., Unit 1473, Houston, TX 77030, USA
- Zijian Zhou and Henry Szu-Meng Chen: Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd., Unit 1472, Houston, TX 77030, USA
- Aradhana M. Venkatesan: Medical Physics Graduate Program, MD Anderson Cancer Center UTHealth Graduate School of Biomedical Sciences, 1515 Holcombe Blvd., Unit 1472, Houston, TX 77030, USA; Department of Diagnostic Radiology, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd., Unit 1473, Houston, TX 77030, USA
- Rajat J. Kudchadker: Medical Physics Graduate Program, MD Anderson Cancer Center UTHealth Graduate School of Biomedical Sciences, 1515 Holcombe Blvd., Unit 1472, Houston, TX 77030, USA; Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd., Unit 1420, Houston, TX 77030, USA
- Mark D. Pagel: Medical Physics Graduate Program, MD Anderson Cancer Center UTHealth Graduate School of Biomedical Sciences, 1515 Holcombe Blvd., Unit 1472, Houston, TX 77030, USA; Department of Cancer Systems Imaging, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd., Unit 1907, Houston, TX 77030, USA
759
Cui L, Feng J, Yang L. Towards Fine Whole-Slide Skeletal Muscle Image Segmentation through Deep Hierarchically Connected Networks. J Healthc Eng 2019; 2019:5191630. PMID: 31346401; PMCID: PMC6620852; DOI: 10.1155/2019/5191630.
Abstract
Automatic skeletal muscle image segmentation (MIS) is crucial in the diagnosis of muscle-related diseases. However, accurate methods often suffer from expensive computations that do not scale to large-scale, whole-slide muscle images. In this paper, we present a fast and accurate method to enable the more clinically meaningful whole-slide MIS. Leveraging the recently popular convolutional neural network (CNN), we train our network end-to-end to perform pixelwise classification directly. Our deep network comprises encoder and decoder modules. The encoder module captures rich, hierarchical representations through a series of convolutional and max-pooling layers. Multiple decoders then use these multilevel representations to perform multiscale predictions, which are combined to generate a more robust dense segmentation as the network output. Each decoder has an independent loss, and the decoders are jointly trained with a weighted loss function to address fine-grained pixelwise prediction. We also propose a two-stage transfer learning strategy to effectively train such a deep network. Extensive experiments on a challenging muscle image dataset demonstrate the significantly improved efficiency and accuracy of our method compared with recent state-of-the-art methods.
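A condensed PyTorch sketch of the multi-decoder design with a jointly weighted loss is given below; the layer sizes and loss weights are illustrative assumptions, not the paper's architecture.

```python
# A sketch of multiscale decoders over a shared encoder, trained with a
# weighted joint loss. Two scales stand in for the paper's deeper hierarchy.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiDecoderSeg(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.enc2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.dec1 = nn.Conv2d(16, n_classes, 1)   # decoder on shallow features
        self.dec2 = nn.Conv2d(32, n_classes, 1)   # decoder on deep features

    def forward(self, x):
        f1 = self.enc1(x)
        f2 = self.enc2(f1)
        size = x.shape[2:]
        p1 = F.interpolate(self.dec1(f1), size=size, mode="bilinear", align_corners=False)
        p2 = F.interpolate(self.dec2(f2), size=size, mode="bilinear", align_corners=False)
        return p1, p2, (p1 + p2) / 2              # multiscale predictions + fused output

def joint_loss(preds, target, weights=(0.3, 0.3, 0.4)):
    # Each decoder contributes its own cross-entropy, combined by fixed weights.
    return sum(w * F.cross_entropy(p, target) for w, p in zip(weights, preds))

net = MultiDecoderSeg()
x, y = torch.randn(2, 3, 64, 64), torch.randint(0, 2, (2, 64, 64))
print(joint_loss(net(x), y))
```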
Affiliation(s)
- Lei Cui and Jun Feng: Department of Information Science and Technology, Northwest University, Xi'an, China
- Lin Yang: The College of Life Sciences, Northwest University, Xi'an, China
760
Huang Y, Meng S, Zhao P, Li C. Wood quality of Chinese zither panel based on convolutional neural network and near-infrared spectroscopy. Appl Opt 2019; 58:5122-5127. PMID: 31503833; DOI: 10.1364/AO.58.005122.
Abstract
Currently, the grade of wood used for Chinese zither panels is mainly determined manually. This process is slow and subjective, and cannot meet the requirements of mass production in the musical instrument market. This paper proposes a method combining a convolutional neural network (CNN) and near-infrared spectroscopy to determine wood quality. First, a Savitzky-Golay second derivative is used to denoise the raw data. Kernel principal component analysis is then used to reduce the dimensionality of the spectral data, and the resulting variables are fed to the proposed one-dimensional CNN model. The model introduces L2 regularization and a multi-channel convolution kernel strategy, and is finalized by seeking the optimal convolution kernel size. Finally, test samples are fed to the proposed CNN model to verify its performance. The classification accuracy on the test set is 93.9%. Our model has strong learning ability and high robustness. The results show that the proposed method can effectively identify different grades of Chinese zither panel wood.
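The preprocessing chain (Savitzky-Golay second derivative, kernel PCA, one-dimensional CNN) can be sketched as follows; window sizes, component counts, and the tiny network are illustrative assumptions rather than the paper's settings.

```python
# A hedged sketch of the NIR preprocessing and 1D-CNN classification chain.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.decomposition import KernelPCA
import torch
import torch.nn as nn

spectra = np.random.rand(100, 700)                      # 100 synthetic NIR spectra

# Savitzky-Golay second derivative denoises and removes baseline drift.
deriv2 = savgol_filter(spectra, window_length=11, polyorder=2, deriv=2, axis=1)

# Kernel PCA reduces spectral dimensionality before the CNN.
features = KernelPCA(n_components=50, kernel="rbf").fit_transform(deriv2)

# Multi-channel 1D convolution kernels over the reduced variables; L2
# regularization would enter via the optimizer's weight_decay parameter.
cnn = nn.Sequential(
    nn.Conv1d(1, 8, kernel_size=5, padding=2), nn.ReLU(),
    nn.Flatten(), nn.Linear(8 * 50, 3),                 # e.g., 3 wood grades
)
logits = cnn(torch.tensor(features, dtype=torch.float32).unsqueeze(1))
print(logits.shape)                                     # (100, 3)
```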
761
Kumar A, Fulham M, Feng D, Kim J. Co-Learning Feature Fusion Maps from PET-CT Images of Lung Cancer. IEEE Trans Med Imaging 2019; 39:204-217. PMID: 31217099; DOI: 10.1109/tmi.2019.2923601.
Abstract
The analysis of multi-modality positron emission tomography and computed tomography (PET-CT) images for computer-aided diagnosis applications (e.g., detection and segmentation) requires combining the sensitivity of PET for detecting abnormal regions with the anatomical localization of CT. Current methods for PET-CT image analysis either process the modalities separately or fuse information from each modality based on knowledge about the image analysis task. These methods generally do not consider the spatially varying visual characteristics that encode different information across the modalities, which have different priorities at different locations. For example, high abnormal PET uptake in the lungs is more meaningful for tumor detection than physiological PET uptake in the heart. Our aim is to improve the fusion of complementary information in multi-modality PET-CT with a new supervised convolutional neural network (CNN) that learns to fuse complementary information for multi-modality medical image analysis. Our CNN first encodes modality-specific features and then uses them to derive a spatially varying fusion map that quantifies the relative importance of each modality's features across spatial locations. These fusion maps are multiplied with the modality-specific feature maps to obtain a representation of the complementary multi-modality information at different locations, which can then be used for image analysis. We evaluated the ability of our CNN to detect and segment multiple regions (lungs, mediastinum, tumors) with different fusion requirements using a dataset of PET-CT images of lung cancer. We compared our method to baseline techniques for multi-modality image fusion (fused-input (FS), multi-branch (MB), and multichannel (MC) techniques) and segmentation. Our findings show that our CNN had a significantly higher foreground detection accuracy (99.29%, p < 0.05) than the fusion baselines (FS: 99.00%, MB: 99.08%, MC: 98.92%) and a significantly higher Dice score (63.85%) than recent PET-CT tumor segmentation methods.
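The fusion-map idea condenses into a few lines of PyTorch: per-modality encoders, a learned per-pixel softmax over modalities, and a weighted combination. The channel sizes below are assumptions.

```python
# A hedged sketch of spatially varying co-learned fusion for PET-CT features.
import torch
import torch.nn as nn

class FusionMap(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.pet_enc = nn.Conv2d(1, ch, 3, padding=1)   # modality-specific encoders
        self.ct_enc = nn.Conv2d(1, ch, 3, padding=1)
        self.fuse = nn.Conv2d(2 * ch, 2, 1)             # one weight map per modality

    def forward(self, pet, ct):
        f_pet, f_ct = self.pet_enc(pet), self.ct_enc(ct)
        # Softmax over the modality axis yields per-pixel relative importance.
        w = torch.softmax(self.fuse(torch.cat([f_pet, f_ct], dim=1)), dim=1)
        return w[:, 0:1] * f_pet + w[:, 1:2] * f_ct     # spatially varying fusion

out = FusionMap()(torch.randn(1, 1, 96, 96), torch.randn(1, 1, 96, 96))
print(out.shape)                                        # (1, 32, 96, 96)
```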
762
Self-supervised iterative refinement learning for macular OCT volumetric data classification. Comput Biol Med 2019; 111:103327. PMID: 31302456; DOI: 10.1016/j.compbiomed.2019.103327.
Abstract
We present self-supervised iterative refinement learning (SIRL), a pipeline that improves a class of macular optical coherence tomography (OCT) volumetric image classification algorithms in which two-dimensional (2D) image classification is first applied to each B-scan in an OCT volume and the B-scan-level results are then combined to classify the volume. Specifically, SIRL consists of repeated training-sieving-relabeling steps. In the initialization stage, each 2D image is assigned the label of the volume it belongs to, yielding an initial label set. In the training stage, the network is trained on the current label set. In the sieving and relabeling stage, the label of each 2D image is renewed based on the trained network's classification result, producing a new label set. Experiments were conducted on a clinical dataset and a public dataset, comparing models trained with a standard scheme against our proposed method under five-fold cross-validation. Our method achieves sensitivity, specificity, and accuracy of 89.74%, 94.87%, and 93.18%, respectively, on the clinical dataset; on the public dataset, the corresponding values are 98.22%, 90.43%, and 95.88%. The results demonstrate the effectiveness of our method in improving B-scan-classification-based macular OCT volumetric image classification algorithms.
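The training-sieving-relabeling loop reduces to a short schematic; `train_classifier` and `predict` below stand in for the paper's 2D classifier and are hypothetical callables.

```python
# A schematic sketch (ours, not the authors' code) of the SIRL loop: B-scan
# labels start as the label of their parent volume and are iteratively
# renewed from the trained classifier's own predictions.
def sirl(volumes, train_classifier, predict, n_rounds=5):
    # Initialization: every B-scan inherits its volume's label.
    scans = [(b, v.label) for v in volumes for b in v.bscans]
    for _ in range(n_rounds):
        model = train_classifier(scans)           # train on the current label set
        # Sieve and relabel: replace each B-scan label with the model's call.
        scans = [(b, predict(model, b)) for b, _ in scans]
    return model
```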
763
Region-Based Automated Localization of Colonoscopy and Wireless Capsule Endoscopy Polyps. Appl Sci (Basel) 2019. DOI: 10.3390/app9122404.
Abstract
The early detection of polyps could help prevent colorectal cancer. Automated detection of polyps on the colon walls could reduce the number of false negatives that occur due to manual examination errors or polyps hidden behind folds, and could help doctors locate polyps in screening tests such as colonoscopy and wireless capsule endoscopy; missed polyps may progress into malignant lesions. In this paper, we propose a modified region-based convolutional neural network (R-CNN) that generates masks around polyps detected in still frames. The locations of the polyps in the image are marked, assisting doctors in examining them. Features are extracted from the polyp images using pretrained ResNet-50 and ResNet-101 models through feature extraction and fine-tuning techniques. Several publicly available polyp datasets are analyzed with various pretrained weights. Interestingly, fine-tuning with balloon data (polyp-like natural images) improved the polyp detection rate. The optimal CNN models on the colonoscopy datasets CVC-ColonDB, CVC-PolypHD, and ETIS-Larib produced (F1 score, F2 score) values of (90.73, 91.27), (80.65, 79.11), and (76.43, 78.70), respectively. The best model on the wireless capsule endoscopy dataset achieved (96.67, 96.10). The experimental results indicate better localization of polyps compared with recent traditional and deep learning methods.
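torchvision ships a region-based detector with mask output similar in spirit to the modified R-CNN described above; the sketch below shows COCO-pretrained inference only, with the polyp fine-tuning itself omitted.

```python
# A minimal torchvision sketch of a mask-producing region-based detector.
# Fine-tuning on polyp data (replacing the box/mask heads) is not shown.
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

model = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()  # COCO-pretrained

with torch.no_grad():
    preds = model([torch.rand(3, 480, 640)])   # one RGB frame scaled to [0, 1]
print(preds[0]["boxes"].shape, preds[0]["masks"].shape)
```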
764
Maqsood M, Nazir F, Khan U, Aadil F, Jamal H, Mehmood I, Song OY. Transfer Learning Assisted Classification and Detection of Alzheimer's Disease Stages Using 3D MRI Scans. Sensors (Basel) 2019; 19:2645. PMID: 31212698; PMCID: PMC6603745; DOI: 10.3390/s19112645.
Abstract
Alzheimer's disease affects human brain cells and results in dementia. The gradual deterioration of brain cells impairs the ability to perform daily routine tasks. Treatment for the disease is still immature; however, early diagnosis may help restrain its progression. For early detection of Alzheimer's disease through brain magnetic resonance imaging (MRI), an automated detection and classification system needs to be developed that can detect and classify subjects with dementia. Such systems need not only to identify dementia patients but also to distinguish the four progressive stages of dementia. The proposed system uses an efficient transfer learning technique to classify the images by fine-tuning a pretrained convolutional network, AlexNet. The architecture is trained and tested on pre-processed segmented (grey matter, white matter, and cerebrospinal fluid) and unsegmented images for both binary and multi-class classification. The performance of the proposed system is evaluated on the Open Access Series of Imaging Studies (OASIS) dataset. The algorithm showed promising results, with a best overall accuracy of 92.85% for multi-class classification of unsegmented images.
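Fine-tuning a pretrained AlexNet for a small number of target classes takes only a few lines in PyTorch; the frozen feature extractor and four-class head below are illustrative assumptions, not the paper's exact setup.

```python
# A brief sketch of AlexNet fine-tuning for multi-class dementia staging.
import torch.nn as nn
from torchvision import models

model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
for p in model.features.parameters():
    p.requires_grad = False                      # keep the pretrained convolutions

# Replace the final fully connected layer for the four assumed stages
# (e.g., non-demented, very mild, mild, moderate dementia).
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 4)
```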
Affiliation(s)
- Muazzam Maqsood, Umair Khan, and Farhan Aadil: Department of Computer Science, COMSATS University Islamabad, Attock Campus, Attock 43600, Pakistan
- Faria Nazir: Department of Computer Science, Capital University of Science and Technology, Islamabad 45750, Pakistan
- Habibullah Jamal: Faculty of Engineering Sciences, Ghulam Ishaq Khan Institute, Topi 23460, Pakistan
- Irfan Mehmood: Department of Media Design and Technology, Faculty of Engineering & Informatics, University of Bradford, Bradford BD7 1DP, UK
- Oh-Young Song: Department of Software, Sejong University, Seoul 05006, Korea
765
Pei Z, Cao S, Lu L, Chen W. Direct Cellularity Estimation on Breast Cancer Histopathology Images Using Transfer Learning. Comput Math Methods Med 2019; 2019:3041250. PMID: 31281408; PMCID: PMC6590493; DOI: 10.1155/2019/3041250.
Abstract
Residual cancer burden (RCB) has been proposed to measure the post-neoadjuvant breast cancer response. In the RCB assessment workflow, estimating cancer cellularity is a critical task, conventionally achieved by manually reviewing hematoxylin and eosin- (H&E-) stained microscopic slides of cancer sections. In this work, we develop an automatic, direct method to estimate cellularity from histopathological image patches using deep feature representation, tree boosting, and a support vector machine (SVM), avoiding the segmentation and classification of nuclei. Using a training set of 2394 patches and a test set of 185 patches, the estimates produced by our method show strong correlation with those of human pathologists in terms of intraclass correlation (ICC) (0.94 with 95% CI of (0.93, 0.96)), Kendall's tau (0.83 with 95% CI of (0.79, 0.86)), and prediction probability (0.93 with 95% CI of (0.91, 0.94)), compared with two other methods (ICC of 0.74 with 95% CI of (0.70, 0.77) and 0.83 with 95% CI of (0.79, 0.86)). Our method improves accuracy and does not rely on annotations of individual nuclei.
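A hedged sketch of the pipeline: pretrained-CNN features feed a support vector regressor, and Kendall's tau measures agreement with pathologist scores. All array sizes and the train/test split below are made up for illustration.

```python
# A sketch of deep features -> SVR cellularity regression, scored with
# Kendall's tau. The 512-d features stand in for pooled CNN activations.
import numpy as np
from sklearn.svm import SVR
from scipy.stats import kendalltau

rng = np.random.default_rng(0)
deep_features = rng.normal(size=(200, 512))      # assumed pooled CNN activations
cellularity = rng.uniform(0, 1, size=200)        # pathologist scores in [0, 1]

svr = SVR(kernel="rbf").fit(deep_features[:150], cellularity[:150])
pred = svr.predict(deep_features[150:])

tau, p_value = kendalltau(pred, cellularity[150:])
print(f"Kendall's tau = {tau:.2f} (p = {p_value:.3f})")
```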
Affiliation(s)
- Ziang Pei, Shuangliang Cao, Lijun Lu, and Wufan Chen: School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China
766
Abdelhafiz D, Yang C, Ammar R, Nabavi S. Deep convolutional neural networks for mammography: advances, challenges and applications. BMC Bioinformatics 2019; 20:281. PMID: 31167642; PMCID: PMC6551243; DOI: 10.1186/s12859-019-2823-4.
Abstract
BACKGROUND The limitations of traditional computer-aided detection (CAD) systems for mammography, the extreme importance of early detection of breast cancer, and the high cost of false diagnoses drive researchers to investigate deep learning (DL) methods for mammograms (MGs). Recent breakthroughs in DL, in particular convolutional neural networks (CNNs), have achieved remarkable advances in the medical field. Specifically, CNNs are used in mammography for lesion localization and detection, risk assessment, image retrieval, and classification tasks. CNNs also help radiologists provide more accurate diagnoses by delivering precise quantitative analysis of suspicious lesions. RESULTS In this survey, we conducted a detailed review of the strengths, limitations, and performance of the most recent CNN applications for analyzing MG images, summarizing 83 research studies that apply CNNs to various tasks in mammography. The survey focuses on identifying the best practices used in these studies to improve diagnostic accuracy, provides deep insight into the architecture of CNNs used for various tasks, and describes the most common publicly available MG repositories, highlighting their main features and strengths. CONCLUSIONS The mammography research community can use this survey as a basis for current and future studies. The comparison of common publicly available MG repositories guides the community in selecting the most appropriate database for a given application. Moreover, this survey lists best practices that improve CNN performance, including image pre-processing and the use of multi-view images. In addition, techniques such as transfer learning (TL), data augmentation, batch normalization, and dropout are appealing solutions for reducing overfitting and increasing the generalization of CNN models. Finally, this survey identifies research challenges and directions that require further investigation by the community.
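Two of the regularizers singled out above, batch normalization and dropout, slot into a patch classifier as in this illustrative PyTorch sketch (layer sizes are assumptions).

```python
# A small sketch of batch normalization and dropout in a mammogram-patch
# classifier; input assumed to be 1 x 224 x 224 grayscale patches.
import torch.nn as nn

classifier = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1),
    nn.BatchNorm2d(16),          # stabilizes training, mild regularization
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Dropout(0.5),             # randomly silences features to curb overfitting
    nn.Linear(16 * 112 * 112, 2),
)
```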
Affiliation(s)
- Dina Abdelhafiz: Department of Computer Science and Engineering, University of Connecticut, Storrs, CT 06269, USA; The Informatics Research Institute (IRI), City of Scientific Research and Technological Application (SRTA-City), New Borg El-Arab, Egypt
- Clifford Yang: Department of Diagnostic Imaging, University of Connecticut Health Center, Farmington, CT 06030, USA
- Reda Ammar and Sheida Nabavi: Department of Computer Science and Engineering, University of Connecticut, Storrs, CT 06269, USA
767
Liu M, Jiang J, Wang Z. Colonic Polyp Detection in Endoscopic Videos With Single Shot Detection Based Deep Convolutional Neural Network. IEEE Access 2019; 7:75058-75066. PMID: 33604228; PMCID: PMC7889061; DOI: 10.1109/access.2019.2921027.
Abstract
A major rise in the prevalence and impact of colorectal cancer (CRC) leads to substantially increasing healthcare costs and even death. It is widely accepted that early detection and removal of colonic polyps can prevent CRC. Detecting colonic polyps in colonoscopy videos is difficult because of the complex environment of the colon and the varied shapes of polyps. Researchers have demonstrated the feasibility of convolutional neural network (CNN)-based polyp detection, but better feature extractors are needed to improve detection performance. In this paper, we investigated the potential of the single shot detector (SSD) framework for detecting polyps in colonoscopy videos. SSD is a one-stage method that uses a feed-forward CNN to produce a collection of fixed-size bounding boxes for each object from different feature maps. Three feature extractors, ResNet50, VGG16, and InceptionV3, were assessed, with multi-scale feature maps integrated into SSD designed for ResNet50 and InceptionV3. We validated the method on the 2015 MICCAI polyp detection challenge datasets and compared it with the teams that entered the challenge, with YOLOv3, and with a two-stage method, Faster R-CNN. Our results demonstrate that the proposed method surpassed all the MICCAI challenge teams and YOLOv3 and was comparable with the two-stage method. In detection speed, our method outperformed all the others and met real-time application requirements. Among the feature extractors, InceptionV3 obtained the best precision and recall. In conclusion, the SSD-based method achieved excellent polyp detection performance and can potentially improve diagnostic accuracy and efficiency.
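torchvision provides an off-the-shelf SSD with the VGG16 backbone evaluated above; the ResNet50 and InceptionV3 variants with custom multi-scale feature maps would need additional code. Only COCO-pretrained inference is shown here.

```python
# A minimal sketch of the SSD one-stage detector with a VGG16 backbone.
import torch
from torchvision.models.detection import ssd300_vgg16

model = ssd300_vgg16(weights="DEFAULT").eval()   # COCO-pretrained detector

with torch.no_grad():
    preds = model([torch.rand(3, 300, 300)])     # one video frame scaled to [0, 1]
print(preds[0]["boxes"].shape, preds[0]["scores"].shape)
```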
Affiliation(s)
- Ming Liu: Hunan Key Laboratory of Nonferrous Resources and Geological Hazard Exploration, Changsha 410083, China
- Jue Jiang: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Zenan Wang: Department of Gastroenterology, Beijing Chaoyang Hospital, the Third Clinical Medical College of Capital Medical University, Beijing 100020, China
768
Pansombut T, Wikaisuksakul S, Khongkraphan K, Phon-On A. Convolutional Neural Networks for Recognition of Lymphoblast Cell Images. Comput Intell Neurosci 2019; 2019:7519603. PMID: 31281337; PMCID: PMC6589284; DOI: 10.1155/2019/7519603.
Abstract
This paper presents recognition of the WHO classification of acute lymphoblastic leukaemia (ALL) subtypes. The two ALL subtypes considered are T-lymphoblastic leukaemia (pre-T) and B-lymphoblastic leukaemia (pre-B). They exhibit characteristics that make it difficult to distinguish the subtypes from their mature cells, lymphocytes. In the conventional approach, handcrafted features must be carefully designed for this complex, domain-specific problem. With a deep learning approach, handcrafted feature engineering can be eliminated, because a deep learning method automates this task through the multilayer architecture of a convolutional neural network (CNN). In this work, we implement a CNN classifier to explore the feasibility of deep learning for identifying lymphocytes and ALL subtypes, benchmarked against the dominant approach of support vector machines (SVMs) with handcrafted feature engineering. Two traditional machine learning classifiers, a multilayer perceptron (MLP) and a random forest, are also applied for comparison. The experiments show that our CNN classifier delivers better performance in identifying normal lymphocytes and pre-B cells, demonstrating great potential for image classification without the multiple preprocessing steps of feature engineering.
Affiliation(s)
- Tatdow Pansombut, Siripen Wikaisuksakul, Kittiya Khongkraphan, and Aniruth Phon-On: Department of Mathematics and Computer Science, Faculty of Science and Technology, Prince of Songkla University, Pattani 94000, Thailand
770
Zhong X, Cao R, Shakeri S, Scalzo F, Lee Y, Enzmann DR, Wu HH, Raman SS, Sung K. Deep transfer learning-based prostate cancer classification using 3 Tesla multi-parametric MRI. Abdom Radiol (NY) 2019; 44:2030-2039. PMID: 30460529; DOI: 10.1007/s00261-018-1824-5.
Abstract
PURPOSE To propose a deep transfer learning (DTL)-based model to distinguish indolent from clinically significant prostate cancer (PCa) lesions, and to compare the DTL-based model with a deep learning (DL) model without transfer learning and with the PIRADS v2 score on 3 Tesla multi-parametric MRI (3T mp-MRI), with whole-mount histopathology (WMHP) validation. METHODS With IRB approval, 140 patients with 3T mp-MRI and WMHP comprised the study cohort. The DTL-based model was trained on 169 lesions in 110 arbitrarily selected patients and tested on the remaining 47 lesions in 30 patients. We compared the DTL-based model with the same DL model architecture trained from scratch, and with classification based on a PIRADS v2 score threshold of 4, using accuracy, sensitivity, specificity, and area under the curve (AUC). Bootstrapping with 2000 resamples was performed to estimate the 95% confidence interval (CI) for AUC. RESULTS In the testing set, the AUCs for discriminating indolent from clinically significant PCa lesions were 0.726 (CI [0.575, 0.876]) for the DTL-based model, 0.687 (CI [0.532, 0.843]) for the DL model without transfer learning, and 0.711 (CI [0.575, 0.847]) for PIRADS v2 score ≥ 4. The DTL-based model achieved a higher AUC than both the DL model without transfer learning and PIRADS v2 score ≥ 4 in discriminating clinically significant lesions in the testing set. CONCLUSION The DeLong test indicated that the DTL-based model achieved an AUC comparable to that of the classification based on PIRADS v2 score (p = 0.89).
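The bootstrap confidence interval reported above can be reproduced schematically as follows; the percentile method and the synthetic scores are assumptions on our part.

```python
# A sketch of a 2000-resample bootstrap CI for AUC on a small test set.
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc_ci(y_true, y_score, n_boot=2000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    aucs = []
    while len(aucs) < n_boot:
        idx = rng.integers(0, len(y_true), len(y_true))
        if len(np.unique(y_true[idx])) < 2:   # a resample must keep both classes
            continue
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(aucs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi

y = np.random.randint(0, 2, 47)               # e.g., 47 test lesions
s = y * 0.4 + np.random.rand(47) * 0.6        # synthetic risk scores
print(bootstrap_auc_ci(y, s))
```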
Affiliation(s)
- Xinran Zhong, Holden H Wu, and Kyunghyun Sung: Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, USA; Physics and Biology in Medicine IDP, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, USA
- Ruiming Cao: Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, USA; Department of Computer Science, School of Engineering, University of California, Los Angeles, Los Angeles, CA, USA
- Sepideh Shakeri, Yeejin Lee, Dieter R Enzmann, and Steven S Raman: Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, USA
- Fabien Scalzo: Department of Neurology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, USA
771
Schorb M, Haberbosch I, Hagen WJH, Schwab Y, Mastronarde DN. Software tools for automated transmission electron microscopy. Nat Methods 2019; 16:471-477. PMID: 31086343; PMCID: PMC7000238; DOI: 10.1038/s41592-019-0396-9.
Abstract
The demand for high-throughput data collection in electron microscopy is increasing for applications in structural and cellular biology. Here we present a combination of software tools that enable automated acquisition guided by image analysis for a variety of transmission electron microscopy acquisition schemes. SerialEM controls microscopes and detectors and can trigger automated tasks at multiple positions with high flexibility. Py-EM interfaces with SerialEM to enact specimen-specific image-analysis pipelines that enable feedback microscopy. As example applications, we demonstrate dose reduction in cryo-electron microscopy experiments, fully automated acquisition of every cell in a plastic section and automated targeting on serial sections for 3D volume imaging across multiple grids.
Affiliation(s)
- Martin Schorb: Electron Microscopy Core Facility, EMBL, Heidelberg, Germany
- Isabella Haberbosch: Department of Hematology, Oncology and Rheumatology, University Hospital Heidelberg, Heidelberg Research Center for Molecular Medicine, EMBL, Heidelberg, Germany; Cell Biology and Biophysics Unit, EMBL, Heidelberg, Germany
- Wim J H Hagen: Structural and Computational Biology Unit and Cryo-Electron Microscopy Service Platform, EMBL, Heidelberg, Germany
- Yannick Schwab: Electron Microscopy Core Facility, EMBL, Heidelberg, Germany; Cell Biology and Biophysics Unit, EMBL, Heidelberg, Germany
- David N Mastronarde: Department of Molecular, Cellular & Developmental Biology, University of Colorado, Boulder, CO, USA
772
Aresta G, Araújo T, Kwok S, Chennamsetty SS, Safwan M, Alex V, Marami B, Prastawa M, Chan M, Donovan M, Fernandez G, Zeineh J, Kohl M, Walz C, Ludwig F, Braunewell S, Baust M, Vu QD, To MNN, Kim E, Kwak JT, Galal S, Sanchez-Freire V, Brancati N, Frucci M, Riccio D, Wang Y, Sun L, Ma K, Fang J, Kone I, Boulmane L, Campilho A, Eloy C, Polónia A, Aguiar P. BACH: Grand challenge on breast cancer histology images. Med Image Anal 2019; 56:122-139. PMID: 31226662; DOI: 10.1016/j.media.2019.05.010.
Abstract
Breast cancer is the most common invasive cancer in women, affecting more than 10% of women worldwide. Microscopic analysis of a biopsy remains one of the most important methods for diagnosing the type of breast cancer. This requires specialized analysis by pathologists in a task that (i) is highly time-consuming and costly and (ii) often leads to nonconsensual results. The relevance and potential of automatic classification algorithms using hematoxylin-eosin-stained histopathological images has already been demonstrated, but the reported results remain sub-optimal for clinical use. With the goal of advancing the state of the art in automatic classification, the Grand Challenge on BreAst Cancer Histology images (BACH) was organized in conjunction with the 15th International Conference on Image Analysis and Recognition (ICIAR 2018). BACH aimed at the classification and localization of clinically relevant histopathological classes in microscopy and whole-slide images from a large annotated dataset, specifically compiled and made publicly available for the challenge. Following a positive response from the scientific community, a total of 64 submissions, out of 677 registrations, effectively entered the competition. The submitted algorithms improved the state of the art in automatic classification of breast cancer with microscopy images to an accuracy of 87%. Convolutional neural networks were the most successful methodology in the BACH challenge. Detailed analysis of the collective results allowed the identification of remaining challenges in the field and recommendations for future developments. The BACH dataset remains publicly available to promote further improvements in the field of automatic classification in digital pathology.
Collapse
Affiliation(s)
- Guilherme Aresta
- INESC TEC - Institute for Systems and Computer Engineering, Technology and Science, Porto 4200-465, Portugal; Faculty of Engineering of University of Porto, Porto 4200-465, Portugal.
| | - Teresa Araújo
- INESC TEC - Institute for Systems and Computer Engineering, Technology and Science, Porto 4200-465, Portugal; Faculty of Engineering of University of Porto, Porto 4200-465, Portugal.
| | | | | | | | | | - Bahram Marami
- The Center for Computational and Systems Pathology, Department of Pathology, Icahn School of Medicine at Mount Sinai and The Mount Sinai Hospital, New York, USA
| | - Marcel Prastawa
- The Center for Computational and Systems Pathology, Department of Pathology, Icahn School of Medicine at Mount Sinai and The Mount Sinai Hospital, New York, USA
| | - Monica Chan
- The Center for Computational and Systems Pathology, Department of Pathology, Icahn School of Medicine at Mount Sinai and The Mount Sinai Hospital, New York, USA
| | - Michael Donovan
- The Center for Computational and Systems Pathology, Department of Pathology, Icahn School of Medicine at Mount Sinai and The Mount Sinai Hospital, New York, USA
| | - Gerardo Fernandez
- The Center for Computational and Systems Pathology, Department of Pathology, Icahn School of Medicine at Mount Sinai and The Mount Sinai Hospital, New York, USA
| | - Jack Zeineh
- The Center for Computational and Systems Pathology, Department of Pathology, Icahn School of Medicine at Mount Sinai and The Mount Sinai Hospital, New York, USA
| | | | - Christoph Walz
- Institute of Pathology, Faculty of Medicine, LMU Munich, Munich, Germany
| | | | | | | | - Quoc Dang Vu
- Department of Computer Science and Engineering, Sejong University, Seoul 05006, Korea
| | - Minh Nguyen Nhat To
- Department of Computer Science and Engineering, Sejong University, Seoul 05006, Korea
| | - Eal Kim
- Department of Computer Science and Engineering, Sejong University, Seoul 05006, Korea
| | - Jin Tae Kwak
- Department of Computer Science and Engineering, Sejong University, Seoul 05006, Korea
| | | | | | - Nadia Brancati
- Institute for High Performance Computing and Networking, National Research Council of Italy (ICAR-CNR), Naples, Italy
| | - Maria Frucci
- Institute for High Performance Computing and Networking, National Research Council of Italy (ICAR-CNR), Naples, Italy
| | - Daniel Riccio
- Institute for High Performance Computing and Networking, National Research Council of Italy (ICAR-CNR), Naples, Italy; University of Naples "Federico II", Naples, Italy
| | - Yaqi Wang
- Key Laboratory of RF Circuits and Systems, Ministry of Education, Hangzhou Dianzi University, Hangzhou 310018, China
| | - Lingling Sun
- Key Laboratory of RF Circuits and Systems, Ministry of Education, Hangzhou Dianzi University, Hangzhou 310018, China; Zhejiang Provincial Laboratory of Integrated Circuits Design, Hangzhou Dianzi University, Hangzhou 310018, China
| | - Kaiqiang Ma
- Key Laboratory of RF Circuits and Systems, Ministry of Education, Hangzhou Dianzi University, Hangzhou 310018, China
| | - Jiannan Fang
- Key Laboratory of RF Circuits and Systems, Ministry of Education, Hangzhou Dianzi University, Hangzhou 310018, China
| | - Ismael Kone
- 2MIA Research Group, LEM2A Lab, Faculté des Sciences, Université Moulay Ismail, Meknes, Morocco
| | - Lahsen Boulmane
- 2MIA Research Group, LEM2A Lab, Faculté des Sciences, Université Moulay Ismail, Meknes, Morocco
| | - Aurélio Campilho
- INESC TEC - Institute for Systems and Computer Engineering, Technology and Science, Porto 4200-465, Portugal; Faculty of Engineering of University of Porto, Porto 4200-465, Portugal
| | - Catarina Eloy
- Laboratório de Anatomia Patológica, Ipatimup Diagnósticos, Rua Júlio Amaral de Carvalho 45, Porto 4200-135, Portugal; Faculdade de Medicina, Universidade do Porto, Alameda Prof Hernâni Monteiro, Porto 4200-319, Portugal; Instituto de Investigação e Inovação em Saúde (i3S), Universidade do Porto, Rua Alfredo Allen, 208, Porto 4200-135, Portugal
| | - António Polónia
- Laboratório de Anatomia Patológica, Ipatimup Diagnósticos, Rua Júlio Amaral de Carvalho 45, Porto 4200-135, Portugal; Faculdade de Medicina, Universidade do Porto, Alameda Prof Hernâni Monteiro, Porto 4200-319, Portugal; Instituto de Investigação e Inovação em Saúde (i3S), Universidade do Porto, Rua Alfredo Allen, 208, Porto 4200-135, Portugal.
| | - Paulo Aguiar
- Instituto de Investigação e Inovação em Saúde (i3S), Universidade do Porto, Rua Alfredo Allen, 208, Porto 4200-135, Portugal; Instituto de Engenharia Biomédica (INEB), Universidade do Porto, Rua Alfredo Allen, 208, Porto 4200-135, Portugal.
| |
Collapse
|
773
|
Ker J, Bai Y, Lee HY, Rao J, Wang L. Automated brain histology classification using machine learning. J Clin Neurosci 2019; 66:239-245. [PMID: 31155342 DOI: 10.1016/j.jocn.2019.05.019] [Citation(s) in RCA: 51] [Impact Index Per Article: 10.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2019] [Accepted: 05/22/2019] [Indexed: 11/16/2022]
Abstract
Brain and breast tumors cause significant morbidity and mortality worldwide. Accurate and expedient histological diagnosis of patients' tumor specimens is required for subsequent treatment and prognostication. Currently, histology slides are visually inspected by trained pathologists, but this process is both time- and labor-intensive. In this paper, we propose an automated process to classify histology slides of both brain and breast tissues using the Google Inception V3 convolutional neural network (CNN). We report successful automated classification of brain histology specimens into normal, low grade glioma (LGG) or high grade glioma (HGG). We also report, for the first time, the benefit of transfer learning across different tissue types. Pre-training on a brain tumor classification task improved CNN accuracy in a separate breast tumor classification task, with the F1 score improving from 0.547 to 0.913. We constructed a dataset using brain histology images from our own hospital and a public breast histology image dataset. Our proposed method can assist human pathologists in the triage and inspection of histology slides to expedite medical care. It can also improve CNN performance in cases where training data are limited, for example for rare tumors, by applying the learned model weights from a more common tissue type.
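A minimal sketch of the cross-tissue transfer idea, assuming PyTorch/torchvision and an ImageNet-pretrained Inception V3 as in the paper: train a head for the brain task, then reuse the adapted backbone for the breast task. Data loading, training loops, and the auxiliary classifier head are omitted; the class counts are taken from the abstract.

```python
# Stage-wise transfer: brain histology first, then breast histology.
import torch.nn as nn
from torchvision import models

num_brain_classes = 3   # normal / LGG / HGG
num_breast_classes = 2  # benign / malignant (placeholder)

# Stage 1: replace the ImageNet head for the brain task and train.
model = models.inception_v3(weights=models.Inception_V3_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, num_brain_classes)
# ... train on brain histology images here ...

# Stage 2: keep the brain-adapted weights, swap the head for the breast task.
model.fc = nn.Linear(model.fc.in_features, num_breast_classes)
# ... fine-tune on breast histology images here ...
```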
Collapse
Affiliation(s)
- Justin Ker
- Department of Neurosurgery, National Neuroscience Institute, 308433, Singapore.
| | - Yeqi Bai
- School of Electrical and Electronic Engineering, Nanyang Technological University, 639798, Singapore.
| | - Hwei Yee Lee
- Department of Pathology, Tan Tock Seng Hospital, 308433, Singapore.
| | - Jai Rao
- Department of Neurosurgery, National Neuroscience Institute, 308433, Singapore.
| | - Lipo Wang
- School of Electrical and Electronic Engineering, Nanyang Technological University, 639798, Singapore.
| |
Collapse
|
774
|
Škrabánek P, Zahradníková A. Automatic assessment of the cardiomyocyte development stages from confocal microscopy images using deep convolutional networks. PLoS One 2019; 14:e0216720. [PMID: 31145728 PMCID: PMC6542571 DOI: 10.1371/journal.pone.0216720] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2018] [Accepted: 04/26/2019] [Indexed: 11/23/2022] Open
Abstract
Computer assisted image acquisition techniques, including confocal microscopy, require efficient tools for automatic sorting of the vast amounts of image data they generate. The complexity of the classification process, the absence of adequate tools, and the insufficient amount of reference data have made automated image processing challenging. Mastering this issue would allow implementation of statistical analysis in research areas such as the formation of t-tubules in cardiac myocytes. We developed a system aimed at automatic assessment of cardiomyocyte development stages (SAACS). The system classifies confocal images of cardiomyocytes whose sarcolemma is stained with a fluorescent dye. We based SAACS on a densely connected convolutional network (DenseNet) topology. We created a set of labelled source images, proposed an appropriate data augmentation technique and designed a class probability graph. We showed that the DenseNet topology, in combination with the augmentation technique, is suitable for the given task, and that high-resolution images are instrumental for image categorization. SAACS, in combination with automatic high-throughput confocal imaging, will allow the application of statistical analysis in research on tubular system development, remodelling and loss.
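A brief sketch, assuming PyTorch/torchvision, of the two ingredients the abstract names: an augmentation pipeline built from flips and rotations (the exact transforms SAACS uses are an assumption here) and a DenseNet backbone re-headed for the development-stage classes.

```python
import torch.nn as nn
from torchvision import models, transforms

# Augmentation by flips and rotations, standing in for the paper's scheme.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    transforms.RandomRotation(degrees=180),
    transforms.ToTensor(),
])

# DenseNet backbone with a new head for the development-stage classes.
n_stages = 5  # placeholder class count
model = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)
model.classifier = nn.Linear(model.classifier.in_features, n_stages)
```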
Collapse
Affiliation(s)
- Pavel Škrabánek
- Institute of Automation and Computer Science, Brno University of Technology, Brno, Czech Republic
| | - Alexandra Zahradníková
- Institute of Molecular Physiology and Genetics, Centre of Biosciences SAS, Bratislava, Slovakia
- Department of Cellular Cardiology, Inst. of Experimental Endocrinology, Biomedical Research Center SAS, Bratislava, Slovakia
| |
Collapse
|
775
|
Deep transfer learning methods for colon cancer classification in confocal laser microscopy images. Int J Comput Assist Radiol Surg 2019; 14:1837-1845. [PMID: 31129859 DOI: 10.1007/s11548-019-02004-1] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/11/2019] [Accepted: 05/20/2019] [Indexed: 02/07/2023]
Abstract
PURPOSE The gold standard for detecting colorectal cancer metastases in the peritoneum is histological evaluation of a removed tissue sample. For feedback during interventions, real-time in vivo imaging with confocal laser microscopy has been proposed for differentiating benign and malignant tissue by manual expert evaluation. Automatic image classification could further improve the surgical workflow by providing immediate feedback. METHODS We analyze the feasibility of classifying tissue from confocal laser microscopy in the colon and peritoneum. For this purpose, we adopt both classical and state-of-the-art convolutional neural networks to learn directly from the images. As the available dataset is small, we investigate several transfer learning strategies, including partial freezing variants and full fine-tuning. We address the distinction of different tissue types, as well as benign and malignant tissue. RESULTS We present a thorough analysis of transfer learning strategies for colorectal cancer with confocal laser microscopy. In the peritoneum, metastases are classified with an AUC of 97.1%, and in the colon the primary tumor is classified with an AUC of 73.1%. In general, transfer learning substantially improves performance over training from scratch. We find that the optimal transfer learning strategy differs between models and classification tasks. CONCLUSIONS We demonstrate that convolutional neural networks and transfer learning can be used to identify cancer tissue with confocal laser microscopy. We show that there is no generally optimal transfer learning strategy; model- as well as task-specific engineering is required. Given the high performance for the peritoneum, even with a small dataset, application for intraoperative decision support could be feasible.
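A sketch of the two transfer strategies being compared, using a stock ResNet-18 in PyTorch for illustration (the paper's exact architectures are not reproduced): "partial freezing" trains only the new classifier head, while "full fine-tuning" leaves every layer trainable.

```python
import torch.nn as nn
from torchvision import models

def build_model(strategy: str, num_classes: int = 2) -> nn.Module:
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    if strategy == "partial_freeze":
        for name, param in model.named_parameters():
            param.requires_grad = name.startswith("fc")  # head only
    # "full_finetune": all parameters stay trainable
    return model
```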
Collapse
|
776
|
Machine Learning Prediction of Liver Stiffness Using Clinical and T2-Weighted MRI Radiomic Data. AJR Am J Roentgenol 2019; 213:592-601. [PMID: 31120779 DOI: 10.2214/ajr.19.21082] [Citation(s) in RCA: 35] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
OBJECTIVE. The purpose of this study is to develop a machine learning model to categorically classify MR elastography (MRE)-derived liver stiffness using clinical and nonelastographic MRI radiomic features in pediatric and young adult patients with known or suspected liver disease. MATERIALS AND METHODS. Clinical data (27 demographic, anthropometric, medical history, and laboratory features), MRI presence of liver fat and chemical shift-encoded fat fraction, and MRE mean liver stiffness measurements were retrieved from electronic medical records. MRI radiomic data (105 features) were extracted from T2-weighted fast spin-echo images. Patients were categorized by mean liver stiffness (< 3 vs ≥ 3 kPa). Support vector machine (SVM) models were used to perform two-class classification using clinical features, radiomic features, and both clinical and radiomic features. Our proposed model was internally evaluated in 225 patients (mean age, 14.1 years) and externally evaluated in an independent cohort of 84 patients (mean age, 13.7 years). Diagnostic performance was assessed using ROC AUC values. RESULTS. In our internal cross-validation model, the combination of clinical and radiomic features produced the best performance (AUC = 0.84), compared with clinical (AUC = 0.77) or radiomic (AUC = 0.70) features alone. Using both clinical and radiomic features, the SVM model was able to correctly classify patients with accuracy of 81.8%, sensitivity of 72.2%, and specificity of 87.0%. In our external validation experiment, this SVM model achieved an accuracy of 75.0%, sensitivity of 63.6%, specificity of 82.4%, and AUC of 0.80. CONCLUSION. An SVM learning model incorporating clinical and T2-weighted radiomic features has fair-to-good diagnostic performance for categorically classifying liver stiffness.
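A scikit-learn sketch of this setup: standardize the concatenated clinical (27) and radiomic (105) feature vectors and train an SVM for the < 3 kPa vs ≥ 3 kPa task. The arrays are synthetic placeholders; feature extraction is not shown, and the RBF kernel is an assumption.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_clinical = rng.normal(size=(225, 27))    # demographic/laboratory features
X_radiomic = rng.normal(size=(225, 105))   # T2-weighted radiomic features
y = rng.integers(0, 2, size=225)           # liver stiffness class

X = np.hstack([X_clinical, X_radiomic])    # combined feature set
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print(cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
```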
Collapse
|
777
|
Hosny KM, Kassem MA, Foaud MM. Classification of skin lesions using transfer learning and augmentation with Alex-net. PLoS One 2019; 14:e0217293. [PMID: 31112591 PMCID: PMC6529006 DOI: 10.1371/journal.pone.0217293] [Citation(s) in RCA: 71] [Impact Index Per Article: 14.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2019] [Accepted: 05/08/2019] [Indexed: 11/19/2022] Open
Abstract
Skin cancer is one of the most deadly diseases in humans. Owing to the high similarity between melanoma and nevus lesions, physicians spend much more time investigating these lesions. Automated classification of skin lesions will save effort, time and human lives. The purpose of this paper is to present an automatic skin-lesion classification system with a higher classification rate, using transfer learning and a pre-trained deep neural network. Transfer learning has been applied to Alex-net in different ways, including fine-tuning the weights of the architecture, replacing the classification layer with a softmax layer that works with two or three kinds of skin lesions, and augmenting the dataset by fixed and random rotation angles. The new softmax layer has the ability to classify the segmented color lesion images into melanoma and nevus, or into melanoma, seborrheic keratosis, and nevus. Three well-known datasets, MED-NODE, DermIS & DermQuest, and ISIC, are used for testing and verifying the proposed method. The proposed DCNN weights have been fine-tuned using the training and testing datasets from ISIC, in addition to 10-fold cross-validation for MED-NODE and DermIS & DermQuest. Accuracy, sensitivity, specificity, and precision measures are used to evaluate the performance of the proposed method and existing methods. For the MED-NODE, DermIS & DermQuest, and ISIC datasets, the proposed method achieved accuracies of 96.86%, 97.70%, and 95.91%, respectively, outperforming existing skin cancer classification methods.
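A minimal PyTorch/torchvision sketch of the head replacement the abstract describes: AlexNet's final 1000-way ImageNet layer is swapped for a new two- or three-class layer before fine-tuning. The class names are taken from the abstract; everything else (training, augmentation) is elided.

```python
import torch.nn as nn
from torchvision import models

model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
# classifier[6] is AlexNet's last fully connected layer in torchvision.
model.classifier[6] = nn.Linear(model.classifier[6].in_features,
                                3)  # melanoma / seborrheic keratosis / nevus
```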
Collapse
Affiliation(s)
- Khalid M. Hosny
- Department of Information Technology, Faculty of Computers and Informatics, Zagazig University, Zagazig, Egypt
| | | | - Mohamed M. Foaud
- Department of Electronics and Communication, Faculty of Engineering, Zagazig University, Zagazig, Egypt
| |
Collapse
|
778
|
A Comparative Study of Texture and Convolutional Neural Network Features for Detecting Collapsed Buildings After Earthquakes Using Pre- and Post-Event Satellite Imagery. REMOTE SENSING 2019. [DOI: 10.3390/rs11101202] [Citation(s) in RCA: 34] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
The accurate and rapid mapping of damaged buildings is essential for emergency response. With the success of deep learning, there is increasing interest in applying it to earthquake-induced building damage mapping, yet its performance has not been compared with conventional methods for detecting building damage after an earthquake. In the present study, the performance of grey-level co-occurrence matrix (GLCM) texture features and convolutional neural network (CNN) features was comparatively evaluated with a random forest classifier. Pre- and post-event very high-resolution (VHR) remote sensing imagery was used to identify collapsed buildings after the 2010 Haiti earthquake. Overall accuracy (OA), allocation disagreement (AD), quantity disagreement (QD), Kappa, user accuracy (UA), and producer accuracy (PA) were used as evaluation metrics. The results showed that CNN features with the random forest method had the best performance, achieving an OA of 87.6% and a total disagreement of 12.4%. Compared with texture features and random forest, CNNs can extract deep features for identifying collapsed buildings, increasing Kappa from 61.7% to 69.5% and reducing the total disagreement from 16.6% to 14.1%. Accuracy was further improved by combining CNN features with random forest rather than using the CNN approach alone: OA increased from 85.9% to 87.6%, and the total disagreement fell from 14.1% to 12.4%. The results indicate that learnt CNN features can outperform texture features for identifying collapsed buildings in VHR remotely sensed imagery.
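A sketch of the texture branch of this comparison, using scikit-image and scikit-learn: GLCM statistics are computed per image patch and fed to a random forest. Patch extraction from the pre-/post-event imagery is omitted, and the chosen GLCM properties and parameters are assumptions.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier

def glcm_features(patch):
    """GLCM statistics at two angles for one grey-level image patch."""
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return np.hstack([graycoprops(glcm, p).ravel()
                      for p in ("contrast", "homogeneity", "energy", "correlation")])

rng = np.random.default_rng(0)
patches = rng.integers(0, 256, size=(100, 32, 32), dtype=np.uint8)  # synthetic
labels = rng.integers(0, 2, size=100)                               # collapsed vs intact
X = np.array([glcm_features(p) for p in patches])
clf = RandomForestClassifier(n_estimators=200).fit(X, labels)
```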
Collapse
|
779
|
Das PK, Meher S, Panda R, Abraham A. A Review of Automated Methods for the Detection of Sickle Cell Disease. IEEE Rev Biomed Eng 2019; 13:309-324. [PMID: 31107662 DOI: 10.1109/rbme.2019.2917780] [Citation(s) in RCA: 22] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Detection of sickle cell disease is a crucial task in medical image analysis. Accurate detection, followed by classification of irregularities, plays a vital role in sickle cell disease diagnosis, treatment planning, and treatment outcome evaluation. Proper segmentation of complex cell clusters makes sickle cell detection more accurate and robust. Cell morphology has a key role in sickle cell detection because the shapes of normal blood cells and sickle cells differ significantly. This review emphasizes state-of-the-art methods and recent advances in the detection, segmentation, and classification of sickle cell disease. We discuss key challenges encountered during segmentation of overlapping blood cells. Moreover, standard validation measures employed in the performance analysis of various methods are also discussed. The methodologies and experiments in this review will be useful for further research and work in this area.
Collapse
|
780
|
Kokil P, Sudharson S. Automatic Detection of Renal Abnormalities by Off-the-shelf CNN Features. ACTA ACUST UNITED AC 2019. [DOI: 10.1080/09747338.2019.1613936] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
Affiliation(s)
- Priyanka Kokil
- Department of Electronics and Communication Engineering, Indian Institute of Information Technology, Design and Manufacturing, Kancheepuram, 600127 Chennai, India
| | - S. Sudharson
- Department of Electronics and Communication Engineering, Indian Institute of Information Technology, Design and Manufacturing, Kancheepuram, 600127 Chennai, India
| |
Collapse
|
781
|
A Hybridized ELM for Automatic Micro Calcification Detection in Mammogram Images Based on Multi-Scale Features. J Med Syst 2019; 43:183. [PMID: 31093789 DOI: 10.1007/s10916-019-1316-3] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2019] [Accepted: 04/25/2019] [Indexed: 01/27/2023]
Abstract
Detection of masses and microcalcifications in digital mammogram images is a challenging task for radiologists. Radiologists use Computer Aided Detection (CAD) frameworks to find breast lesions. Microcalcification may be an early sign of breast cancer. Different kinds of methods have been used to detect and recognize microcalcifications in mammogram images. This paper presents an ELM (Extreme Learning Machine) algorithm for microcalcification detection in digital mammogram images. Interference in the mammographic image is removed at the pre-processing stage. Multi-scale features are extracted by a feature generation model. Since not all extracted features improve performance, feature selection is performed with a nature-inspired optimization algorithm. Finally, the hybridized ELM classifier takes the selected optimal features to classify malignant from benign microcalcifications. The proposed work is compared with various classifiers and shows better performance in training time, sensitivity, specificity and accuracy. The existing approaches considered here are the SVM (Support Vector Machine) and NB (Naïve Bayes) classifiers. The proposed detection system provides 99.04% accuracy, which is better than the existing approaches. The optimal selection of feature vectors and the efficient classifier improve the performance of the proposed system. Results illustrate that the classification performance is better when compared with several other classification approaches.
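For readers unfamiliar with ELMs, here is a minimal NumPy sketch of the core algorithm: a random, untrained hidden layer followed by output weights solved in closed form via the pseudo-inverse. The paper's hybridization and nature-inspired feature selection are not reproduced.

```python
import numpy as np

class ELM:
    """Single-hidden-layer extreme learning machine (random features)."""
    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)        # random hidden features
        T = np.eye(y.max() + 1)[y]              # one-hot targets
        self.beta = np.linalg.pinv(H) @ T       # least-squares output weights
        return self

    def predict(self, X):
        return (np.tanh(X @ self.W + self.b) @ self.beta).argmax(axis=1)
```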
Collapse
|
782
|
Li H, Li A, Wang M. A novel end-to-end brain tumor segmentation method using improved fully convolutional networks. Comput Biol Med 2019; 108:150-160. [DOI: 10.1016/j.compbiomed.2019.03.014] [Citation(s) in RCA: 88] [Impact Index Per Article: 17.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2018] [Revised: 03/13/2019] [Accepted: 03/14/2019] [Indexed: 12/28/2022]
|
783
|
Gatos I, Tsantis S, Spiliopoulos S, Karnabatidis D, Theotokas I, Zoumpoulis P, Loupas T, Hazle JD, Kagadis GC. Temporal stability assessment in shear wave elasticity images validated by deep learning neural network for chronic liver disease fibrosis stage assessment. Med Phys 2019; 46:2298-2309. [PMID: 30929260 DOI: 10.1002/mp.13521] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2018] [Revised: 03/27/2019] [Accepted: 03/27/2019] [Indexed: 02/05/2023] Open
Abstract
PURPOSE To automatically detect and isolate areas of low and high stiffness temporal stability in shear wave elastography (SWE) image sequences, and to assess their impact on improving chronic liver disease (CLD) diagnosis by means of a clinical examination study and a deep learning algorithm employing convolutional neural networks (CNNs). MATERIALS AND METHODS Two hundred SWE image sequences from 88 healthy individuals (F0 fibrosis stage) and 112 CLD patients (46 with mild fibrosis (F1), 16 with significant fibrosis (F2), 22 with severe fibrosis (F3), and 28 with cirrhosis (F4)) were analyzed to detect temporal stiffness stability between frames. An inverse Red, Green, Blue (RGB) colormap-to-stiffness process was performed for each image sequence, followed by a wavelet transform and a fuzzy c-means clustering algorithm. This resulted in a binary mask depicting areas of high and low stiffness temporal stability. The mask was then applied to the first image of the SWE sequence, and the derived, masked SWE image was used to estimate its impact on standard clinical examination and CNN classification. Regarding the impact of the masked SWE image in clinical examination, one measurement by two radiologists was performed in each SWE image and two in the corresponding masked image, measuring areas with high and low stiffness temporal stability. Then, stiffness stability parameters, interobserver variability, and diagnostic performance by means of ROC analysis were assessed. The masked and unmasked sets of SWE images were fed into a CNN scheme for comparison. RESULTS The clinical impact evaluation study showed that the masked SWE images decreased the interobserver variability of the radiologists' measurements in the high stiffness temporal stability areas (interclass correlation coefficient (ICC) = 0.92) compared to the corresponding unmasked ones (ICC = 0.76). In terms of diagnostic accuracy, measurements in the high-stability areas of the masked SWE images (area-under-the-curve (AUC) ranging from 0.800 to 0.851) performed similarly to those in the unmasked SWE images (AUC ranging from 0.805 to 0.893). Regarding the measurements in the low stiffness temporal stability areas of the masked SWE images, results for interobserver variability (ICC = 0.63) and diagnostic accuracy (AUC ranging from 0.622 to 0.791) were poor. Regarding the CNN classification, the masked SWE images showed improved accuracy (ranging from 82.5% to 95.5%) compared to the unmasked ones (ranging from 79.5% to 93.2%) for various CLD stage combinations. CONCLUSION Our detection algorithm excludes unreliable areas in SWE images, reduces interobserver variability, and augments the CNN's accuracy scores for many combinations of fibrosis stages.
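A NumPy sketch of the fuzzy c-means step used to split pixels into high- and low-stability clusters; the wavelet transform and the RGB-to-stiffness inversion are omitted, and a per-pixel stability score is used as a stand-in feature.

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, n_iter=100, seed=0):
    """Standard fuzzy c-means: returns cluster centers and memberships."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c)); U /= U.sum(1, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U = 1.0 / d ** (2 / (m - 1))            # inverse-distance memberships
        U /= U.sum(1, keepdims=True)
    return centers, U

# One synthetic stability score per pixel of a 64 x 64 SWE frame.
stability = np.random.default_rng(1).random((64 * 64, 1))
centers, U = fuzzy_cmeans(stability)
mask = U.argmax(1).reshape(64, 64)              # binary high/low-stability mask
```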
Collapse
Affiliation(s)
- Ilias Gatos
- Department of Medical Physics, University of Patras, Rion, GR, 26504, Greece
| | - Stavros Tsantis
- Department of Medical Physics, University of Patras, Rion, GR, 26504, Greece
| | - Stavros Spiliopoulos
- 2nd Department of Radiology, School of Medicine, University of Athens, Athens, GR, 12461, Greece
| | - Dimitris Karnabatidis
- Department of Radiology, School of Medicine, University of Patras, Rion, GR, 26504, Greece
| | - Ioannis Theotokas
- Diagnostic Echotomography SA, 317C Kifissias Ave., GR, 14561, Kifissia, Greece
| | - Pavlos Zoumpoulis
- Diagnostic Echotomography SA, 317C Kifissias Ave., GR, 14561, Kifissia, Greece
| | - Thanasis Loupas
- Philips Ultrasound, 22100 Bothell Everett Hwy, Bothell, WA, 98021, USA
| | - John D Hazle
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, 77030, USA
| | - George C Kagadis
- Department of Medical Physics, University of Patras, Rion, GR, 26504, Greece
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, 77030, USA
| |
Collapse
|
784
|
Supervised Domain Adaptation for Automatic Sub-cortical Brain Structure Segmentation with Minimal User Interaction. Sci Rep 2019; 9:6742. [PMID: 31043688 PMCID: PMC6494835 DOI: 10.1038/s41598-019-43299-z] [Citation(s) in RCA: 21] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/09/2018] [Accepted: 04/15/2019] [Indexed: 01/19/2023] Open
Abstract
In recent years, several convolutional neural networks (CNNs) have been proposed to segment sub-cortical brain structures from magnetic resonance images (MRIs). Although these methods provide accurate segmentation, there is a reproducibility issue when segmenting MRI volumes from different image domains - e.g., differences in protocol, scanner, and intensity profile. Thus, the network must be retrained from scratch to perform similarly in different imaging domains, limiting the applicability of such methods in clinical settings. In this paper, we employ a transfer learning strategy to solve the domain shift problem. We reduce the number of training images by leveraging the knowledge obtained by a pretrained network, and improve the training speed by reducing the number of trainable parameters of the CNN. We tested our method on two publicly available datasets - MICCAI 2012 and IBSR - and compared it with a commonly used approach, FIRST. Our method showed results similar to those obtained by a fully trained CNN while using a remarkably smaller number of images from the target domain. Moreover, training the network with only one image from the MICCAI 2012 dataset and three images from the IBSR dataset was sufficient to significantly outperform FIRST, with p < 0.001 and p < 0.05, respectively.
Collapse
|
785
|
Bird P. Imaging in the Mobile Domain. Rheum Dis Clin North Am 2019; 45:291-302. [DOI: 10.1016/j.rdc.2019.01.002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
|
786
|
Liu F. SUSAN: segment unannotated image structure using adversarial network. Magn Reson Med 2019; 81:3330-3345. [PMID: 30536427 PMCID: PMC7140982 DOI: 10.1002/mrm.27627] [Citation(s) in RCA: 25] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2018] [Revised: 11/13/2018] [Accepted: 11/13/2018] [Indexed: 12/20/2022]
Abstract
PURPOSE To describe and evaluate a segmentation method using joint adversarial and segmentation convolutional neural network to achieve accurate segmentation using unannotated MR image datasets. THEORY AND METHODS A segmentation pipeline was built using joint adversarial and segmentation network. A convolutional neural network technique called cycle-consistent generative adversarial network (CycleGAN) was applied as the core of the method to perform unpaired image-to-image translation between different MR image datasets. A joint segmentation network was incorporated into the adversarial network to obtain additional functionality for semantic segmentation. The fully automated segmentation method termed as SUSAN was tested for segmenting bone and cartilage on 2 clinical knee MR image datasets using images and annotated segmentation masks from an online publicly available knee MR image dataset. The segmentation results were compared using quantitative segmentation metrics with the results from a supervised U-Net segmentation method and 2 registration methods. The Wilcoxon signed-rank test was used to evaluate the value difference of quantitative metrics between different methods. RESULTS The proposed method SUSAN provided high segmentation accuracy with results comparable to the supervised U-Net segmentation method (most quantitative metrics having P > 0.05) and significantly better than a multiatlas registration method (all quantitative metrics having P < 0.001) and a direct registration method (all quantitative metrics having P < 0.0001) for the clinical knee image datasets. SUSAN also demonstrated the applicability for segmenting knee MR images with different tissue contrasts. CONCLUSION SUSAN performed rapid and accurate tissue segmentation for multiple MR image datasets without the need for sequence specific segmentation annotation. The joint adversarial and segmentation network and training strategy have promising potential applications in medical image segmentation.
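A toy PyTorch sketch of the cycle-consistency constraint at the heart of CycleGAN-style translation: generators G_ab (domain A to B) and G_ba (B to A) are penalized when a round trip fails to reconstruct the input. The tiny conv nets and random tensors are stand-ins; SUSAN additionally attaches adversarial losses and a segmentation branch not shown here.

```python
import torch
import torch.nn as nn

def tiny_generator():
    return nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(8, 1, 3, padding=1))

G_ab, G_ba = tiny_generator(), tiny_generator()
l1 = nn.L1Loss()

a = torch.randn(4, 1, 64, 64)  # batch from the annotated domain
b = torch.randn(4, 1, 64, 64)  # batch from the unannotated domain
cycle_loss = l1(G_ba(G_ab(a)), a) + l1(G_ab(G_ba(b)), b)
print(float(cycle_loss))
```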
Collapse
Affiliation(s)
- Fang Liu
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, 600 Highland Avenue, Madison, Wisconsin 53705–2275
| |
Collapse
|
787
|
Abdolmanafi A, Duong L, Dahdah N, Cheriet F. Intra-Slice Motion Correction of Intravascular OCT Images Using Deep Features. IEEE J Biomed Health Inform 2019; 23:931-941. [DOI: 10.1109/jbhi.2018.2878914] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
|
788
|
Sridar P, Kumar A, Quinton A, Nanan R, Kim J, Krishnakumar R. Decision Fusion-Based Fetal Ultrasound Image Plane Classification Using Convolutional Neural Networks. ULTRASOUND IN MEDICINE & BIOLOGY 2019; 45:1259-1273. [PMID: 30826153 DOI: 10.1016/j.ultrasmedbio.2018.11.016] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/08/2017] [Revised: 11/26/2018] [Accepted: 11/29/2018] [Indexed: 06/09/2023]
Abstract
Machine learning for ultrasound image analysis and interpretation can be helpful for automated image classification in large-scale retrospective analyses, to objectively derive new indicators of abnormal fetal development that are embedded in ultrasound images. Current approaches to automatic classification are limited to the use of either image patches (cropped images) or the global (whole) image. As many fetal organs have similar visual features, classifiers using cropped images can misclassify certain structures, such as the kidneys and abdomen. Conversely, the whole image does not encode sufficient local information to identify different structures in different locations. Here we propose a method to automatically classify 14 different fetal structures in 2-D fetal ultrasound images by fusing information from both cropped regions of fetal structures and the whole image. Our method trains two feature extractors by fine-tuning pre-trained convolutional neural networks with the whole ultrasound fetal images and the discriminant regions of the fetal structures found in the whole image. The novelty of our method lies in integrating the classification decisions made from the global and local features without relying on priors. In addition, our method can use the classification outcome to localize the fetal structures in the image. Our experiments on a data set of 4074 2-D ultrasound images (training: 3109, test: 965) achieved a mean accuracy of 97.05%, mean precision of 76.47% and mean recall of 75.41%. The Cohen κ of 0.72 revealed the highest agreement between the ground truth and the proposed method. The superiority of the proposed method over the other non-fusion-based methods is statistically significant (p < 0.05). We found that our method is capable of predicting images without ultrasound scanner overlays with a mean accuracy of 92%. The proposed method can be leveraged to retrospectively classify any ultrasound images in clinical research.
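The simplest form of the late decision-fusion idea is averaging class posteriors from the whole-image and cropped-region models, as in the sketch below; the equal-weight average is an assumption, not necessarily the paper's fusion rule.

```python
import numpy as np

p_global = np.array([0.10, 0.70, 0.20])  # whole-image classifier posteriors
p_local = np.array([0.05, 0.55, 0.40])   # cropped-region classifier posteriors
fused = (p_global + p_local) / 2         # late decision fusion
print("predicted fetal structure:", fused.argmax())
```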
Collapse
Affiliation(s)
- Pradeeba Sridar
- Department of Engineering Design, Indian Institute of Technology Madras, India; School of Computer Science, University of Sydney, Sydney, New South Wales, Australia
| | - Ashnil Kumar
- School of Computer Science, University of Sydney, Sydney, New South Wales, Australia
| | - Ann Quinton
- Sydney Medical School, University of Sydney, Sydney, New South Wales, Australia
| | - Ralph Nanan
- Sydney Medical School, University of Sydney, Sydney, New South Wales, Australia
| | - Jinman Kim
- School of Computer Science, University of Sydney, Sydney, New South Wales, Australia
| | | |
Collapse
|
789
|
Chalakkal RJ, Abdulla WH, Thulaseedharan SS. Quality and content analysis of fundus images using deep learning. Comput Biol Med 2019; 108:317-331. [DOI: 10.1016/j.compbiomed.2019.03.019] [Citation(s) in RCA: 21] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/22/2018] [Revised: 03/20/2019] [Accepted: 03/21/2019] [Indexed: 11/28/2022]
|
790
|
Mylonas A, Keall PJ, Booth JT, Shieh CC, Eade T, Poulsen PR, Nguyen DT. A deep learning framework for automatic detection of arbitrarily shaped fiducial markers in intrafraction fluoroscopic images. Med Phys 2019; 46:2286-2297. [PMID: 30929254 DOI: 10.1002/mp.13519] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2018] [Revised: 01/24/2019] [Accepted: 03/16/2019] [Indexed: 11/09/2022] Open
Abstract
PURPOSE Real-time image-guided adaptive radiation therapy (IGART) requires accurate marker segmentation to resolve three-dimensional (3D) motion based on two-dimensional (2D) fluoroscopic images. Most common marker segmentation methods require prior knowledge of marker properties to construct a template. If marker properties are not known, an additional learning period is required to build the template which exposes the patient to an additional imaging dose. This work investigates a deep learning-based fiducial marker classifier for use in real-time IGART that requires no prior patient-specific data or additional learning periods. The proposed tracking system uses convolutional neural network (CNN) models to segment cylindrical and arbitrarily shaped fiducial markers. METHODS The tracking system uses a tracking window approach to perform sliding window classification of each implanted marker. Three cylindrical marker training datasets were generated from phantom kilovoltage (kV) and patient intrafraction images with increasing levels of megavoltage (MV) scatter. The cylindrical shaped marker CNNs were validated on unseen kV fluoroscopic images from 12 fractions of 10 prostate cancer patients with implanted gold fiducials. For the training and validation of the arbitrarily shaped marker CNNs, cone beam computed tomography (CBCT) projection images from ten fractions of seven lung cancer patients with implanted coiled markers were used. The arbitrarily shaped marker CNNs were trained using three patients and the other four unseen patients were used for validation. The effects of full training using a compact CNN (four layers with learnable weights) and transfer learning using a pretrained CNN (AlexNet, eight layers with learnable weights) were analyzed. Each CNN was evaluated using a Precision-Recall curve (PRC), the area under the PRC plot (AUC), and by the calculation of sensitivity and specificity. The tracking system was assessed using the validation data and the accuracy was quantified by calculating the mean error, root-mean-square error (RMSE) and the 1st and 99th percentiles of the error. RESULTS The fully trained CNN on the dataset with moderate noise levels had a sensitivity of 99.00% and specificity of 98.92%. Transfer learning of AlexNet resulted in a sensitivity and specificity of 99.42% and 98.13%, respectively, for the same datasets. For the arbitrarily shaped marker CNNs, the sensitivity was 98.58% and specificity was 98.97% for the fully trained CNN. The transfer learning CNN had a sensitivity and specificity of 98.49% and 99.56%, respectively. The CNNs were successfully incorporated into a multiple object tracking system for both cylindrical and arbitrarily shaped markers. The cylindrical shaped marker tracking had a mean RMSE of 1.6 ± 0.2 pixels and 1.3 ± 0.4 pixels in the x- and y-directions, respectively. The arbitrarily shaped marker tracking had a mean RMSE of 3.0 ± 0.5 pixels and 2.2 ± 0.4 pixels in the x- and y-directions, respectively. CONCLUSION With deep learning CNNs, high classification performances on unseen patient images were achieved for both cylindrical and arbitrarily shaped markers. Furthermore, the application of CNN models to intrafraction monitoring was demonstrated using a simple tracking system. The results demonstrate that CNN models can be used to track markers without prior knowledge of the marker properties or an additional learning period.
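A minimal sketch of the sliding-window classification the tracking system performs: score every window position with a classifier and keep the best-scoring one as the marker location. The `score` callable stands in for the trained CNN, and the window/stride values are placeholders.

```python
import numpy as np

def track(frame, window=32, stride=8, score=lambda w: w.mean()):
    """Return the top-left corner of the best-scoring window."""
    best, pos = -np.inf, (0, 0)
    for y in range(0, frame.shape[0] - window + 1, stride):
        for x in range(0, frame.shape[1] - window + 1, stride):
            s = score(frame[y:y + window, x:x + window])
            if s > best:
                best, pos = s, (y, x)
    return pos

frame = np.zeros((256, 256)); frame[120:130, 80:90] = 1.0  # bright marker
print(track(frame))
```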
Collapse
Affiliation(s)
- Adam Mylonas
- Faculty of Medicine and Health, ACRF Image X Institute, The University of Sydney, Sydney, NSW, Australia
| | - Paul J Keall
- Faculty of Medicine and Health, ACRF Image X Institute, The University of Sydney, Sydney, NSW, Australia
| | - Jeremy T Booth
- Royal North Shore Hospital, Northern Sydney Cancer Centre, St Leonards, NSW, Australia
| | - Chun-Chien Shieh
- Faculty of Medicine and Health, ACRF Image X Institute, The University of Sydney, Sydney, NSW, Australia
| | - Thomas Eade
- Royal North Shore Hospital, Northern Sydney Cancer Centre, St Leonards, NSW, Australia
| | | | - Doan Trang Nguyen
- Faculty of Medicine and Health, ACRF Image X Institute, The University of Sydney, Sydney, NSW, Australia
- School of Biomedical Engineering, University of Technology Sydney, Sydney, NSW, Australia
| |
Collapse
|
791
|
Savadjiev P, Chong J, Dohan A, Agnus V, Forghani R, Reinhold C, Gallix B. Image-based biomarkers for solid tumor quantification. Eur Radiol 2019; 29:5431-5440. [PMID: 30963275 DOI: 10.1007/s00330-019-06169-w] [Citation(s) in RCA: 26] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/26/2018] [Revised: 02/25/2019] [Accepted: 03/14/2019] [Indexed: 02/06/2023]
Abstract
The last few decades have witnessed tremendous technological developments in image-based biomarkers for tumor quantification and characterization. Initially limited to manual one- and two-dimensional size measurements, image biomarkers have evolved to harness developments not only in image acquisition technology but also in image processing and analysis algorithms. At the same time, clinical validation remains a major challenge for the vast majority of these novel techniques, and there is still a major gap between the latest technological developments and image biomarkers used in everyday clinical practice. Currently, the imaging biomarker field is attracting increasing attention not only because of the tremendous interest in cutting-edge therapeutic developments and personalized medicine but also because of the recent progress in the application of artificial intelligence (AI) algorithms to large-scale datasets. Thus, the goal of the present article is to review the current state of the art for image biomarkers and their use for characterization and predictive quantification of solid tumors. Beginning with an overview of validated imaging biomarkers in current clinical practice, we proceed to a review of AI-based methods for tumor characterization, such as radiomics-based approaches and deep learning.
Key Points:
• Recent years have seen tremendous technological developments in image-based biomarkers for tumor quantification and characterization.
• Image-based biomarkers can be used on an ongoing basis, in a non-invasive (or mildly invasive) way, to monitor the development and progression of the disease or its response to therapy.
• We review the current state of the art for image biomarkers, as well as the recent developments in artificial intelligence (AI) algorithms for image processing and analysis.
Collapse
Affiliation(s)
- Peter Savadjiev
- Department of Diagnostic Radiology, McGill University, Montreal, QC, Canada
| | - Jaron Chong
- Department of Diagnostic Radiology, McGill University Health Centre, McGill University, 1001 Décarie Boulevard, Montreal, QC, H4A 3J1, Canada
| | - Anthony Dohan
- Department of Diagnostic Radiology, McGill University Health Centre, McGill University, 1001 Décarie Boulevard, Montreal, QC, H4A 3J1, Canada
- Department of Body and Interventional Imaging, Hôpital Lariboisière-AP-HP, Université Diderot-Paris 7 and INSERM U965, 2 rue Ambroise Paré, 75475, Paris Cedex 10, France
| | - Vincent Agnus
- Institut de chirurgie guidée par l'image IHU Strasbourg, 1, place de l'Hôpital, 67091, Strasbourg Cedex, France
| | - Reza Forghani
- Department of Diagnostic Radiology, McGill University Health Centre, McGill University, 1001 Décarie Boulevard, Montreal, QC, H4A 3J1, Canada
- Department of Radiology, Jewish General Hospital, 3755 Chemin de la Côte-Sainte-Catherine, Montreal, QC, H3T 1E2, Canada
| | - Caroline Reinhold
- Department of Diagnostic Radiology, McGill University Health Centre, McGill University, 1001 Décarie Boulevard, Montreal, QC, H4A 3J1, Canada
| | - Benoit Gallix
- Department of Diagnostic Radiology, McGill University Health Centre, McGill University, 1001 Décarie Boulevard, Montreal, QC, H4A 3J1, Canada
- Institut de chirurgie guidée par l'image IHU Strasbourg, 1, place de l'Hôpital, 67091, Strasbourg Cedex, France
| |
Collapse
|
792
|
Effective Diagnosis and Treatment through Content-Based Medical Image Retrieval (CBMIR) by Using Artificial Intelligence. J Clin Med 2019; 8:jcm8040462. [PMID: 30959798 PMCID: PMC6518303 DOI: 10.3390/jcm8040462] [Citation(s) in RCA: 34] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2019] [Revised: 04/02/2019] [Accepted: 04/03/2019] [Indexed: 02/07/2023] Open
Abstract
Medical-image-based diagnosis is a tedious task, and small lesions in various medical images can be overlooked by medical experts due to the limited attention span of the human visual system, which can adversely affect medical treatment. However, this problem can be resolved by exploring similar cases in a previous medical database through an efficient content-based medical image retrieval (CBMIR) system. In the past few years, heterogeneous medical imaging databases have been growing rapidly with the advent of different types of medical imaging modalities. A medical doctor now usually refers to various types of imaging modalities together, such as computed tomography (CT), magnetic resonance imaging (MRI), X-ray, and ultrasound of various organs, for the diagnosis and treatment of a specific disease. Accurate classification and retrieval of multimodal medical imaging data is the key challenge for a CBMIR system. Most previous attempts use handcrafted features for medical image classification and retrieval, which show low performance on massive collections of multimodal databases. Although there are a few previous studies on the use of deep features for classification, the number of classes is very small. To solve this problem, we propose a classification-based retrieval system for multimodal medical images from various types of imaging modalities using an artificial intelligence technique, named an enhanced residual network (ResNet). Experimental results with 12 databases including 50 classes demonstrate that the accuracy and F1-score of our method are 81.51% and 82.42%, respectively, which are higher than those of the previous CBMIR method (accuracy of 69.71% and F1-score of 69.63%).
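A sketch of the retrieval mechanics, assuming PyTorch/torchvision and scikit-learn: embed images with a ResNet backbone and return nearest neighbours by cosine distance. A stock ResNet-50 stands in for the paper's "enhanced ResNet", and the random tensors stand in for real images.

```python
import torch
from torchvision import models
from sklearn.neighbors import NearestNeighbors

backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # 2048-D embeddings instead of logits
backbone.eval()

with torch.no_grad():
    db = backbone(torch.randn(20, 3, 224, 224)).numpy()  # database images
    q = backbone(torch.randn(1, 3, 224, 224)).numpy()    # query image

index = NearestNeighbors(n_neighbors=5, metric="cosine").fit(db)
print(index.kneighbors(q)[1])  # indices of the 5 most similar cases
```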
Collapse
|
793
|
García G, Colomer A, Naranjo V. First-Stage Prostate Cancer Identification on Histopathological Images: Hand-Driven versus Automatic Learning. ENTROPY (BASEL, SWITZERLAND) 2019; 21:E356. [PMID: 33267070 PMCID: PMC7514840 DOI: 10.3390/e21040356] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/28/2019] [Revised: 03/25/2019] [Accepted: 03/29/2019] [Indexed: 12/14/2022]
Abstract
Analysis of histopathological images is the most reliable procedure to identify prostate cancer. Most studies try to develop computer-aided systems to address the Gleason grading problem. In contrast, we delve into the discrimination between healthy and cancerous tissues at the earliest stage, focusing only on the information contained in the automatically segmented gland candidates. We propose a hand-driven learning approach, in which we perform an exhaustive hand-crafted feature extraction stage combining, in a novel way, descriptors of morphology, texture, fractals and contextual information of the candidates under study. Then, we carry out an in-depth statistical analysis to select the most relevant features that constitute the inputs to the optimised machine-learning classifiers. Additionally, we apply, for the first time on prostate segmented glands, deep-learning algorithms based on a modified version of the popular VGG19 neural network. We fine-tuned the last convolutional block of the architecture to provide the model with specific knowledge about the gland images. The hand-driven learning approach, using a nonlinear Support Vector Machine, slightly outperforms the rest of the experiments, with a final multi-class accuracy of 0.876 ± 0.026 in the discrimination between false glands (artefacts), benign glands and Gleason grade 3 glands.
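A PyTorch/torchvision sketch of the fine-tuning strategy mentioned above: freeze VGG19, unfreeze only its last convolutional block plus the classifier head. The layer index follows torchvision's VGG19 layout and is an assumption about how to map the paper's description onto that implementation.

```python
import torch.nn as nn
from torchvision import models

model = models.vgg19(weights=models.VGG19_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False                 # freeze everything
for p in model.features[28:].parameters():  # last conv block (conv5_x)
    p.requires_grad = True
model.classifier[6] = nn.Linear(4096, 3)    # artefact / benign / Gleason 3
```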
Collapse
Affiliation(s)
- Gabriel García
- Instituto de Investigación e Innovación en Bioingeniería (I3B), Universitat Politècnica de València (UPV), Camino de Vera s/n, 46008 Valencia, Spain
| | | | | |
Collapse
|
794
|
Zhang L, Yang G, Ye X. Automatic skin lesion segmentation by coupling deep fully convolutional networks and shallow network with textons. J Med Imaging (Bellingham) 2019; 6:024001. [PMID: 31001568 PMCID: PMC6462764 DOI: 10.1117/1.jmi.6.2.024001] [Citation(s) in RCA: 29] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2018] [Accepted: 03/29/2019] [Indexed: 12/27/2022] Open
Abstract
Segmentation of skin lesions is an important step in computer-aided diagnosis of melanoma; it is also a very challenging task due to fuzzy lesion boundaries and heterogeneous lesion textures. We present a fully automatic method for skin lesion segmentation based on deep fully convolutional networks (FCNs). We investigate a shallow encoding network to model clinically valuable prior knowledge, in which spatial filters simulating the receptive-field function of simple cells in the primary visual cortex (V1) are considered. An effective fusing strategy using skip connections and convolution operators is then leveraged to couple the prior knowledge encoded via the shallow network with the hierarchical data-driven features learned by the FCNs for detailed segmentation of the skin lesions. To the best of our knowledge, this is the first time domain-specific hand-crafted features have been built into a deep network trained in an end-to-end manner for skin lesion segmentation. The method has been evaluated on both the ISBI 2016 and ISBI 2017 skin lesion challenge datasets. We provide comparative evidence to demonstrate that our newly designed network gains accuracy for lesion segmentation by coupling the prior knowledge encoded by the shallow network with the deep FCNs. Our method is robust without the need for data augmentation or comprehensive parameter tuning, and the experimental results show great promise for the method, with effective model generalization compared to other state-of-the-art methods.
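A scikit-image sketch of the kind of V1-style filter bank the shallow branch encodes: a small bank of Gabor kernels at several orientations and frequencies, convolved with the image to produce feature maps. How the paper couples these with the FCN is not reproduced, and the filter parameters are assumptions.

```python
import numpy as np
from scipy import ndimage
from skimage.filters import gabor_kernel

image = np.random.default_rng(0).random((128, 128))  # synthetic stand-in
responses = []
for theta in np.arange(0, np.pi, np.pi / 4):         # 4 orientations
    for frequency in (0.1, 0.2):                     # 2 spatial frequencies
        kernel = np.real(gabor_kernel(frequency, theta=theta))
        responses.append(ndimage.convolve(image, kernel, mode="wrap"))
features = np.stack(responses)                       # 8 feature maps
```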
Collapse
Affiliation(s)
- Lei Zhang
- University of Lincoln, Laboratory of Vision Engineering, School of Computer Science, Lincoln, United Kingdom
| | - Guang Yang
- Royal Brompton Hospital, Imperial College London and Cardiovascular Research Centre, National Heart and Lung Institute, London, United Kingdom
| | - Xujiong Ye
- University of Lincoln, Laboratory of Vision Engineering, School of Computer Science, Lincoln, United Kingdom
| |
Collapse
|
795
|
Zhou Z, Shin J, Feng R, Hurst RT, Kendall CB, Liang J. Integrating Active Learning and Transfer Learning for Carotid Intima-Media Thickness Video Interpretation. J Digit Imaging 2019; 32:290-299. [PMID: 30402668 PMCID: PMC6456630 DOI: 10.1007/s10278-018-0143-2] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/18/2022] Open
Abstract
Cardiovascular disease (CVD) is the number one killer in the USA, yet it is largely preventable (World Health Organization 2011). To prevent CVD, carotid intima-media thickness (CIMT) imaging, a noninvasive ultrasonography method, has proven to be clinically valuable in identifying at-risk persons before adverse events. Researchers are developing systems to automate CIMT video interpretation based on deep learning, but such efforts are impeded by the lack of large annotated CIMT video datasets. CIMT video annotation is not only tedious, laborious, and time consuming, but also demanding of costly, specialty-oriented knowledge and skills, which are not easily accessible. To dramatically reduce the cost of CIMT video annotation, this paper makes three main contributions. Our first contribution is a new concept, called Annotation Unit (AU), which simplifies the entire CIMT video annotation process down to six simple mouse clicks. Our second contribution is a new algorithm, called AFT (active fine-tuning), which naturally integrates active learning and transfer learning (fine-tuning) into a single framework. AFT starts directly with a pre-trained convolutional neural network (CNN), focuses on selecting the most informative and representative AUs from the unannotated pool for annotation, and then fine-tunes the CNN by incorporating newly annotated AUs in each iteration to enhance the CNN's performance gradually. Our third contribution is a systematic evaluation, which shows that, in comparison with the state-of-the-art method (Tajbakhsh et al., IEEE Trans Med Imaging 35(5):1299-1312, 2016), our method can cut the annotation cost by >81% relative to their training from scratch and >50% relative to their random selection. This performance is attributed to the several advantages derived from the advanced active, continuous learning capability of our AFT method.
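A NumPy sketch of the active-selection step in such a loop: rank unannotated candidates by predictive entropy and send the most uncertain ones for annotation. The synthetic `probs` array stands in for CNN posteriors, and AFT's actual selection criteria are richer than plain entropy.

```python
import numpy as np

rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(2), size=1000)           # posteriors for 1000 AUs
entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1) # predictive uncertainty
to_annotate = np.argsort(entropy)[-8:]                 # most informative AUs
print("annotate units:", to_annotate)
# After annotation, these AUs would be added to the training set and the
# CNN fine-tuned before the next selection round.
```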
Collapse
Affiliation(s)
- Zongwei Zhou
- Arizona State University, 13212 E Shea Blvd, Scottsdale, AZ 85259 USA
| | - Jae Shin
- Arizona State University, 13212 E Shea Blvd, Scottsdale, AZ 85259 USA
| | - Ruibin Feng
- Arizona State University, 13212 E Shea Blvd, Scottsdale, AZ 85259 USA
| | - R. Todd Hurst
- Mayo Clinic, 13400 E Shea Blvd, Scottsdale, AZ 85259 USA
| | | | - Jianming Liang
- Arizona State University, 13212 E Shea Blvd, Scottsdale, AZ 85259 USA
| |
Collapse
|
796
|
Nida N, Irtaza A, Javed A, Yousaf MH, Mahmood MT. Melanoma lesion detection and segmentation using deep region based convolutional neural network and fuzzy C-means clustering. Int J Med Inform 2019; 124:37-48. [DOI: 10.1016/j.ijmedinf.2019.01.005] [Citation(s) in RCA: 50] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/04/2018] [Revised: 01/05/2019] [Accepted: 01/08/2019] [Indexed: 10/27/2022]
|
797
|
Wang K, Lu X, Zhou H, Gao Y, Zheng J, Tong M, Wu C, Liu C, Huang L, Jiang T, Meng F, Lu Y, Ai H, Xie XY, Yin LP, Liang P, Tian J, Zheng R. Deep learning Radiomics of shear wave elastography significantly improved diagnostic performance for assessing liver fibrosis in chronic hepatitis B: a prospective multicentre study. Gut 2019; 68:729-741. [PMID: 29730602 PMCID: PMC6580779 DOI: 10.1136/gutjnl-2018-316204] [Citation(s) in RCA: 293] [Impact Index Per Article: 58.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/07/2018] [Revised: 04/11/2018] [Accepted: 04/12/2018] [Indexed: 12/12/2022]
Abstract
OBJECTIVE We aimed to evaluate the performance of the newly developed deep learning Radiomics of elastography (DLRE) for assessing liver fibrosis stages. DLRE adopts the radiomic strategy for quantitative analysis of the heterogeneity in two-dimensional shear wave elastography (2D-SWE) images. DESIGN A prospective multicentre study was conducted to assess its accuracy in patients with chronic hepatitis B, in comparison with 2D-SWE, aspartate transaminase-to-platelet ratio index and fibrosis index based on four factors, by using liver biopsy as the reference standard. Its accuracy and robustness were also investigated by applying different numbers of acquisitions and different training cohorts, respectively. Data from 654 potentially eligible patients were prospectively enrolled from 12 hospitals, and finally 398 patients with 1990 images were included. Analysis of receiver operating characteristic (ROC) curves was performed to calculate the optimal area under the ROC curve (AUC) for cirrhosis (F4), advanced fibrosis (≥F3) and significant fibrosis (≥F2). RESULTS AUCs of DLRE were 0.97 for F4 (95% CI 0.94 to 0.99), 0.98 for ≥F3 (95% CI 0.96 to 1.00) and 0.85 (95% CI 0.81 to 0.89) for ≥F2, which were significantly better than other methods except 2D-SWE in ≥F2. Its diagnostic accuracy improved as more images (especially ≥3 images) were acquired from each individual. No significant variation of the performance was found if different training cohorts were applied. CONCLUSION DLRE shows the best overall performance in predicting liver fibrosis stages compared with 2D-SWE and biomarkers. It is valuable and practical for the non-invasive accurate diagnosis of liver fibrosis stages in HBV-infected patients. TRIAL REGISTRATION NUMBER NCT02313649; Post-results.
Affiliation(s)
- Kun Wang, Guangdong Key Laboratory of Liver Disease Research, Department of Medical Ultrasound, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China; CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Xue Lu, Guangdong Key Laboratory of Liver Disease Research, Department of Medical Ultrasound, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Hui Zhou, CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, China; Department of Artificial Intelligence Technology, University of Chinese Academy of Sciences, Beijing, China
- Yongyan Gao, Department of Interventional Ultrasound, Chinese PLA General Hospital, Beijing, China
- Jian Zheng, Guangdong Key Laboratory of Liver Disease Research, Department of Medical Ultrasound, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China; Department of Medical Ultrasonics, Third Hospital of Longgang, Shenzhen, China
- Minghui Tong, Functional Examination Department of Children’s Hospital, Lanzhou University Second Hospital, Lanzhou, China
- Changjun Wu, Ultrasound Department, The First Affiliated Hospital of Harbin Medical University, Harbin, China
- Changzhu Liu, Ultrasound Department, Guangzhou Eighth People’s Hospital, Guangzhou, China
- Liping Huang, Department of Ultrasound, Shengjing Hospital of China Medical University, Shenyang, China
- Tian’an Jiang, Department of Ultrasonography, The First Affiliated Hospital, Medical College of Zhejiang University, Hangzhou, China
- Fankun Meng, Function Diagnosis Center, Beijing Youan Hospital, Affiliated to Capital Medical University, Beijing, China
- Yongping Lu, Ultrasound Department, The Second People’s Hospital of Yunnan Province, Kunming, China
- Hong Ai, Ultrasound Department, The First Affiliated Hospital of Xi’an Jiaotong University, Xi’an, China
- Xiao-Yan Xie, Department of Medical Ultrasonics, Institute of Diagnostic and Interventional Ultrasound, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Li-ping Yin, Department of Ultrasound, Jiangsu Province Hospital of TCM, Affiliated Hospital of Nanjing University of TCM, Nanjing, China
- Ping Liang, Department of Artificial Intelligence Technology, University of Chinese Academy of Sciences, Beijing, China
- Jie Tian, CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, China; Department of Artificial Intelligence Technology, University of Chinese Academy of Sciences, Beijing, China
- Rongqin Zheng, Guangdong Key Laboratory of Liver Disease Research, Department of Medical Ultrasound, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
|
798
|
Qadir HA, Balasingham I, Solhusvik J, Bergsland J, Aabakken L, Shin Y. Improving Automatic Polyp Detection Using CNN by Exploiting Temporal Dependency in Colonoscopy Video. IEEE J Biomed Health Inform 2019; 24:180-193. [PMID: 30946683 DOI: 10.1109/jbhi.2019.2907434] [Citation(s) in RCA: 38] [Impact Index Per Article: 7.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/08/2023]
Abstract
Automatic polyp detection has been shown to be difficult due to the various polyp-like structures in the colon and the high intraclass variation in polyp size, color, shape, and texture. An efficient method should have not only a high correct detection rate (high sensitivity) but also a low false detection rate (high precision and specificity). The state-of-the-art detection methods include convolutional neural networks (CNNs). However, CNNs have been shown to be vulnerable to small perturbations and noise; they sometimes miss the same polyp appearing in neighboring frames and produce a high number of false positives. We aim to tackle this problem and improve the overall performance of CNN-based object detectors for polyp detection in colonoscopy videos. Our method consists of two stages: a region of interest (RoI) proposal stage using CNN-based object detector networks, and a false positive (FP) reduction unit. The FP reduction unit exploits the temporal dependencies among image frames in a video by integrating the bidirectional temporal information obtained from RoIs in a set of consecutive frames; this information is used to make the final decision. The experimental results show that the bidirectional temporal information is helpful in estimating polyp positions and accurately identifying FPs. This yields an overall improvement in sensitivity, precision, and specificity compared with the conventional false positive learning method, and thus achieves state-of-the-art results on the CVC-ClinicVideoDB video dataset.
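The FP reduction idea, trusting a detection only when RoIs in neighboring frames (looked at both backward and forward) support it, can be illustrated with a minimal sketch. This is a strong simplification of the paper's unit, assuming per-frame confidences for one tracked RoI; the function name and thresholds are invented for illustration.

import numpy as np

def temporal_filter(conf, radius=2, keep_thr=0.5):
    # conf: per-frame confidence that a tracked RoI contains a polyp.
    conf = np.asarray(conf, dtype=float)
    kept = np.empty(len(conf), dtype=bool)
    for t in range(len(conf)):
        lo, hi = max(0, t - radius), min(len(conf), t + radius + 1)
        # Average support from frames both before and after frame t.
        kept[t] = conf[lo:hi].mean() >= keep_thr
    return kept

# An isolated one-frame spike (a likely false positive) is suppressed ...
print(temporal_filter([0.1, 0.1, 0.9, 0.1, 0.1]))  # -> all False
# ... while a detection sustained across neighboring frames survives.
print(temporal_filter([0.2, 0.8, 0.9, 0.8, 0.2]))  # -> all True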
|
799
|
Li C, Wang X, Liu W, Latecki LJ, Wang B, Huang J. Weakly supervised mitosis detection in breast histopathology images using concentric loss. Med Image Anal 2019; 53:165-178. [DOI: 10.1016/j.media.2019.01.013] [Citation(s) in RCA: 68] [Impact Index Per Article: 13.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2017] [Revised: 01/19/2019] [Accepted: 01/21/2019] [Indexed: 11/30/2022]
|
800
|
Pranata YD, Wang KC, Wang JC, Idram I, Lai JY, Liu JW, Hsieh IH. Deep learning and SURF for automated classification and detection of calcaneus fractures in CT images. Comput Methods Programs Biomed 2019; 171:27-37. [PMID: 30902248 DOI: 10.1016/j.cmpb.2019.02.006] [Citation(s) in RCA: 59] [Impact Index Per Article: 11.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/11/2018] [Revised: 01/29/2019] [Accepted: 02/11/2019] [Indexed: 05/10/2023]
Abstract
BACKGROUND AND OBJECTIVES The calcaneus is the most fracture-prone tarsal bone, and injuries to the surrounding tissue are among the most difficult to treat. There is currently a lack of consensus on the treatment of calcaneus fractures or on the interpretation of the corresponding computed tomography (CT) images. This study proposes a novel computer-assisted method for automated classification and detection of fracture locations in calcaneus CT images using a deep learning algorithm. METHODS Two convolutional neural network (CNN) architectures with different network depths, a residual network (ResNet) and a Visual Geometry Group (VGG) network, were evaluated and compared for classifying CT scans into fracture and non-fracture categories based on coronal, sagittal, and transverse views. The bone fracture detection algorithm incorporated fracture area matching using the speeded-up robust features (SURF) method, Canny edge detection, and contour tracing. RESULTS ResNet was comparable in accuracy (98%) to the VGG network for bone fracture classification but achieved better performance owing to its deeper network architecture. The ResNet classification results were used as the input for detecting the location and type of bone fracture with the SURF algorithm. CONCLUSIONS Results on real patient fracture datasets demonstrate the feasibility of using deep CNNs and SURF for computer-aided classification and localization of calcaneus fractures in CT images.
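A rough sketch of the detection stage described above (keypoint matching plus Canny edge detection and contour tracing) follows. It is not the authors' pipeline: SURF requires a nonfree opencv-contrib build, so ORB, which exposes the same detectAndCompute interface, is substituted here, and locate_fracture, its parameters, and the file names are hypothetical.

import cv2
import numpy as np

def locate_fracture(slice_img, template_img):
    # Keypoint matching between a known fracture template and the CT slice.
    orb = cv2.ORB_create(nfeatures=500)
    kp1, des1 = orb.detectAndCompute(template_img, None)
    kp2, des2 = orb.detectAndCompute(slice_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    # Centroid of the best-matched keypoints approximates the fracture area.
    pts = np.float32([kp2[m.trainIdx].pt for m in matches[:20]])
    cx, cy = pts.mean(axis=0)
    # Edge detection and contour tracing to outline candidate fragments.
    edges = cv2.Canny(slice_img, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return (cx, cy), contours

# Usage with placeholder file names (8-bit grayscale images):
# ct = cv2.imread("ct_slice.png", cv2.IMREAD_GRAYSCALE)
# tpl = cv2.imread("fracture_template.png", cv2.IMREAD_GRAYSCALE)
# (cx, cy), contours = locate_fracture(ct, tpl)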
Affiliation(s)
- Yoga Dwi Pranata, Department of Computer Science and Information Engineering, National Central University, Jhongli County, Taoyuan City, Taiwan
- Kuan-Chung Wang, Department of Computer Science and Information Engineering, National Central University, Jhongli County, Taoyuan City, Taiwan
- Jia-Ching Wang, Department of Computer Science and Information Engineering, National Central University, Jhongli County, Taoyuan City, Taiwan
- Irwansyah Idram, Department of Mechanical Engineering, National Central University, Jhongli County, Taoyuan City, Taiwan
- Jiing-Yih Lai, Department of Mechanical Engineering, National Central University, Jhongli County, Taoyuan City, Taiwan
- Jia-Wei Liu, Institute of Cognitive Neuroscience, National Central University, Jhongli County, Taoyuan City, Taiwan
- I-Hui Hsieh, Institute of Cognitive Neuroscience, National Central University, Jhongli County, Taoyuan City, Taiwan
|