901
Meyer P, Noblet V, Mazzara C, Lallement A. Survey on deep learning for radiotherapy. Comput Biol Med 2018; 98:126-146. PMID: 29787940. DOI: 10.1016/j.compbiomed.2018.05.018.
Abstract
More than 50% of cancer patients are treated with radiotherapy, either exclusively or in combination with other methods. The planning and delivery of radiotherapy is a complex process, but it can now be greatly facilitated by artificial intelligence technology. Deep learning is the fastest-growing field in artificial intelligence and has been successfully applied in recent years in many domains, including medicine. In this article, we first explain the concept of deep learning, placing it in the broader context of machine learning. The most common network architectures are presented, with a more specific focus on convolutional neural networks. We then review the published work on deep learning methods applicable to radiotherapy, classified into seven categories related to the patient workflow, and provide some insight into potential future applications. We have attempted to make this paper accessible to both the radiotherapy and deep learning communities, and hope that it will inspire new collaborations between these two communities to develop dedicated radiotherapy applications.
Affiliation(s)
- Philippe Meyer
- Department of Medical Physics, Paul Strauss Center, Strasbourg, France.
902
Liebovitz DM, Fahrenbach J. Rebuttal From Drs Liebovitz and Fahrenbach. Chest 2018; 153:1099-1100. DOI: 10.1016/j.chest.2018.01.033.
903
van der Sommen F, Curvers WL, Nagengast WB. Novel Developments in Endoscopic Mucosal Imaging. Gastroenterology 2018; 154:1876-1886. PMID: 29462601. DOI: 10.1053/j.gastro.2018.01.070.
Abstract
Endoscopic techniques such as high-definition endoscopy and optical chromoendoscopy have had an enormous impact on endoscopy practice. Since these techniques allow assessment of the most subtle morphological mucosal abnormalities, further improvement in endoscopic practice lies in increasing the detection efficacy of endoscopists. Several new developments could assist in this. First, web-based training tools could improve endoscopists' skills in detecting and classifying lesions. Second, incorporating computer-aided detection will be the next step in raising the endoscopic quality of the captured data. These systems will aid the endoscopist in interpreting the increasing amount of visual information in endoscopic images by providing real-time, objective second reading. In addition, developments in the field of molecular imaging open opportunities to add functional imaging data, visualizing biological parameters of the gastrointestinal tract, to white-light morphology imaging. For the successful implementation of the above-mentioned techniques, a true multidisciplinary approach is of vital importance.
Affiliation(s)
- Fons van der Sommen
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Wouter L Curvers
- Department of Gastroenterology and Hepatology, Catharina Hospital, Eindhoven, The Netherlands
- Wouter B Nagengast
- Department of Gastroenterology and Hepatology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
904
Chudzik P, Majumdar S, Calivá F, Al-Diri B, Hunter A. Microaneurysm detection using fully convolutional neural networks. Comput Methods Programs Biomed 2018; 158:185-192. PMID: 29544784. DOI: 10.1016/j.cmpb.2018.02.016.
Abstract
BACKGROUND AND OBJECTIVES Diabetic retinopathy is a microvascular complication of diabetes that can lead to sight loss if not treated early enough. Microaneurysms are the earliest clinical signs of diabetic retinopathy. This paper presents an automatic method for detecting microaneurysms in fundus photographs. METHODS A novel patch-based fully convolutional neural network with batch normalization layers and a Dice loss function is proposed. Compared to other methods that require up to five processing stages, it requires only three. Furthermore, to the best of the authors' knowledge, this is the first paper to show how to successfully transfer knowledge between datasets in the microaneurysm detection domain. RESULTS The proposed method was evaluated using three publicly available and widely used datasets: E-Ophtha, DIARETDB1, and ROC. It achieved better results than state-of-the-art methods under the FROC metric. The proposed algorithm achieved the highest sensitivities at low false positive rates, which is particularly important for screening purposes. CONCLUSIONS The performance, simplicity, and robustness of the proposed method demonstrate its suitability for diabetic retinopathy screening applications.
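A minimal PyTorch sketch of the soft Dice loss named in this abstract (the function name and smoothing constant are illustrative assumptions, not the authors' implementation):

```python
import torch

def dice_loss(logits, targets, smooth=1.0):
    """Soft Dice loss for binary microaneurysm masks: 1 - Dice overlap
    between the predicted probability map and the ground-truth mask."""
    probs = torch.sigmoid(logits)
    intersection = (probs * targets).sum()
    union = probs.sum() + targets.sum()
    return 1.0 - (2.0 * intersection + smooth) / (union + smooth)
```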
Affiliation(s)
- Piotr Chudzik
- School of Computer Science, University of Lincoln, Lincoln LN6 7TS, UK.
- Somshubra Majumdar
- Department of Computer Science, University of Illinois, Chicago, IL 60607, USA
- Francesco Calivá
- School of Computer Science, University of Lincoln, Lincoln LN6 7TS, UK
- Bashir Al-Diri
- School of Computer Science, University of Lincoln, Lincoln LN6 7TS, UK
- Andrew Hunter
- School of Computer Science, University of Lincoln, Lincoln LN6 7TS, UK
905
He JY, Wu X, Jiang YG, Peng Q, Jain R. Hookworm Detection in Wireless Capsule Endoscopy Images With Deep Learning. IEEE Trans Image Process 2018; 27:2379-2392. PMID: 29470172. DOI: 10.1109/tip.2018.2801119.
Abstract
As one of the most common human helminths, hookworm is a leading cause of maternal and child morbidity and seriously threatens human health. Recently, wireless capsule endoscopy (WCE) has been applied to automatic hookworm detection; unfortunately, this remains a challenging task. In recent years, deep convolutional neural networks (CNNs) have demonstrated impressive performance in various image and video analysis tasks. In this paper, a novel deep hookworm detection framework is proposed for WCE images, which simultaneously models the visual appearance and tubular patterns of hookworms. This is the first deep learning framework specifically designed for hookworm detection in WCE images. Two CNNs, an edge extraction network and a hookworm classification network, are seamlessly integrated in the proposed framework, which avoids edge-feature caching and speeds up classification. Two edge pooling layers are introduced to integrate the tubular regions induced from the edge extraction network with the feature maps from the hookworm classification network, leading to enhanced feature maps emphasizing the tubular regions. Experiments conducted on one of the largest WCE datasets demonstrate the effectiveness of the proposed hookworm detection framework, which significantly outperforms state-of-the-art approaches. The high sensitivity and accuracy of the proposed method in detecting hookworms show its potential for clinical application.
906
Deep convolutional neural network for the automated diagnosis of congestive heart failure using ECG signals. Appl Intell 2018. DOI: 10.1007/s10489-018-1179-1.
907
An improved deep learning approach for detection of thyroid papillary cancer in ultrasound images. Sci Rep 2018; 8:6600. PMID: 29700427. PMCID: PMC5920067. DOI: 10.1038/s41598-018-25005-7.
Abstract
Unlike everyday images, ultrasound images are usually monochrome and low-resolution. In ultrasound images, cancer regions are usually blurred, have vague margins, and are irregular in shape. Moreover, the features of cancer regions are very similar to those of normal or benign tissues. Therefore, directly training an original Convolutional Neural Network (CNN) on ultrasound images is not satisfactory. In our study, inspired by the state-of-the-art object detection network Faster R-CNN, we develop a detector that is more suitable for thyroid papillary carcinoma detection in ultrasound images. To improve detection accuracy, we add a spatially constrained layer to the CNN so that the detector can extract the features of the surrounding region in which the cancer regions reside. In addition, by concatenating the shallow and deep layers of the CNN, the detector can detect blurrier or smaller cancer regions. The experiments demonstrate that this new methodology can potentially reduce the workload for pathologists and increase the objectivity of diagnoses. We find that 93.5% of papillary thyroid carcinoma regions could be detected automatically, while 81.5% of benign and normal tissue could be excluded without the use of any additional immunohistochemical markers or human intervention.
908
Yu X, Zheng H, Liu C, Huang Y, Ding X. Classify epithelium-stroma in histopathological images based on deep transferable network. J Microsc 2018; 271:164-173. PMID: 29676794. DOI: 10.1111/jmi.12705.
Abstract
Recently, deep learning methods have received increased attention in histopathological image analysis. However, traditional deep learning methods assume that training data and test data share the same distribution, which causes certain limitations in real-world histopathological applications, and it is costly to recollect a large amount of labeled histology data to train a new neural network for each specified image acquisition procedure, even for similar tasks. In this paper, unsupervised domain adaptation is introduced into a typical deep convolutional neural network (CNN) model to mitigate this repeated labeling. The unsupervised domain adaptation is implemented by adding two regularisation terms, namely feature-based adaptation and entropy minimisation, to the objective function of a widely used CNN model, AlexNet. Three independent public epithelium-stroma datasets were used to verify the proposed method. The experimental results demonstrate that in epithelium-stroma classification, the proposed method achieves better performance than commonly used deep learning methods and some existing deep domain adaptation methods. Therefore, the proposed method can be considered a better option for real-world applications of histopathological image analysis, because there is no requirement to recollect large-scale labeled data for every specified domain.
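A sketch of the entropy-minimisation regulariser mentioned above, under the assumption that it is the standard Shannon entropy of the network's softmax outputs on unlabeled target-domain images (not necessarily the authors' exact formulation):

```python
import torch.nn.functional as F

def entropy_regularizer(target_logits):
    """Mean Shannon entropy of softmax predictions on target-domain images;
    adding this term to the training objective encourages confident
    predictions on the unlabeled target domain."""
    p = F.softmax(target_logits, dim=1)
    log_p = F.log_softmax(target_logits, dim=1)
    return -(p * log_p).sum(dim=1).mean()
```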
Affiliation(s)
- X Yu
- Fujian Key Laboratory of Sensing and Computing for Smart City, Xiamen University, Xiamen, Fujian, China
- School of Information Science and Engineering, Xiamen University, Xiamen, Fujian, China
- H Zheng
- Fujian Key Laboratory of Sensing and Computing for Smart City, Xiamen University, Xiamen, Fujian, China
- School of Information Science and Engineering, Xiamen University, Xiamen, Fujian, China
- C Liu
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, Pennsylvania, USA
- Y Huang
- Fujian Key Laboratory of Sensing and Computing for Smart City, Xiamen University, Xiamen, Fujian, China
- School of Information Science and Engineering, Xiamen University, Xiamen, Fujian, China
- X Ding
- Fujian Key Laboratory of Sensing and Computing for Smart City, Xiamen University, Xiamen, Fujian, China
- School of Information Science and Engineering, Xiamen University, Xiamen, Fujian, China
909
Interian Y, Rideout V, Kearney VP, Gennatas E, Morin O, Cheung J, Solberg T, Valdes G. Deep nets vs expert designed features in medical physics: An IMRT QA case study. Med Phys 2018; 45:2672-2680. PMID: 29603278. DOI: 10.1002/mp.12890.
Abstract
PURPOSE The purpose of this study was to compare the performance of deep neural networks against a technique designed by domain experts in the prediction of gamma passing rates for Intensity Modulated Radiation Therapy Quality Assurance (IMRT QA). METHODS A total of 498 IMRT plans across all treatment sites were planned in Eclipse version 11 and delivered using a dynamic sliding window technique on Clinac iX or TrueBeam linacs. Measurements were performed using a commercial 2D diode array, and passing rates for 3%/3 mm local dose/distance-to-agreement (DTA) were recorded. Separately, fluence maps calculated for each plan were used as inputs to a convolutional neural network (CNN). The CNNs were trained to predict IMRT QA gamma passing rates using TensorFlow and Keras. A set of model architectures, inspired by the convolutional blocks of the VGG-16 ImageNet model, was constructed and implemented. Synthetic data, created by rotating and translating the fluence maps during training, were used to boost the performance of the CNNs. Dropout, batch normalization, and data augmentation were utilized to help train the model. The performance of the CNNs was compared to a generalized Poisson regression model, previously developed for this application, which used 78 expert-designed features. RESULTS Deep neural networks without domain knowledge achieved performance comparable to a baseline system designed by domain experts in the prediction of 3%/3 mm local gamma passing rates. An ensemble of neural nets resulted in a mean absolute error (MAE) of 0.70 ± 0.05, and the domain expert model resulted in an MAE of 0.74 ± 0.06. CONCLUSIONS Convolutional neural networks (CNNs) with transfer learning can predict IMRT QA passing rates by automatically designing features from the fluence maps without human expert supervision. Predictions from CNNs are comparable to those of a system carefully designed by physicist experts.
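The rotation-and-translation augmentation described above can be reproduced in Keras roughly as follows (the parameter values are illustrative assumptions, not the authors' settings):

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Generate synthetic fluence maps by random rotation and shifting at
# training time; model.fit consumes the augmented stream directly.
augmenter = ImageDataGenerator(rotation_range=15,
                               width_shift_range=0.1,
                               height_shift_range=0.1)
# model.fit(augmenter.flow(x_train, y_train, batch_size=32), epochs=50)
```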
Affiliation(s)
- Yannet Interian
- MS in Analytics Program, University of San Francisco, San Francisco, CA, USA
- Vincent Rideout
- MS in Analytics Program, University of San Francisco, San Francisco, CA, USA
- Vasant P Kearney
- Department of Radiation Oncology, University of California San Francisco, San Francisco, CA, USA
- Efstathios Gennatas
- Department of Radiation Oncology, University of California San Francisco, San Francisco, CA, USA
- Olivier Morin
- Department of Radiation Oncology, University of California San Francisco, San Francisco, CA, USA
- Joey Cheung
- Department of Radiation Oncology, University of California San Francisco, San Francisco, CA, USA
- Timothy Solberg
- Department of Radiation Oncology, University of California San Francisco, San Francisco, CA, USA
- Gilmer Valdes
- Department of Radiation Oncology, University of California San Francisco, San Francisco, CA, USA
910
Feng M, Valdes G, Dixit N, Solberg TD. Machine Learning in Radiation Oncology: Opportunities, Requirements, and Needs. Front Oncol 2018; 8:110. PMID: 29719815. PMCID: PMC5913324. DOI: 10.3389/fonc.2018.00110.
Abstract
Machine learning (ML) has the potential to revolutionize the field of radiation oncology, but there is much work to be done. In this article, we approach the radiotherapy process from a workflow perspective, identifying specific areas where a data-centric approach using ML could improve the quality and efficiency of patient care. We highlight areas where ML has already been used, and identify areas where we should invest additional resources. We believe that this article can serve as a guide for both clinicians and researchers to start discussing issues that must be addressed in a timely manner.
Affiliation(s)
- Mary Feng
- Department of Radiation Oncology, University of California San Francisco, San Francisco, CA, United States
- Gilmer Valdes
- Department of Radiation Oncology, University of California San Francisco, San Francisco, CA, United States
- Nayha Dixit
- Department of Radiation Oncology, University of California San Francisco, San Francisco, CA, United States
- Timothy D Solberg
- Department of Radiation Oncology, University of California San Francisco, San Francisco, CA, United States
911
Fully convolutional networks (FCNs)-based segmentation method for colorectal tumors on T2-weighted magnetic resonance images. Australas Phys Eng Sci Med 2018; 41:393-401. DOI: 10.1007/s13246-018-0636-9.
912
Forsberg D, Sjöblom E, Sunshine JL. Detection and Labeling of Vertebrae in MR Images Using Deep Learning with Clinical Annotations as Training Data. J Digit Imaging 2018; 30:406-412. PMID: 28083827. DOI: 10.1007/s10278-017-9945-x.
Abstract
The purpose of this study was to investigate the potential of using clinically provided spine label annotations, stored in a single institution's image archive, as training data for deep learning-based vertebral detection and labeling pipelines. Lumbar and cervical magnetic resonance imaging cases with annotated spine labels were identified and exported from an image archive. Two separate pipelines were configured and trained for lumbar and cervical cases respectively, using the same setup with convolutional neural networks for detection and parts-based graphical models to label the vertebrae. The detection sensitivity, precision, and accuracy ranged between 99.1-99.8%, 99.6-100%, and 98.8-99.8%, respectively; the average localization errors were 1.18-1.24 mm and 2.38-2.60 mm for cervical and lumbar cases, respectively; and the labeling accuracy was 96.0-97.0%. Failed labeling results typically involved failed S1 detections or missed vertebrae that were not fully visible in the image. These results show that clinically annotated image data from one image archive are sufficient to train a deep learning-based pipeline for accurate detection and labeling of MR images depicting the spine. Further, these results support using deep learning to assist radiologists in their work by providing highly accurate labels that require only rapid confirmation.
Affiliation(s)
- Daniel Forsberg
- Sectra, Teknikringen 20, 583 30 Linköping, Sweden
- Department of Radiology, Case Western Reserve University and University Hospitals Cleveland Medical Center, 11100 Euclid Avenue, Cleveland, OH 44106, USA
- Erik Sjöblom
- Sectra, Teknikringen 20, 583 30 Linköping, Sweden
- Jeffrey L Sunshine
- Department of Radiology, Case Western Reserve University and University Hospitals Cleveland Medical Center, 11100 Euclid Avenue, Cleveland, OH 44106, USA
913
Cheng PM, Malhi HS. Transfer Learning with Convolutional Neural Networks for Classification of Abdominal Ultrasound Images. J Digit Imaging 2018; 30:234-243. PMID: 27896451. DOI: 10.1007/s10278-016-9929-2.
Abstract
The purpose of this study is to evaluate transfer learning with deep convolutional neural networks for the classification of abdominal ultrasound images. Grayscale images from 185 consecutive clinical abdominal ultrasound studies were categorized into 11 categories based on the text annotation specified by the technologist for each image. Cropped images were rescaled to 256 × 256 resolution and randomized, with 4094 images from 136 studies constituting the training set and 1423 images from 49 studies constituting the test set. The fully connected layers of two convolutional neural networks based on CaffeNet and VGGNet, previously trained on the 2012 Large Scale Visual Recognition Challenge data set, were retrained on the training set. Weights in the convolutional layers of each network were frozen to serve as fixed feature extractors. Accuracy on the test set was evaluated for each network. A radiologist experienced in abdominal ultrasound also independently classified the test set images into the same 11 categories. The CaffeNet network classified 77.3% of the test set images accurately (1100/1423 images), with a top-2 accuracy of 90.4% (1287/1423 images). The larger VGGNet network classified 77.9% of the test set accurately (1109/1423 images), with a top-2 accuracy of 89.7% (1276/1423 images). The radiologist classified 71.7% of the test set images correctly (1020/1423 images). The differences in classification accuracy between each neural network and the radiologist were statistically significant (p < 0.001). These results demonstrate that transfer learning with convolutional neural networks may be used to construct effective classifiers for abdominal ultrasound images.
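The fixed-feature-extractor setup described above looks roughly like this in PyTorch/torchvision (a sketch that uses VGG-16 as a stand-in for the paper's Caffe-based networks and assumes a recent torchvision; layer indices follow torchvision's VGG definition):

```python
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained VGG-16, freeze its convolutional layers,
# and retrain only the fully connected head for the 11 ultrasound classes.
model = models.vgg16(weights="IMAGENET1K_V1")
for param in model.features.parameters():
    param.requires_grad = False            # fixed feature extractor
model.classifier[6] = nn.Linear(4096, 11)  # new 11-way output layer
```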
Affiliation(s)
- Phillip M Cheng
- Department of Radiology, Keck School of Medicine of USC, Los Angeles, CA, USA.
- USC Norris Cancer Center and Hospital, 1441 Eastlake Avenue, Suite 2315B, Los Angeles, CA, 90033-0377, USA.
- Harshawn S Malhi
- Department of Radiology, Keck School of Medicine of USC, Los Angeles, CA, USA
914
Tian Z, Liu L, Zhang Z, Fei B. PSNet: prostate segmentation on MRI based on a convolutional neural network. J Med Imaging (Bellingham) 2018; 5:021208. PMID: 29376105. PMCID: PMC5771127. DOI: 10.1117/1.jmi.5.2.021208.
Abstract
Automatic segmentation of the prostate on magnetic resonance images (MRI) has many applications in prostate cancer diagnosis and therapy. We proposed a deep fully convolutional neural network (CNN) to segment the prostate automatically. Our deep CNN model is trained end-to-end in a single learning stage, which uses prostate MRI and the corresponding ground truths as inputs. The learned CNN model can be used to make an inference for pixel-wise segmentation. Experiments were performed on three data sets, which contain prostate MRI of 140 patients. The proposed CNN model of prostate segmentation (PSNet) obtained a mean Dice similarity coefficient of [Formula: see text] as compared to the manually labeled ground truth. Experimental results show that the proposed model could yield satisfactory segmentation of the prostate on MRI.
Affiliation(s)
- Zhiqiang Tian
- Xi'an Jiaotong University, School of Software Engineering, Xi'an, China
- Emory University School of Medicine, Department of Radiology and Imaging Sciences, Atlanta, Georgia, United States
- Lizhi Liu
- Emory University School of Medicine, Department of Radiology and Imaging Sciences, Atlanta, Georgia, United States
- Zhenfeng Zhang
- The Second Hospital of Guangzhou Medical University, Department of Radiology, Guangzhou, China
- Baowei Fei
- Emory University School of Medicine, Department of Radiology and Imaging Sciences, Atlanta, Georgia, United States
- Georgia Institute of Technology and Emory University, Wallace H. Coulter Department of Biomedical Engineering, Atlanta, Georgia, United States
- Winship Cancer Institute of Emory University, Atlanta, Georgia, United States
- Emory University, Department of Mathematics and Computer Science, Atlanta, Georgia, United States
915
Choi H. Deep Learning in Nuclear Medicine and Molecular Imaging: Current Perspectives and Future Directions. Nucl Med Mol Imaging 2018; 52:109-118. PMID: 29662559. PMCID: PMC5897260. DOI: 10.1007/s13139-017-0504-7.
Abstract
Recent advances in deep learning have impacted various scientific and industrial fields. Owing to the rapid application of deep learning to biomedical data, molecular imaging has also started to adopt this technique. It is expected that deep learning will affect the roles of molecular imaging experts as well as clinical decision making. This review first offers a basic overview of deep learning, particularly for image data analysis, to familiarize nuclear medicine physicians and researchers with the topic. Because of the unique characteristics and distinctive aims of the various types of molecular imaging, deep learning applications can differ from those in other fields. In this context, the review addresses current perspectives on deep learning in molecular imaging, particularly in terms of biomarker development. Finally, future challenges of applying deep learning to molecular imaging and the future roles of molecular imaging experts are discussed.
Affiliation(s)
- Hongyoon Choi
- Cheonan Public Health Center, 234-1 Buldang-Dong, Seobuk-Gu, Cheonan, Republic of Korea
916
Combining deep residual neural network features with supervised machine learning algorithms to classify diverse food image datasets. Comput Biol Med 2018; 95:217-233. DOI: 10.1016/j.compbiomed.2018.02.008.
917
Hermessi H, Mourali O, Zagrouba E. Convolutional neural network-based multimodal image fusion via similarity learning in the shearlet domain. Neural Comput Appl 2018. DOI: 10.1007/s00521-018-3441-1.
918
Automatic Semantic Segmentation of Brain Gliomas from MRI Images Using a Deep Cascaded Neural Network. J Healthc Eng 2018; 2018:4940593. PMID: 29755716. PMCID: PMC5884212. DOI: 10.1155/2018/4940593.
Abstract
Brain tumors can appear anywhere in the brain and have vastly different sizes and morphology. Additionally, these tumors are often diffused and poorly contrasted. Consequently, the segmentation of brain tumor and intratumor subregions using magnetic resonance imaging (MRI) data with minimal human intervention remains a challenging task. In this paper, we present a novel fully automatic segmentation method from MRI data containing in vivo brain gliomas. This approach can not only localize the entire tumor region but can also accurately segment the intratumor structure. The proposed work is based on a cascaded deep learning convolutional neural network consisting of two subnetworks: (1) a tumor localization network (TLN) and (2) an intratumor classification network (ITCN). The TLN, a fully convolutional network (FCN) combined with transfer learning, was used to first process the MRI data; the goal of this first subnetwork was to define the tumor region from an MRI slice. Then, the ITCN was used to label the defined tumor region into multiple subregions. In particular, the ITCN exploited a convolutional neural network (CNN) with a deeper architecture and smaller kernels. The proposed approach was validated on multimodal brain tumor segmentation (BRATS 2015) datasets, which contain 220 high-grade glioma (HGG) and 54 low-grade glioma (LGG) cases. Dice similarity coefficient (DSC), positive predictive value (PPV), and sensitivity were used as evaluation metrics. Our experimental results indicate that our method obtains promising segmentation results with a faster segmentation speed. More specifically, the proposed method obtained comparable and overall better DSC values (0.89, 0.77, and 0.80) on the combined (HGG + LGG) testing set, as compared to other methods reported in the literature. Additionally, the proposed approach was able to complete a segmentation task at a rate of 1.54 seconds per slice.
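For reference, the three evaluation metrics used in this entry can be computed from binary masks as follows (a standard formulation in NumPy; a sketch, not the authors' evaluation code):

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """DSC, PPV, and sensitivity for boolean segmentation masks."""
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    dsc = 2 * tp / (2 * tp + fp + fn)
    ppv = tp / (tp + fp)
    sensitivity = tp / (tp + fn)
    return dsc, ppv, sensitivity
```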
919
Valdes G, Interian Y. Comment on ‘Deep convolutional neural network with transfer learning for rectum toxicity prediction in cervical cancer radiotherapy: a feasibility study’. Phys Med Biol 2018; 63:068001. DOI: 10.1088/1361-6560/aaae23.
920
Brandao P, Zisimopoulos O, Mazomenos E, Ciuti G, Bernal J, Visentini-Scarzanella M, Menciassi A, Dario P, Koulaouzidis A, Arezzo A, Hawkes DJ, Stoyanov D. Towards a Computed-Aided Diagnosis System in Colonoscopy: Automatic Polyp Segmentation Using Convolution Neural Networks. J Med Robot Res 2018. DOI: 10.1142/s2424905x18400020.
Abstract
Early diagnosis is essential for the successful treatment of bowel cancers, including colorectal cancer (CRC), and capsule endoscopic imaging with robotic actuation can be a valuable diagnostic tool when combined with automated image analysis. We present a deep learning rooted detection and segmentation framework for recognizing lesions in colonoscopy and capsule endoscopy images. We restructure established convolution architectures, such as VGG and ResNets, by converting them into fully convolutional networks (FCNs), fine-tune them, and study their capabilities for polyp segmentation and detection. We additionally use shape-from-shading (SfS) to recover depth and provide a richer representation of the tissue’s structure in colonoscopy images. Depth is incorporated into our network models as an additional input channel to the RGB information, and we demonstrate that the resulting network yields improved performance. Our networks are tested on publicly available datasets, and the most accurate segmentation model achieved a mean segmentation intersection over union (IU) of 47.78% and 56.95% on the ETIS-Larib and CVC-Colon datasets, respectively. For polyp detection, the top performing models we propose surpass the current state-of-the-art with detection recalls superior to 90% for all datasets tested. To our knowledge, we present the first work to use FCNs for polyp segmentation, in addition to proposing a novel combination of SfS and RGB that boosts performance.
Affiliation(s)
- Patrick Brandao
- Centre for Medical Image Computing, University College London, London, UK
- Gastone Ciuti
- The BioRobotics Institute, Scuola Superiore Sant’Anna, Pisa, Italy
- Jorge Bernal
- Department of Computer Science, Universitat Autònoma de Barcelona, Barcelona, Spain
- Paolo Dario
- The BioRobotics Institute, Scuola Superiore Sant’Anna, Pisa, Italy
- Alberto Arezzo
- Department of Surgical Sciences, University of Turin, Turin, Italy
- David J Hawkes
- Centre for Medical Image Computing, University College London, London, UK
- Danail Stoyanov
- Centre for Medical Image Computing, University College London, London, UK
921
Abstract
Background/Purpose: Acral melanoma is the most common type of melanoma in Asians, and it usually carries a poor prognosis due to late diagnosis. We applied a convolutional neural network to dermoscopy images of acral melanoma and benign nevi on the hands and feet and evaluated its usefulness for the early diagnosis of these conditions. Methods: A total of 724 dermoscopy images comprising acral melanoma (350 images from 81 patients) and benign nevi (374 images from 194 patients), all confirmed by histopathological examination, were analyzed in this study. To perform 2-fold cross-validation, we split them into two mutually exclusive subsets: half of the total image dataset was selected for training and the rest for testing, and we calculated the diagnostic accuracy, comparing it with evaluations by a dermatologist and a non-expert. Results: The accuracy (percentage of true positives and true negatives among all images) of the convolutional neural network was 83.51% and 80.23%, which was higher than the non-expert's evaluation (67.84% and 62.71%) and close to that of the expert (81.08% and 81.64%). Moreover, the convolutional neural network showed area-under-the-curve values of 0.8 and 0.84 and Youden's index values of 0.6795 and 0.6073, similar to the expert's scores. Conclusion: Although further data analysis is necessary to improve accuracy, convolutional neural networks could be helpful in detecting acral melanoma from dermoscopy images of the hands and feet.
922
923
Wang Z, Wei L, Wang L, Gao Y, Chen W, Shen D. Hierarchical Vertex Regression-Based Segmentation of Head and Neck CT Images for Radiotherapy Planning. IEEE Trans Image Process 2018; 27:923-937. PMID: 29757737. PMCID: PMC5954838. DOI: 10.1109/tip.2017.2768621.
Abstract
Segmenting organs at risk from head and neck CT images is a prerequisite for treating head and neck cancer with intensity-modulated radiotherapy. However, accurate and automatic segmentation of organs at risk is challenging due to the low contrast of soft tissue and image artifacts in CT images. Shape priors have proven effective in addressing this challenge, but conventional methods incorporating shape priors often suffer from sensitivity to shape initialization and to shape variations across individuals. In this paper, we propose a novel approach to incorporate shape priors into a hierarchical learning-based model. The contributions of our proposed approach are as follows: 1) a novel mechanism for critical vertex identification is proposed to identify vertices with distinctive appearances and strong consistency across different subjects; 2) a new strategy of hierarchical vertex regression is used to gradually locate more vertices with the guidance of previously located vertices; and 3) an innovative framework of joint shape and appearance learning is developed to capture salient shape and appearance features simultaneously. Using these strategies, our proposed approach can essentially overcome the drawbacks of conventional shape-based segmentation methods. Experimental results show that our approach achieves much better results than state-of-the-art methods.
924
Abstract
This paper introduces the "encoded local projections" (ELP) as a new dense-sampling image descriptor for search and classification problems. The gradient changes of multiple projections in local windows of gray-level images are encoded to build a histogram that captures spatial projection patterns. Using projections is a conventional technique in both medical imaging and computer vision. Furthermore, powerful dense-sampling methods, such as local binary patterns and the histogram of oriented gradients, are widely used for image classification and recognition. Inspired by many achievements of such existing descriptors, we explore the design of a new class of histogram-based descriptors with particular applications in medical imaging. We experiment with three public datasets (IRMA, Kimia Path24, and CT Emphysema) to comparatively evaluate the performance of ELP histograms. In light of the tremendous success of deep architectures, we also compare the results with deep features generated by pretrained networks. The results are quite encouraging as the ELP descriptor can surpass both conventional and deep descriptors in performance in several experimental settings.
925
Maruyama T, Hayashi N, Sato Y, Hyuga S, Wakayama Y, Watanabe H, Ogura A, Ogura T. Comparison of medical image classification accuracy among three machine learning methods. J Xray Sci Technol 2018; 26:885-893. PMID: 30223423. DOI: 10.3233/xst-18386.
Abstract
BACKGROUND Low-quality medical images may influence the accuracy of the machine learning process. OBJECTIVE This study was undertaken to compare the accuracy of medical image classification among machine learning methods, as classification is a basic aspect of clinical image inspection. METHODS Three types of machine learning methods were used: Support Vector Machine (SVM), Artificial Neural Network (ANN), and Convolutional Neural Network (CNN). To investigate changes in accuracy related to image quality, we constructed a single dataset using two different file formats, DICOM (Digital Imaging and Communications in Medicine) and JPEG (Joint Photographic Experts Group). RESULTS The JPEG format contains less gray-level information and data capacity than the DICOM format. CNN classification was accurate for both datasets, whereas SVM and ANN accuracy decreased with the loss of data from the DICOM to the JPEG format. CONCLUSIONS CNN is more accurate than conventional machine learning methods that rely on manual feature extraction.
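The data-capacity gap between the two formats, which drives the accuracy differences reported above, can be inspected directly (a sketch using pydicom and Pillow; the file names are hypothetical placeholders):

```python
import numpy as np
import pydicom
from PIL import Image

# DICOM typically stores 12-16 bits of gray-level information per pixel,
# whereas an 8-bit JPEG retains at most 256 intensity levels.
dcm = pydicom.dcmread("image.dcm")
print(dcm.BitsStored, dcm.pixel_array.max())   # e.g. 12, values up to 4095
jpg = np.asarray(Image.open("image.jpg").convert("L"))
print(jpg.dtype, jpg.max())                    # uint8, at most 255
```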
Affiliation(s)
- Tomoko Maruyama
- Department of Radiological Technology, Gunma Prefectural College of Health Sciences, Maebashi, Japan
- Norio Hayashi
- Department of Radiological Technology, Gunma Prefectural College of Health Sciences, Maebashi, Japan
- Yusuke Sato
- Graduate School of Radiological Technology, Gunma Prefectural College of Health Sciences, Maebashi, Japan
- Shingo Hyuga
- Department of Radiological Technology, Gunma Prefectural College of Health Sciences, Maebashi, Japan
- Yuta Wakayama
- Department of Radiological Technology, Gunma Prefectural College of Health Sciences, Maebashi, Japan
- Haruyuki Watanabe
- Department of Radiological Technology, Gunma Prefectural College of Health Sciences, Maebashi, Japan
- Akio Ogura
- Department of Radiological Technology, Gunma Prefectural College of Health Sciences, Maebashi, Japan
- Toshihiro Ogura
- Department of Radiological Technology, Gunma Prefectural College of Health Sciences, Maebashi, Japan
926
Haarburger C, Langenberg P, Truhn D, Schneider H, Thüring J, Schrading S, Kuhl CK, Merhof D. Transfer Learning for Breast Cancer Malignancy Classification based on Dynamic Contrast-Enhanced MR Images. Bildverarbeitung für die Medizin 2018. DOI: 10.1007/978-3-662-56537-7_61.
927
Mahapatra D, Bozorgtabar B, Thiran JP, Reyes M. Efficient Active Learning for Image Classification and Segmentation Using a Sample Selection and Conditional Generative Adversarial Network. Medical Image Computing and Computer Assisted Intervention – MICCAI 2018. DOI: 10.1007/978-3-030-00934-2_65.
928
Guo S, Yang Z. Multi-Channel-ResNet: An integration framework towards skin lesion analysis. Inform Med Unlocked 2018. DOI: 10.1016/j.imu.2018.06.006.
929
A Comparative Study of Modern Machine Learning Approaches for Focal Lesion Detection and Classification in Medical Images: BoVW, CNN and MTANN. Intelligent Systems Reference Library 2018. DOI: 10.1007/978-3-319-68843-5_2.
930
Involvement of Machine Learning for Breast Cancer Image Classification: A Survey. Comput Math Methods Med 2017; 2017:3781951. PMID: 29463985. PMCID: PMC5804413. DOI: 10.1155/2017/3781951.
Abstract
Breast cancer is one of the leading causes of death among women worldwide. Advanced engineering of natural image classification techniques and Artificial Intelligence methods has largely been applied to the breast-image classification task. Digital image classification offers doctors and physicians a second opinion and saves their time. Despite the various publications on breast image classification, very few review papers provide a detailed description of breast cancer image classification techniques, feature extraction and selection procedures, classification measuring parameterizations, and image classification findings. We place special emphasis on the Convolutional Neural Network (CNN) method for breast image classification. Along with the CNN method, we also describe the involvement of the conventional Neural Network (NN), logic-based classifiers such as the Random Forest (RF) algorithm, Support Vector Machines (SVM), Bayesian methods, and a few of the semisupervised and unsupervised methods that have been used for breast image classification.
931
Leveraging uncertainty information from deep neural networks for disease detection. Sci Rep 2017; 7:17816. PMID: 29259224. PMCID: PMC5736701. DOI: 10.1038/s41598-017-17876-z.
Abstract
Deep learning (DL) has revolutionized the field of computer vision and image processing. In medical imaging, algorithmic solutions based on DL have been shown to achieve high performance on tasks that previously required medical experts. However, DL-based solutions for disease detection have been proposed without methods to quantify and control their uncertainty in a decision. In contrast, a physician knows whether she is uncertain about a case and will consult more experienced colleagues if needed. Here we evaluate dropout-based Bayesian uncertainty measures for DL in diagnosing diabetic retinopathy (DR) from fundus images and show that they capture uncertainty better than straightforward alternatives. Furthermore, we show that uncertainty-informed decision referral can improve diagnostic performance. Experiments across different networks, tasks, and datasets show robust generalization. Depending on network capacity and task/dataset difficulty, we surpass the 85% sensitivity and 80% specificity recommended by the NHS when referring 0-20% of the most uncertain decisions for further inspection. We analyse causes of uncertainty by relating intuitions from 2D visualizations to the high-dimensional image space. While uncertainty is sensitive to clinically relevant cases, sensitivity to unfamiliar data samples is task dependent but can be rendered more robust.
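The dropout-based uncertainty measure evaluated above is commonly implemented as Monte-Carlo dropout; a minimal PyTorch sketch follows (the number of passes is an illustrative assumption, and a careful implementation would enable only the dropout layers rather than full train mode):

```python
import torch

def mc_dropout_predict(model, x, passes=20):
    """Run several stochastic forward passes with dropout active and use
    the spread of the softmax outputs as an uncertainty signal."""
    model.train()  # keeps dropout sampling (note: also affects batch norm)
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=1)
                             for _ in range(passes)])
    return probs.mean(dim=0), probs.std(dim=0)  # prediction, uncertainty
```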
932
Wang H, Zhou Z, Li Y, Chen Z, Lu P, Wang W, Liu W, Yu L. Comparison of machine learning methods for classifying mediastinal lymph node metastasis of non-small cell lung cancer from 18F-FDG PET/CT images. EJNMMI Res 2017; 7:11. PMID: 28130689. PMCID: PMC5272853. DOI: 10.1186/s13550-017-0260-9.
Abstract
BACKGROUND This study aimed to compare one state-of-the-art deep learning method and four classical machine learning methods for classifying mediastinal lymph node metastasis of non-small cell lung cancer (NSCLC) from 18F-FDG PET/CT images. Another objective was to compare the discriminative power of the recently popular PET/CT texture features with widely used diagnostic features such as tumor size, CT value, SUV, image contrast, and intensity standard deviation. The four classical machine learning methods were random forests, support vector machines, adaptive boosting, and an artificial neural network; the deep learning method was a convolutional neural network (CNN). The five methods were evaluated using 1397 lymph nodes collected from PET/CT images of 168 patients, with corresponding pathology analysis results as the gold standard. The comparison was conducted using 10 repetitions of 10-fold cross-validation based on the criteria of sensitivity, specificity, accuracy (ACC), and area under the ROC curve (AUC). For each classical method, different input features were compared to select the optimal feature set. Based on the optimal feature set, the classical methods were compared with the CNN, as well as with human doctors from our institute. RESULTS For the classical methods, the diagnostic features resulted in 81-85% ACC and 0.87-0.92 AUC, significantly higher than the results of the texture features. The CNN's sensitivity, specificity, ACC, and AUC were 84%, 88%, 86%, and 0.91, respectively. There was no significant difference between the results of the CNN and the best classical method. The sensitivity, specificity, and ACC of the human doctors were 73%, 90%, and 82%, respectively. All five machine learning methods had higher sensitivities but lower specificities than the human doctors. CONCLUSIONS The present study shows that the performance of the CNN is not significantly different from that of the best classical methods and human doctors for classifying mediastinal lymph node metastasis of NSCLC from PET/CT images. Because the CNN does not need tumor segmentation or feature calculation, it is more convenient and more objective than the classical methods. However, the CNN does not make use of the important diagnostic features, which have been proven more discriminative than the texture features for classifying small-sized lymph nodes. Therefore, incorporating the diagnostic features into the CNN is a promising direction for future research.
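The four classical baselines above map directly onto scikit-learn estimators; a simplified sketch on synthetic data follows (the authors' repeated 10x10-fold protocol is reduced to a single 10-fold run here, and all estimator settings are assumptions):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Placeholder feature matrix and labels standing in for the lymph-node data.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

classifiers = {
    "random forest": RandomForestClassifier(n_estimators=200),
    "SVM": SVC(probability=True),
    "AdaBoost": AdaBoostClassifier(),
    "ANN": MLPClassifier(max_iter=1000),
}
for name, clf in classifiers.items():
    auc = cross_val_score(clf, X, y, cv=10, scoring="roc_auc")
    print(f"{name}: AUC = {auc.mean():.3f} +/- {auc.std():.3f}")
```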
Affiliation(s)
- Hongkai Wang
- Department of Biomedical Engineering, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, No. 2 Linggong Street, Ganjingzi District, Dalian, Liaoning, 116024, China
- Zongwei Zhou
- Department of Biomedical Informatics and the College of Health Solutions, Arizona State University, 13212 East Shea Boulevard, Scottsdale, AZ, 85259, USA
- Yingci Li
- Center of PET/CT, The Affiliated Tumor Hospital of Harbin Medical University, 150 Haping Road, Nangang District, Harbin, Heilongjiang Province, 150081, China
- Zhonghua Chen
- Department of Biomedical Engineering, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, No. 2 Linggong Street, Ganjingzi District, Dalian, Liaoning, 116024, China
- Peiou Lu
- Center of PET/CT, The Affiliated Tumor Hospital of Harbin Medical University, 150 Haping Road, Nangang District, Harbin, Heilongjiang Province, 150081, China
- Wenzhi Wang
- Center of PET/CT, The Affiliated Tumor Hospital of Harbin Medical University, 150 Haping Road, Nangang District, Harbin, Heilongjiang Province, 150081, China
- Wanyu Liu
- HIT-INSA Sino French Research Centre for Biomedical Imaging, Harbin Institute of Technology, Harbin, Heilongjiang, 150001, China
- Lijuan Yu
- Center of PET/CT, The Affiliated Tumor Hospital of Harbin Medical University, 150 Haping Road, Nangang District, Harbin, Heilongjiang Province, 150081, China
933
Co-trained convolutional neural networks for automated detection of prostate cancer in multi-parametric MRI. Med Image Anal 2017; 42:212-227. DOI: 10.1016/j.media.2017.08.006.
934
Litjens G, Kooi T, Bejnordi BE, Setio AAA, Ciompi F, Ghafoorian M, van der Laak JAWM, van Ginneken B, Sánchez CI. A survey on deep learning in medical image analysis. Med Image Anal 2017; 42:60-88. PMID: 28778026. DOI: 10.1016/j.media.2017.07.005.
Affiliation(s)
- Geert Litjens
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands.
- Thijs Kooi
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
- Francesco Ciompi
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
- Mohsen Ghafoorian
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
- Bram van Ginneken
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
- Clara I Sánchez
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
935
Olczak J, Fahlberg N, Maki A, Razavian AS, Jilert A, Stark A, Sköldenberg O, Gordon M. Artificial intelligence for analyzing orthopedic trauma radiographs. Acta Orthop 2017; 88:581-586. PMID: 28681679. PMCID: PMC5694800. DOI: 10.1080/17453674.2017.1344459.
Abstract
Background and purpose - Recent advances in artificial intelligence (deep learning) have shown remarkable performance in classifying non-medical images, and the technology is believed to be the next technological revolution. So far it has never been applied in an orthopedic setting, and in this study we sought to determine the feasibility of using deep learning for skeletal radiographs. Methods - We extracted 256,000 wrist, hand, and ankle radiographs from Danderyd Hospital and identified 4 classes: fracture, laterality, body part, and exam view. We then selected 5 openly available deep learning networks that were adapted for these images. The most accurate network was benchmarked against a gold standard for fractures. We furthermore compared the network's performance with that of 2 senior orthopedic surgeons who reviewed images at the same resolution as the network. Results - All networks exhibited an accuracy of at least 90% when identifying laterality, body part, and exam view. The final accuracy for fractures was estimated at 83% for the best-performing network. The network performed similarly to the senior orthopedic surgeons when presented with images at the same resolution. Under these conditions, the two-reviewer Cohen's kappa was 0.76. Interpretation - This study supports the use of artificial intelligence, which can perform at a human level, for orthopedic radiographs. While the current implementation lacks important features that surgeons require, e.g. risk of dislocation, classifications, measurements, and combining multiple exam views, these problems have technical solutions that are waiting to be implemented for orthopedics.
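The inter-reviewer agreement reported above is Cohen's kappa, which can be computed with scikit-learn in one line (toy labels for illustration; 1 = fracture, 0 = no fracture):

```python
from sklearn.metrics import cohen_kappa_score

# Agreement between two reviewers' independent fracture calls.
reviewer_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
reviewer_b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]
print(cohen_kappa_score(reviewer_a, reviewer_b))  # 1.0 = perfect agreement
```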
Affiliation(s)
- Jakub Olczak
- Department of Clinical Sciences, Karolinska Institutet, Danderyd Hospital
- Atsuto Maki
- Department of Robotics, Perception and Learning (RPL), School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden
- Ali Sharif Razavian
- Department of Clinical Sciences, Karolinska Institutet, Danderyd Hospital
- Department of Robotics, Perception and Learning (RPL), School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden
- Anthony Jilert
- Radiology clinic, Danderyd Hospital, Danderyd Hospital AB
- André Stark
- Department of Clinical Sciences, Karolinska Institutet, Danderyd Hospital
- Olof Sköldenberg
- Department of Clinical Sciences, Karolinska Institutet, Danderyd Hospital
- Max Gordon
- Department of Clinical Sciences, Karolinska Institutet, Danderyd Hospital
936
Jiang J, Liu X, Zhang K, Long E, Wang L, Li W, Liu L, Wang S, Zhu M, Cui J, Liu Z, Lin Z, Li X, Chen J, Cao Q, Li J, Wu X, Wang D, Wang J, Lin H. Automatic diagnosis of imbalanced ophthalmic images using a cost-sensitive deep convolutional neural network. Biomed Eng Online 2017; 16:132. PMID: 29157240. PMCID: PMC5697161. DOI: 10.1186/s12938-017-0420-1.
Abstract
BACKGROUND Ocular images play an essential role in ophthalmological diagnoses. Imbalanced datasets are an inevitable issue in automated ocular disease diagnosis; the scarcity of positive samples always tends to result in the misdiagnosis of severe patients during the classification task. Exploring an effective computer-aided diagnostic method to deal with imbalanced ophthalmological datasets is therefore crucial. METHODS In this paper, we develop an effective cost-sensitive deep residual convolutional neural network (CS-ResCNN) classifier to diagnose ophthalmic diseases using retro-illumination images. First, the regions of interest (crystalline lens) are automatically identified via twice-applied Canny detection and Hough transformation, and the localized zones are fed into the CS-ResCNN to extract high-level features for subsequent automatic diagnosis. Second, the impact of cost factors on the CS-ResCNN is analyzed using a grid-search procedure to verify that our proposed system is robust and efficient. RESULTS Qualitative analyses and quantitative experimental results demonstrate that our proposed method outperforms other conventional approaches and offers exceptional mean accuracy (92.24%), specificity (93.19%), sensitivity (89.66%), and AUC (97.11%) results. Moreover, the sensitivity of the CS-ResCNN is enhanced by over 13.6% compared to the native CNN method. CONCLUSION Our study provides a practical strategy for addressing imbalanced ophthalmological datasets and has the potential to be applied to other medical images. The developed and deployed CS-ResCNN could serve as computer-aided diagnosis software for ophthalmologists in clinical applications.
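Cost-sensitive training of the kind described above is often realised by weighting the loss per class; a minimal PyTorch sketch follows (the weight values are illustrative assumptions, not the paper's grid-searched cost factors):

```python
import torch
import torch.nn as nn

# Penalize misclassifying the rare diseased class more heavily than the
# majority class by passing class weights to the cross-entropy loss.
class_weights = torch.tensor([1.0, 5.0])   # [majority, minority]
criterion = nn.CrossEntropyLoss(weight=class_weights)
# loss = criterion(model(images), labels)  # inside the training loop
```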
Affiliation(s)
- Jiewei Jiang: School of Computer Science and Technology, Xidian University, No. 2 South Taibai Rd, Xi’an 710071, China
- Xiyang Liu: School of Computer Science and Technology, Xidian University, Xi’an 710071, China; School of Software, Xidian University, Xi’an 710071, China
- Kai Zhang: School of Computer Science and Technology, Xidian University, No. 2 South Taibai Rd, Xi’an 710071, China
- Erping Long: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Xian Lie South Road 54#, Guangzhou 510060, China
- Liming Wang: School of Computer Science and Technology, Xidian University, Xi’an 710071, China; School of Software, Xidian University, Xi’an 710071, China
- Wangting Li: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Xian Lie South Road 54#, Guangzhou 510060, China
- Lin Liu: School of Computer Science and Technology, Xidian University, No. 2 South Taibai Rd, Xi’an 710071, China
- Shuai Wang: School of Software, Xidian University, No. 2 South Taibai Rd, Xi’an 710071, China
- Mingmin Zhu: School of Mathematics and Statistics, Xidian University, Xi’an 710071, China
- Jiangtao Cui: School of Computer Science and Technology, Xidian University, No. 2 South Taibai Rd, Xi’an 710071, China
- Zhenzhen Liu, Zhuoling Lin, Xiaoyan Li, Jingjing Chen, Qianzhong Cao, Jing Li, Xiaohang Wu, Dongni Wang, Jinghui Wang, Haotian Lin: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Xian Lie South Road 54#, Guangzhou 510060, China
|
937
|
Samala RK, Chan HP, Hadjiiski LM, Helvie MA, Cha KH, Richter CD. Multi-task transfer learning deep convolutional neural network: application to computer-aided diagnosis of breast cancer on mammograms. Phys Med Biol 2017; 62:8894-8908. [PMID: 29035873 PMCID: PMC5859950 DOI: 10.1088/1361-6560/aa93d4] [Citation(s) in RCA: 96] [Impact Index Per Article: 13.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
Transfer learning in deep convolutional neural networks (DCNNs) is an important step in their application to medical imaging tasks. We propose a multi-task transfer learning DCNN with the aim of translating the 'knowledge' learned from non-medical images to medical diagnostic tasks through supervised training, and of increasing the generalization capability of DCNNs by simultaneously learning auxiliary tasks. We studied this approach in an important application: classification of malignant and benign breast masses. With Institutional Review Board (IRB) approval, digitized screen-film mammograms (SFMs) and digital mammograms (DMs) were collected from our patient files, and additional SFMs were obtained from the Digital Database for Screening Mammography. The data set consisted of 2242 views with 2454 masses (1057 malignant, 1397 benign). In single-task transfer learning, the DCNN was trained and tested on SFMs. In multi-task transfer learning, SFMs and DMs were used to train the DCNN, which was then tested on SFMs. N-fold cross-validation with the training set was used for training and parameter optimization. On the independent test set, the multi-task transfer learning DCNN had significantly (p = 0.007) higher performance than the single-task transfer learning DCNN. This study demonstrates that multi-task transfer learning may be an effective approach for training DCNNs in medical imaging applications when training samples from a single modality are limited.
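A common way to realize multi-task transfer learning is a shared ImageNet-pretrained backbone with one classification head per modality, trained on a joint loss so the shared features must serve both tasks. A hedged PyTorch sketch of that pattern (a ResNet-18 stand-in and random tensors, not the authors' architecture or data):

```python
import torch
import torch.nn as nn
from torchvision import models

class MultiTaskNet(nn.Module):
    """Shared pretrained backbone with one head per mammographic modality."""
    def __init__(self):
        super().__init__()
        # Pretrained weights download on first use; weights=None works offline.
        backbone = models.resnet18(weights="IMAGENET1K_V1")
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        self.head_sfm = nn.Linear(512, 2)  # main task: malignant vs benign on SFM
        self.head_dm = nn.Linear(512, 2)   # auxiliary task on DM

    def forward(self, x, task):
        z = self.features(x).flatten(1)
        return self.head_sfm(z) if task == "sfm" else self.head_dm(z)

model = MultiTaskNet()
criterion = nn.CrossEntropyLoss()
sfm_x, dm_x = torch.randn(4, 3, 224, 224), torch.randn(4, 3, 224, 224)
sfm_y, dm_y = torch.randint(0, 2, (4,)), torch.randint(0, 2, (4,))
# Joint loss over both tasks, so the shared features must generalize.
loss = criterion(model(sfm_x, "sfm"), sfm_y) + criterion(model(dm_x, "dm"), dm_y)
loss.backward()
```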
Affiliation(s)
- Ravi K Samala: Department of Radiology, University of Michigan, Ann Arbor, MI 48109-5842, United States of America
|
938
|
Carneiro G, Nascimento J, Bradley AP. Automated Analysis of Unregistered Multi-View Mammograms With Deep Learning. IEEE TRANSACTIONS ON MEDICAL IMAGING 2017; 36:2355-2365. [PMID: 28920897 DOI: 10.1109/tmi.2017.2751523] [Citation(s) in RCA: 67] [Impact Index Per Article: 9.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
We describe an automated methodology for the analysis of unregistered cranio-caudal (CC) and medio-lateral oblique (MLO) mammography views in order to estimate the patient's risk of developing breast cancer. The main innovation behind this methodology lies in the use of deep learning models for the problem of jointly classifying unregistered mammogram views and the respective segmentation maps of breast lesions (i.e., masses and micro-calcifications). This is a holistic methodology that can classify a whole mammographic exam, containing the CC and MLO views and the segmentation maps, as opposed to the classification of individual lesions, which is the dominant approach in the field. We also demonstrate that the proposed system is capable of using the segmentation maps generated by automated mass and micro-calcification detection systems while still producing accurate results. The semi-automated approach (using manually defined mass and micro-calcification segmentation maps) is tested on two publicly available data sets (INbreast and DDSM), and results show that the volume under ROC surface (VUS) for a 3-class problem (normal tissue, benign, and malignant) is over 0.9, the area under ROC curve (AUC) for the 2-class "benign versus malignant" problem is over 0.9, and the AUC for the 2-class breast screening problem (malignancy versus normal/benign) is also over 0.9. For the fully automated approach, the VUS result on INbreast is over 0.7, the AUC for the 2-class "benign versus malignant" problem is over 0.78, and the AUC for the 2-class breast screening problem is 0.86.
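The key architectural idea, classifying an exam from unregistered views, can be sketched as one feature branch per view with fusion before the final classifier. A minimal PyTorch illustration under that assumption (tiny stand-in branches, not the paper's networks, and omitting the segmentation-map inputs):

```python
import torch
import torch.nn as nn

def view_branch():
    """Small convolutional feature extractor for one mammographic view."""
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

class TwoViewClassifier(nn.Module):
    """Joint exam-level classifier over unregistered CC and MLO views."""
    def __init__(self, n_classes=3):  # normal / benign / malignant
        super().__init__()
        self.cc, self.mlo = view_branch(), view_branch()
        self.classifier = nn.Linear(32 + 32, n_classes)

    def forward(self, cc_img, mlo_img):
        # No registration needed: each view is encoded independently and
        # the exam-level decision is made on the concatenated features.
        fused = torch.cat([self.cc(cc_img), self.mlo(mlo_img)], dim=1)
        return self.classifier(fused)

model = TwoViewClassifier()
logits = model(torch.randn(2, 1, 128, 128), torch.randn(2, 1, 128, 128))
print(logits.shape)  # torch.Size([2, 3])
```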
|
939
|
Epithelium-Stroma Classification via Convolutional Neural Networks and Unsupervised Domain Adaptation in Histopathological Images. IEEE J Biomed Health Inform 2017; 21:1625-1632. [DOI: 10.1109/jbhi.2017.2691738] [Citation(s) in RCA: 44] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
|
940
|
Application of deep convolutional neural network for automated detection of myocardial infarction using ECG signals. Inf Sci (N Y) 2017. [DOI: 10.1016/j.ins.2017.06.027] [Citation(s) in RCA: 474] [Impact Index Per Article: 67.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/02/2023]
|
941
|
Loh BCS, Then PHH. Deep learning for cardiac computer-aided diagnosis: benefits, issues & solutions. Mhealth 2017; 3:45. [PMID: 29184897 PMCID: PMC5682365 DOI: 10.21037/mhealth.2017.09.01] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/13/2016] [Accepted: 08/28/2017] [Indexed: 12/27/2022] Open
Abstract
Cardiovascular diseases are among the top causes of death worldwide. In developing nations and rural areas, difficulties with diagnosis and treatment are made worse by the scarcity of healthcare facilities. A viable solution to this issue is telemedicine, which involves delivering health care and sharing medical knowledge at a distance. mHealth, the use of mobile devices for medical care, has also proven to be a feasible option. The integration of telemedicine, mHealth and computer-aided diagnosis systems with the fields of machine and deep learning has enabled the creation of effective services that are adaptable to a multitude of scenarios. The objective of this review is to provide an overview of heart disease diagnosis and management, especially within the context of rural healthcare, and to discuss the benefits, issues and solutions of implementing deep learning algorithms to improve the efficacy of relevant medical applications.
Affiliation(s)
- Brian C S Loh: Swinburne University of Technology Sarawak Campus, Kuching, Sarawak, Malaysia
- Patrick H H Then: Swinburne University of Technology Sarawak Campus, Kuching, Sarawak, Malaysia
|
942
|
Zhen X, Chen J, Zhong Z, Hrycushko B, Zhou L, Jiang S, Albuquerque K, Gu X. Deep convolutional neural network with transfer learning for rectum toxicity prediction in cervical cancer radiotherapy: a feasibility study. Phys Med Biol 2017; 62:8246-8263. [PMID: 28914611 DOI: 10.1088/1361-6560/aa8d09] [Citation(s) in RCA: 103] [Impact Index Per Article: 14.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/27/2022]
Abstract
Better understanding of the dose-toxicity relationship is critical for safe dose escalation to improve local control in late-stage cervical cancer radiotherapy. In this study, we introduced a convolutional neural network (CNN) model to analyze rectum dose distributions and predict rectum toxicity. Forty-two cervical cancer patients treated with combined external beam radiotherapy (EBRT) and brachytherapy (BT) were retrospectively collected, including twelve toxicity patients and thirty non-toxicity patients. We adopted a transfer learning strategy to overcome the limited patient data issue: a 16-layer CNN developed by the Visual Geometry Group (VGG-16) of the University of Oxford was pre-trained on a large-scale natural image database, ImageNet, and fine-tuned with patient rectum surface dose maps (RSDMs), i.e. accumulated EBRT + BT doses on the unfolded rectum surface. We used adaptive synthetic sampling and data augmentation to address two challenges: data imbalance and data scarcity. Gradient-weighted class activation maps (Grad-CAM) were also generated to highlight the discriminative regions on the RSDM alongside the prediction model. We compared different strategies for fine-tuning the CNN coefficients, and compared the predictive performance against traditional dose-volume parameters, e.g. D0.1/1/2cc, and texture features extracted from the RSDM. Satisfactory prediction performance was achieved with the proposed scheme, and we found that the mean Grad-CAM over the toxicity patient group is geometrically consistent with the statistical analysis result, indicating a possible rectum toxicity location. The evaluation results demonstrate the feasibility of building a CNN-based rectum dose-toxicity prediction model with transfer learning for cervical cancer radiotherapy.
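Grad-CAM, used above to localize discriminative RSDM regions, weights each convolutional feature map by the average gradient of the class score flowing into it. A minimal sketch with hooks on a VGG-16 stand-in (the untrained two-class head and the random input are illustrative assumptions, not the paper's fine-tuned model):

```python
import torch
from torchvision import models

# Hypothetical two-class (toxicity vs non-toxicity) VGG-16 head.
model = models.vgg16(weights=None)
model.classifier[6] = torch.nn.Linear(4096, 2)
model.eval()

activations, gradients = {}, {}
target_layer = model.features[28]  # last conv layer of VGG-16

target_layer.register_forward_hook(lambda m, i, o: activations.update(a=o))
target_layer.register_full_backward_hook(lambda m, gi, go: gradients.update(g=go[0]))

x = torch.randn(1, 3, 224, 224)  # random stand-in for an RSDM
score = model(x)[0, 1]           # logit of the toxicity class
score.backward()

# Grad-CAM: weight each feature map by its average gradient, sum, then ReLU.
weights = gradients["g"].mean(dim=(2, 3), keepdim=True)
cam = torch.relu((weights * activations["a"]).sum(dim=1)).squeeze()
print(cam.shape)  # coarse 14x14 saliency map
```

The coarse map would then be upsampled to the RSDM resolution and overlaid for visualization.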
Affiliation(s)
- Xin Zhen: Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America; Department of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong 510515, People's Republic of China
|
943
|
Exploring the Notion of Context in Medical Data. ADVANCES IN EXPERIMENTAL MEDICINE AND BIOLOGY 2017. [PMID: 28971415 DOI: 10.1007/978-3-319-57348-9_4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 08/16/2023]
Abstract
Scientific and technological knowledge and skills are becoming crucial for most data analysis activities. Computer science and medicine are two rather distinct but collaborating domains; the former offers significant aid towards a more efficient understanding of the latter's research trends. Still, the process of meaningfully analyzing and understanding medical information and data is a tedious one, bound to several challenges. One of them is the efficient utilization of contextual information in the process, leading to optimized, context-aware data analysis results. Researchers today are provided with tools and opportunities to analytically study medical data, but significant and rather complex computational challenges remain to be tackled, due, among other factors, to the humanistic nature of the data and the increasing rate at which related hardware and applications produce new content and information. The ultimate goal of this position paper is therefore to provide interested parties with an overview of the major types of contextual information to be identified within the medical data processing framework.
|
944
|
Liu GS, Zhu MH, Kim J, Raphael P, Applegate BE, Oghalai JS. ELHnet: a convolutional neural network for classifying cochlear endolymphatic hydrops imaged with optical coherence tomography. BIOMEDICAL OPTICS EXPRESS 2017; 8:4579-4594. [PMID: 29082086 PMCID: PMC5654801 DOI: 10.1364/boe.8.004579] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/10/2017] [Revised: 09/03/2017] [Accepted: 09/14/2017] [Indexed: 05/22/2023]
Abstract
Detection of endolymphatic hydrops is important for diagnosing Meniere's disease, and can be performed non-invasively using optical coherence tomography (OCT) in animal models as well as potentially in the clinic. Here, we developed ELHnet, a convolutional neural network that classifies endolymphatic hydrops in a mouse model using learned features from OCT images of mouse cochleae. We trained ELHnet on 2159 training and validation images from 17 mice, using only the image pixels and observer-determined labels of endolymphatic hydrops as the inputs. We tested ELHnet on 37 images from 37 mice that were not previously used, and found that the network correctly classified 34 of the 37 mice. This demonstrates an improvement in performance over previous work on computer-aided classification of endolymphatic hydrops. To the best of our knowledge, this is the first deep CNN designed for endolymphatic hydrops classification.
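One detail worth noting is that the 37 test images come from 37 mice never seen during training, so the reported accuracy reflects generalization to new subjects rather than memorization of individual cochleae. A small scikit-learn sketch of that subject-level splitting (the feature matrix and group ids are hypothetical stand-ins):

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

# Hypothetical data: one group id per mouse, several images per mouse.
n_images = 200
mouse_id = np.random.randint(0, 17, n_images)  # 17 training/validation mice
X = np.random.randn(n_images, 64)              # stand-in image features
y = np.random.randint(0, 2, n_images)          # hydrops vs normal

splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, val_idx = next(splitter.split(X, y, groups=mouse_id))

# No mouse appears on both sides of the split, so validation measures
# generalization to unseen subjects.
assert set(mouse_id[train_idx]).isdisjoint(mouse_id[val_idx])
```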
Affiliation(s)
- George S. Liu: Department of Otolaryngology–Head and Neck Surgery, Stanford University, 801 Welch Road, Stanford, CA 94305, USA
- Michael H. Zhu: Department of Computer Science, Stanford University, 353 Serra Mall, Stanford, CA 94305, USA
- Jinkyung Kim: Department of Otolaryngology–Head and Neck Surgery, Stanford University, 801 Welch Road, Stanford, CA 94305, USA
- Patrick Raphael: Department of Otolaryngology–Head and Neck Surgery, Stanford University, 801 Welch Road, Stanford, CA 94305, USA
- Brian E. Applegate: Department of Biomedical Engineering, Texas A&M University, 5059 Emerging Technology Building, 3120 TAMU, College Station, TX 77843, USA
- John S. Oghalai: USC Caruso Department of Otolaryngology-Head and Neck Surgery, 1540 Alcazar, Suite 204M, Los Angeles, CA 90033, USA
|
945
|
Classification of Architectural Heritage Images Using Deep Learning Techniques. APPLIED SCIENCES-BASEL 2017. [DOI: 10.3390/app7100992] [Citation(s) in RCA: 64] [Impact Index Per Article: 9.1] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/21/2022]
|
946
|
Zhang X, Hu W, Chen F, Liu J, Yang Y, Wang L, Duan H, Si J. Gastric precancerous diseases classification using CNN with a concise model. PLoS One 2017; 12:e0185508. [PMID: 28950010 PMCID: PMC5614663 DOI: 10.1371/journal.pone.0185508] [Citation(s) in RCA: 30] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2017] [Accepted: 09/14/2017] [Indexed: 12/18/2022] Open
Abstract
Gastric precancerous diseases (GPD) may deteriorate into early gastric cancer if misdiagnosed, so it is important to help doctors recognize GPD accurately and quickly. In this paper, we address the 3-class classification of GPD, namely polyp, erosion, and ulcer, using convolutional neural networks (CNN) with a concise model called the Gastric Precancerous Disease Network (GPDNet). GPDNet introduces fire modules from SqueezeNet to reduce the model size and parameter count by about a factor of 10 while improving speed for quick classification. To maintain classification accuracy with fewer parameters, we propose an innovative method called iterative reinforced learning (IRL). After training GPDNet from scratch, we apply IRL to fine-tune the parameters whose values are close to 0, and then take the modified model as a pretrained model for the next training round. The results show that IRL improves the accuracy by about 9% after 6 iterations. The final classification accuracy of our GPDNet was 88.90%, which is promising for clinical GPD recognition.
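The abstract's description of IRL is brief, so the following is only a loose sketch of one plausible reading: after normal training, restrict further fine-tuning to the weights whose magnitudes are near zero, then reuse the result as the pretrained model for the next round. The tiny model, threshold, and random batches are stand-ins, and the authors' exact update rule may differ:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for GPDNet.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64),
                      nn.ReLU(), nn.Linear(64, 3))
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
criterion = nn.CrossEntropyLoss()

def irl_round(tau=1e-3, steps=10):
    """One IRL iteration, loosely sketched: fine-tune only the weights
    whose magnitudes are near zero, keeping the rest fixed."""
    masks = {n: (p.abs() < tau).float() for n, p in model.named_parameters()}
    for _ in range(steps):
        x = torch.randn(8, 3, 32, 32)   # stand-in endoscopy batch
        y = torch.randint(0, 3, (8,))   # polyp / erosion / ulcer
        loss = criterion(model(x), y)
        optimizer.zero_grad()
        loss.backward()
        with torch.no_grad():
            for n, p in model.named_parameters():
                p.grad *= masks[n]      # gradients survive only on near-zero weights
        optimizer.step()

for _ in range(6):  # the paper reports ~9% accuracy gain after 6 iterations
    irl_round()
```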
Affiliation(s)
- Xu Zhang: College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, China; Key Laboratory of Biomedical Engineering, Ministry of Education, Zhejiang University, Hangzhou, China
- Weiling Hu: Department of Gastroenterology, Sir Run Run Shaw Hospital, Zhejiang University, Hangzhou, China; Institute of Gastroenterology, Zhejiang University, Hangzhou, China
- Fei Chen: Institute of Gastroenterology, Zhejiang University, Hangzhou, China; Department of Gastroenterology, Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Jiquan Liu: College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, China; Key Laboratory of Biomedical Engineering, Ministry of Education, Zhejiang University, Hangzhou, China
- Yuanhang Yang: College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, China; Key Laboratory of Biomedical Engineering, Ministry of Education, Zhejiang University, Hangzhou, China
- Liangjing Wang: Institute of Gastroenterology, Zhejiang University, Hangzhou, China; Department of Gastroenterology, Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Huilong Duan: College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, China; Key Laboratory of Biomedical Engineering, Ministry of Education, Zhejiang University, Hangzhou, China
- Jianmin Si: Department of Gastroenterology, Sir Run Run Shaw Hospital, Zhejiang University, Hangzhou, China; Institute of Gastroenterology, Zhejiang University, Hangzhou, China
|
947
|
Han S, Kang HK, Jeong JY, Park MH, Kim W, Bang WC, Seong YK. A deep learning framework for supporting the classification of breast lesions in ultrasound images. Phys Med Biol 2017; 62:7714-7728. [PMID: 28753132 DOI: 10.1088/1361-6560/aa82ec] [Citation(s) in RCA: 177] [Impact Index Per Article: 25.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Abstract
In this research, we exploited the deep learning framework to differentiate the distinctive types of lesions and nodules in breast ultrasound images. A biopsy-proven benchmarking dataset was built from 5151 patient cases containing a total of 7408 ultrasound breast images of semi-automatically segmented lesions associated with masses. The dataset comprised 4254 benign and 3154 malignant lesions. The developed method includes histogram equalization, image cropping and margin augmentation. The GoogLeNet convolutional neural network was trained on the database to differentiate benign and malignant tumors. The network was trained on the data both with and without augmentation, and in both cases showed an area under the curve of over 0.9, an accuracy of about 90%, a sensitivity of 0.86 and a specificity of 0.96. Although target regions of interest (ROIs) were selected by radiologists, meaning that radiologists still have to point out the location of the ROI, the classification of malignant lesions showed promising results. Used by radiologists in clinical situations, the method can classify malignant lesions in a short time and support their diagnoses. The proposed method can therefore work in tandem with human radiologists to improve performance, which is a fundamental purpose of computer-aided diagnosis.
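The preprocessing steps named above (histogram equalization, cropping, margin augmentation) are straightforward to sketch with OpenCV; the bounding box, margin values, and output size below are hypothetical choices, not the paper's exact settings:

```python
import numpy as np
import cv2

def preprocess_lesion(image_gray, box, margins=(0.0, 0.1, 0.2)):
    """Equalize the histogram, then crop the lesion ROI at several
    margins as a simple form of augmentation. `box` is a hypothetical
    (x, y, w, h) lesion bounding box from segmentation."""
    equalized = cv2.equalizeHist(image_gray)  # spread the intensity histogram
    x, y, w, h = box
    crops = []
    for m in margins:                         # margin augmentation
        dx, dy = int(w * m), int(h * m)
        x0, y0 = max(x - dx, 0), max(y - dy, 0)
        x1 = min(x + w + dx, equalized.shape[1])
        y1 = min(y + h + dy, equalized.shape[0])
        crops.append(cv2.resize(equalized[y0:y1, x0:x1], (224, 224)))
    return crops

# Hypothetical ultrasound frame and lesion box.
frame = (np.random.rand(480, 640) * 255).astype(np.uint8)
patches = preprocess_lesion(frame, box=(200, 150, 120, 90))
print(len(patches), patches[0].shape)  # 3 (224, 224)
```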
Affiliation(s)
- Seokmin Han: Korea National University of Transportation, Uiwang-si, Kyunggi-do, Republic of Korea
|
948
|
Li H, Giger ML, Huynh BQ, Antropova NO. Deep learning in breast cancer risk assessment: evaluation of convolutional neural networks on a clinical dataset of full-field digital mammograms. J Med Imaging (Bellingham) 2017; 4:041304. [PMID: 28924576 DOI: 10.1117/1.jmi.4.4.041304] [Citation(s) in RCA: 33] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2017] [Accepted: 08/18/2017] [Indexed: 01/11/2023] Open
Abstract
We evaluated deep learning in the assessment of breast cancer risk, using convolutional neural networks (CNNs) with transfer learning to extract parenchymal characteristics directly from full-field digital mammographic (FFDM) images instead of using computerized radiographic texture analysis (RTA). A total of 456 clinical FFDM cases were included: a "high-risk" BRCA1/2 gene-mutation carriers dataset (53 cases), a "high-risk" unilateral cancer patients dataset (75 cases), and a "low-risk" dataset (328 cases). Deep learning was compared to the use of features from RTA, as well as to a combination of both, in the task of distinguishing between high- and low-risk subjects. Similar classification performances were obtained using CNN [area under the curve [Formula: see text]; standard error [Formula: see text]] and RTA ([Formula: see text]; [Formula: see text]) in distinguishing BRCA1/2 carriers and low-risk women. However, in distinguishing unilateral cancer patients and low-risk women, performance was significantly greater with CNN ([Formula: see text]; [Formula: see text]) than with RTA ([Formula: see text]; [Formula: see text]). Fusion classifiers performed significantly better than the RTA-alone classifiers, with AUC values of 0.86 and 0.84 in differentiating BRCA1/2 carriers from low-risk women and unilateral cancer patients from low-risk women, respectively. In conclusion, deep learning-extracted parenchymal characteristics from FFDMs performed as well as, or better than, conventional texture analysis in the task of distinguishing between cancer risk populations.
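The "parenchymal characteristics" in such transfer-learning studies are typically activations pooled from a pretrained network's intermediate layers. A brief sketch of that extraction step, assuming an ImageNet-pretrained VGG-16 as a fixed feature extractor and a random stand-in for an FFDM region of interest (pretrained weights download on first use):

```python
import torch
from torchvision import models

# Fixed, pretrained convolutional stack used purely as a feature extractor.
vgg = models.vgg16(weights="IMAGENET1K_V1").features.eval()

roi = torch.randn(1, 3, 224, 224)            # stand-in FFDM ROI
with torch.no_grad():
    fmap = vgg(roi)                          # (1, 512, 7, 7) feature maps
features = fmap.mean(dim=(2, 3)).squeeze(0)  # global average pooling -> 512-dim
print(features.shape)                        # torch.Size([512])
```

The pooled vector would then feed a conventional classifier, alone or fused with texture features, as in the fusion classifiers reported above.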
Affiliation(s)
- Hui Li: University of Chicago, Department of Radiology, Chicago, Illinois, United States
- Maryellen L Giger: University of Chicago, Department of Radiology, Chicago, Illinois, United States
- Benjamin Q Huynh: University of Chicago, Department of Radiology, Chicago, Illinois, United States
- Natalia O Antropova: University of Chicago, Department of Radiology, Chicago, Illinois, United States
|
949
|
Zhou L, Yu Q, Xu X, Gu Y, Yang J. Improving dense conditional random field for retinal vessel segmentation by discriminative feature learning and thin-vessel enhancement. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2017; 148:13-25. [PMID: 28774435 DOI: 10.1016/j.cmpb.2017.06.016] [Citation(s) in RCA: 36] [Impact Index Per Article: 5.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/13/2016] [Revised: 05/28/2017] [Accepted: 06/23/2017] [Indexed: 06/07/2023]
Abstract
BACKGROUND AND OBJECTIVES As retinal vessels in color fundus images are thin and elongated structures, standard pairwise random fields, which suffer from the "shrinking bias" problem, are not competent for this segmentation task. Recently, a dense conditional random field (CRF) model has been successfully used in retinal vessel segmentation. Its energy function is formulated as a linear combination of several unary features and a pairwise term. However, the hand-crafted unary features can be suboptimal in terms of linear models. Here we propose to learn discriminative unary features and to enhance thin vessels for the pairwise potentials, to further improve segmentation performance. METHODS Our proposed method comprises four main steps: first, image preprocessing is applied to eliminate the strong edges around the field of view (FOV) and to normalize the luminosity and contrast inside the FOV; second, a convolutional neural network (CNN) is trained to generate discriminative features for linear models; third, a combination of filters is applied to enhance thin vessels, reducing the intensity difference between thin and wide vessels; fourth, taking the discriminative features as unary potentials and the thin-vessel-enhanced image for the pairwise potentials, we adopt the dense CRF model to achieve the final retinal vessel segmentation. Segmentation performance is evaluated on four public datasets (DRIVE, STARE, CHASEDB1 and HRF). RESULTS Experimental results show that our proposed method improves the performance of the dense CRF model and outperforms other methods when evaluated in terms of F1-score, Matthews correlation coefficient (MCC) and G-mean, three effective metrics for evaluating imbalanced binary classification. Specifically, the F1-score, MCC and G-mean are 0.7942, 0.7656 and 0.8835 for DRIVE; 0.8017, 0.7830 and 0.8859 for STARE; 0.7644, 0.7398 and 0.8579 for CHASEDB1; and 0.7627, 0.7402 and 0.8812 for HRF. CONCLUSIONS The discriminative features learned by CNNs are more effective than hand-crafted ones, and our proposed method performs well in retinal vessel segmentation. The architecture of our method is trainable and can be integrated into computer-aided diagnostic (CAD) systems in the future.
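F1-score, MCC and G-mean are sensible choices when vessel pixels are a small minority of the image. A short sketch computing all three with scikit-learn on hypothetical pixel-level predictions:

```python
import numpy as np
from sklearn.metrics import f1_score, matthews_corrcoef, confusion_matrix

# Hypothetical pixel-level vessel predictions vs. ground truth.
y_true = np.random.randint(0, 2, 10_000)
y_pred = np.random.randint(0, 2, 10_000)

f1 = f1_score(y_true, y_pred)
mcc = matthews_corrcoef(y_true, y_pred)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
g_mean = np.sqrt(sensitivity * specificity)  # geometric mean of the two rates
print(f"F1={f1:.4f}  MCC={mcc:.4f}  G-mean={g_mean:.4f}")
```

Unlike raw accuracy, all three penalize a classifier that simply labels every pixel as background, which is why they are preferred for this imbalanced task.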
Affiliation(s)
- Lei Zhou: Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, SEIEE Building 2-427, No. 800, Dongchuan Road, Minhang District, Shanghai 200240, China
- Qi Yu: Shanghai General Hospital, Shanghai Jiao Tong University, Shanghai, China
- Xun Xu: Shanghai General Hospital, Shanghai Jiao Tong University, Shanghai, China
- Yun Gu: Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, SEIEE Building 2-427, No. 800, Dongchuan Road, Minhang District, Shanghai 200240, China
- Jie Yang: Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, SEIEE Building 2-427, No. 800, Dongchuan Road, Minhang District, Shanghai 200240, China
|
950
|
Antropova N, Huynh BQ, Giger ML. A deep feature fusion methodology for breast cancer diagnosis demonstrated on three imaging modality datasets. Med Phys 2017; 44:5162-5171. [PMID: 28681390 DOI: 10.1002/mp.12453] [Citation(s) in RCA: 201] [Impact Index Per Article: 28.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2017] [Revised: 06/12/2017] [Accepted: 06/25/2017] [Indexed: 12/13/2022] Open
Abstract
BACKGROUND Deep learning methods for radiomics/computer-aided diagnosis (CADx) are often limited by small datasets, long computation times, and the need for extensive image preprocessing. AIMS We aim to develop a breast CADx methodology that addresses these issues by exploiting the efficiency of pre-trained convolutional neural networks (CNNs) and using pre-existing handcrafted CADx features. MATERIALS & METHODS We present a methodology that extracts and pools low- to mid-level features using a pretrained CNN and fuses them with handcrafted radiomic features computed using conventional CADx methods. Our methodology is tested on three different clinical imaging modalities: dynamic contrast-enhanced MRI (690 cases), full-field digital mammography (245 cases), and ultrasound (1125 cases). RESULTS From ROC analysis, our fusion-based method demonstrates, on all three imaging modalities, statistically significant improvements in AUC over previous breast cancer CADx methods in the task of distinguishing between malignant and benign lesions: DCE-MRI, AUC = 0.89 (se = 0.01); FFDM, AUC = 0.86 (se = 0.01); and ultrasound, AUC = 0.90 (se = 0.01). DISCUSSION/CONCLUSION We propose a novel breast CADx methodology that characterizes breast lesions more effectively than existing methods. Furthermore, the proposed methodology is computationally efficient and circumvents the need for image preprocessing.
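Feature-level fusion of this kind reduces, at its simplest, to concatenating the pooled CNN features with the handcrafted radiomic features and training a conventional classifier. A scikit-learn sketch under that assumption (random stand-in features and labels; the paper's extraction and classifier details may differ):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Hypothetical stand-ins: pooled pretrained-CNN features and handcrafted
# radiomic features for the same 200 lesions (the paper's sets are larger).
cnn_feats = np.random.randn(200, 512)
radiomic_feats = np.random.randn(200, 30)
labels = np.random.randint(0, 2, 200)           # benign vs malignant

fused = np.hstack([cnn_feats, radiomic_feats])  # simple feature-level fusion
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
auc = cross_val_score(clf, fused, labels, cv=5, scoring="roc_auc")
print("cross-validated AUC:", auc.mean())
```

Standardizing before the SVM matters here because the CNN and radiomic features live on very different scales.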
Affiliation(s)
- Natalia Antropova: Department of Radiology, University of Chicago, 5841 S Maryland Ave., Chicago, IL 60637, USA
- Benjamin Q Huynh: Department of Radiology, University of Chicago, 5841 S Maryland Ave., Chicago, IL 60637, USA
- Maryellen L Giger: Department of Radiology, University of Chicago, 5841 S Maryland Ave., Chicago, IL 60637, USA
|