301
Pesteie M, Abolmaesumi P, Rohling RN. Adaptive Augmentation of Medical Data Using Independently Conditional Variational Auto-Encoders. IEEE Trans Med Imaging 2019; 38:2807-2820. [PMID: 31059432] [DOI: 10.1109/tmi.2019.2914656]
Abstract
Current deep supervised learning methods typically require large amounts of labeled data for training. Since there is a significant cost associated with clinical data acquisition and labeling, medical datasets used for training these models are relatively small in size. In this paper, we aim to alleviate this limitation by proposing a variational generative model along with an effective data augmentation approach that utilizes the generative model to synthesize data. In our approach, the model learns the probability distribution of image data conditioned on a latent variable and the corresponding labels. The trained model can then be used to synthesize new images for data augmentation. We demonstrate the effectiveness of the approach on two independent clinical datasets consisting of ultrasound images of the spine and magnetic resonance images of the brain. For the spine dataset, a baseline and a residual model achieve an accuracy of 85% and 92%, respectively, using our method, compared to 78% and 83% using a conventional training approach for the image classification task. For the brain dataset, a baseline and a U-net network achieve Dice coefficients of 84% and 88%, respectively, for tumor segmentation, compared to 80% and 83% for the conventional training approach.
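As an illustration of the label-conditioned generative idea described in this abstract, here is a minimal PyTorch sketch of a conditional variational auto-encoder; the layer sizes, names, and loss are illustrative assumptions, not the authors' ICVAE implementation.

```python
# Minimal conditional VAE sketch (illustrative; not the authors' exact model).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalVAE(nn.Module):
    def __init__(self, img_dim=64 * 64, n_classes=2, latent_dim=32):
        super().__init__()
        self.enc = nn.Linear(img_dim + n_classes, 256)
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)
        self.dec = nn.Sequential(
            nn.Linear(latent_dim + n_classes, 256), nn.ReLU(),
            nn.Linear(256, img_dim), nn.Sigmoid())

    def forward(self, x, y):  # x: (B, img_dim), y: one-hot labels (B, n_classes)
        h = F.relu(self.enc(torch.cat([x, y], dim=1)))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.dec(torch.cat([z, y], dim=1)), mu, logvar

def cvae_loss(recon, x, mu, logvar):
    bce = F.binary_cross_entropy(recon, x, reduction="sum")      # reconstruction
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())  # KL to N(0, I)
    return bce + kld

# Augmentation: sample z ~ N(0, I), pick a class label, decode a synthetic image.
```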
302
Wu JY, Zhao ZZ, Zhang WY, Liang M, Ou B, Yang HY, Luo BM. Computer-Aided Diagnosis of Solid Breast Lesions With Ultrasound: Factors Associated With False-negative and False-positive Results. J Ultrasound Med 2019; 38:3193-3202. [PMID: 31077414] [DOI: 10.1002/jum.15020]
Abstract
OBJECTIVES To investigate factors that may lead to false-positive or false-negative results in a computer-aided diagnostic system (S-Detect; Samsung Medison Co, Ltd, Seoul, Korea) for ultrasound (US) examinations of solid breast lesions. METHODS This prospective study was approved by the Institutional Review Board of Sun Yat-sen Memorial Hospital. All patients provided written informed consent before biopsy or surgery. From September 2017 to May 2018, 269 consecutive women with 338 solid breast lesions were included. All lesions were examined with US and S-Detect before biopsy or surgical excision. The final US assessments made by radiologists and S-Detect were matched to the pathologic results. Patient and lesion factors in the "true" and "false" S-Detect groups were compared, and multivariate logistic regression analyses were used to identify the factors associated with false S-Detect results. RESULTS The mean age of the patients ± SD was 42.6 ± 12.9 years (range, 18-77 years). Of the 338 lesions, 209 (61.8%) were benign, and 129 (38.2%) were malignant. Larger lesions, the presence of lesion calcifications detected by B-mode US, and vascularity grades of 2 and 3 according to Adler et al (Ultrasound Med Biol 1990; 16:553-559) were significantly associated with false-positive S-Detect results (odds ratio [OR], 1.071, P = .006; OR, 5.851, P = .001; OR, 1.726, P = .009, respectively). Smaller lesions and the absence of calcifications on B-mode US in malignant solid breast lesions were significantly associated with false-negative S-Detect results (OR, 1.141, P = .015; OR, 7.434, P = .016). CONCLUSIONS Larger benign lesions, the presence of lesion calcifications, and high degrees of vascularity are likely to show false-positive S-Detect results. Smaller malignant lesions and the absence of calcifications are likely to show false-negative S-Detect results.
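For readers who want to reproduce this kind of factor analysis, the following sketch shows a multivariate logistic regression with statsmodels; the variable names and synthetic data are illustrative assumptions, not the study's data.

```python
# Sketch of a multivariate logistic regression for factors associated with
# false CAD results (features and data are illustrative, not the study's).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "lesion_size_mm": rng.normal(15, 5, 200),
    "calcification": rng.integers(0, 2, 200),   # seen on B-mode US (0/1)
    "adler_grade": rng.integers(0, 4, 200),     # Adler vascularity grade 0-3
    "false_result": rng.integers(0, 2, 200),    # 1 = CAD disagrees with pathology
})
X = sm.add_constant(df[["lesion_size_mm", "calcification", "adler_grade"]])
fit = sm.Logit(df["false_result"], X).fit(disp=0)
print(fit.summary())
print("Odds ratios:\n", np.exp(fit.params))     # OR per unit increase in each factor
```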
Affiliation(s)
- Jia-Yi Wu, Zi-Zhuo Zhao, Wen-Yue Zhang, Ming Liang, Bing Ou, Hai-Yun Yang, Bao-Ming Luo: Department of Ultrasound, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
303
Costa MGF, Campos JPM, de Aquino E Aquino G, de Albuquerque Pereira WC, Costa Filho CFF. Evaluating the performance of convolutional neural networks with direct acyclic graph architectures in automatic segmentation of breast lesion in US images. BMC Med Imaging 2019; 19:85. [PMID: 31703642] [PMCID: PMC6839157] [DOI: 10.1186/s12880-019-0389-2]
Abstract
Background Outlining lesion contours in ultrasound (US) breast images is an important step in breast cancer diagnosis. Malignant lesions infiltrate the surrounding tissue, generating irregular contours with spiculation and angulated margins, whereas benign lesions produce contours with a smooth outline and elliptical shape. In breast imaging, the majority of existing publications focus on using convolutional neural networks (CNNs) for segmentation and classification of lesions in mammographic images. In this study, our main objective is to assess the ability of CNNs to detect contour irregularities in breast lesions in US images. Methods We compare the performance of two CNNs with a directed acyclic graph (DAG) architecture and one CNN with a series architecture for breast lesion segmentation in US images. DAG and series architectures are both feedforward networks; the difference is that a DAG architecture can have more than one path between the first layer and the last layer, whereas a series architecture has only one. The CNN architectures were evaluated on two datasets. Results With the more complex DAG architecture, the following mean values were obtained for the metrics used to evaluate the segmented contours: global accuracy, 0.956; IoU, 0.876; F measure, 68.77%; Dice coefficient, 0.892. Conclusion The DAG CNN architecture shows the best values for the metrics used to quantitatively evaluate the segmented contours against the gold-standard contours. The segmented contours obtained with this architecture also show more details and irregularities, like the gold-standard contours.
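The overlap metrics reported above (global accuracy, IoU, Dice) can be computed from binary masks as in this short NumPy sketch.

```python
# Overlap metrics for comparing a predicted segmentation mask against the
# gold-standard mask (both binary NumPy arrays of the same shape).
import numpy as np

def iou(pred, gt):
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union else 1.0

def dice(pred, gt):
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

def global_accuracy(pred, gt):
    # Fraction of pixels labeled identically in prediction and ground truth.
    return np.mean(pred.astype(bool) == gt.astype(bool))
```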
Affiliation(s)
- Marly Guimarães Fernandes Costa, João Paulo Mendes Campos, Gustavo de Aquino E Aquino, Cícero Ferreira Fernandes Costa Filho: Centro de Tecnologia Eletrônica e da Informação/Universidade Federal do Amazonas, Av. General Rodrigo Otávio Jordão Ramos, 3000, Aleixo, Campus Universitário - Setor Norte, Pavilhão Ceteli, Manaus, AM, CEP: 69077-000, Brazil
304
A deep supervised approach for ischemic lesion segmentation from multimodal MRI using Fully Convolutional Network. Appl Soft Comput 2019. [DOI: 10.1016/j.asoc.2019.105685]
305
Zhu H, Zhao H, Song C, Bian Z, Bi Y, Liu T, He X, Yang D, Cai W. MR-Forest: A Deep Decision Framework for False Positive Reduction in Pulmonary Nodule Detection. IEEE J Biomed Health Inform 2019; 24:1652-1663. [PMID: 31634145] [DOI: 10.1109/jbhi.2019.2947506]
Abstract
With the development of deep learning methods such as convolutional neural networks (CNNs), the accuracy of automated pulmonary nodule detection has been greatly improved. However, the high computational and storage costs of large-scale networks are a potential concern for future widespread clinical application. In this paper, an alternative multi-ringed (MR)-Forest framework, in contrast to resource-consuming neural network (NN)-based architectures, is proposed for false positive reduction in pulmonary nodule detection. It consists of three steps. First, a novel multi-ringed scanning method is used to extract the order ring facets (ORFs) from the surface voxels of the volumetric nodule models. Second, mesh-LBP and mapping deformation are employed to estimate texture and shape features; by sliding and resampling the multi-ringed ORFs, feature volumes of different lengths are generated. Finally, the multi-level outputs are cascaded to predict the candidate class. On 1034 scans merging the dataset from the Affiliated Hospital of Liaoning University of Traditional Chinese Medicine (AH-LUTCM) and the LUNA16 Challenge dataset, our framework is competitive with the state of the art in the false positive reduction task (CPM score of 0.865). Experimental results demonstrate that MR-Forest is a successful solution that balances resource consumption and effectiveness for automated pulmonary nodule detection. The proposed MR-Forest is a general architecture for 3D target detection; it can easily be extended to many other medical image analysis tasks in which the growth trend of the target object approximates a spheroidal expansion.
306
Tao C, Chen K, Han L, Peng Y, Li C, Hua Z, Lin J. New one-step model of breast tumor locating based on deep learning. J Xray Sci Technol 2019; 27:839-856. [PMID: 31306148] [DOI: 10.3233/xst-190548]
Affiliation(s)
- Chao Tao, Ke Chen, Lin Han, Jiangli Lin: Department of Biomedical Engineering, College of Materials Science and Engineering, Sichuan University, Chengdu, China
- Yulan Peng: Department of Ultrasound, West China Hospital of Sichuan University, Chengdu, China
- Cheng Li, Zhan Hua: China-Japan Friendship Hospital, Beijing, China
307
Fukuda M, Inamoto K, Shibata N, Ariji Y, Yanashita Y, Kutsuna S, Nakata K, Katsumata A, Fujita H, Ariji E. Evaluation of an artificial intelligence system for detecting vertical root fracture on panoramic radiography. Oral Radiol 2019; 36:337-343. [PMID: 31535278] [DOI: 10.1007/s11282-019-00409-x]
Abstract
OBJECTIVES The aim of this study was to evaluate the use of a convolutional neural network (CNN) system for detecting vertical root fracture (VRF) on panoramic radiography. METHODS Three hundred panoramic images containing a total of 330 VRF teeth with clearly visible fracture lines were selected from our hospital imaging database. Confirmation of VRF lines was performed by two radiologists and one endodontist. Eighty percent (240 images) of the 300 images were assigned to a training set and 20% (60 images) to a test set. A CNN-based deep learning model for the detection of VRFs was built using DetectNet with DIGITS version 5.0. To guard against test data selection bias and increase reliability, fivefold cross-validation was performed. Diagnostic performance was evaluated using recall, precision, and F measure. RESULTS Of the 330 VRFs, 267 were detected. Twenty teeth without fractures were falsely detected. Recall was 0.75, precision 0.93, and F measure 0.83. CONCLUSIONS The CNN model shows promise as a computer-aided detection (CAD) tool for VRFs on panoramic images.
Affiliation(s)
- Motoki Fukuda, Yoshiko Ariji, Eiichiro Ariji: Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, 2-11 Suemori-dori, Chikusa-ku, Nagoya, 464-8651, Japan
- Kyoko Inamoto, Naoki Shibata, Kazuhiko Nakata: Department of Endodontics, Aichi-Gakuin University School of Dentistry, Nagoya, Japan
- Yudai Yanashita, Shota Kutsuna, Hiroshi Fujita: Department of Electrical, Electronic and Computer Engineering, Faculty of Engineering, Gifu University, Gifu, Japan
308
Fujisawa Y, Inoue S, Nakamura Y. The Possibility of Deep Learning-Based, Computer-Aided Skin Tumor Classifiers. Front Med (Lausanne) 2019; 6:191. [PMID: 31508420] [PMCID: PMC6719629] [DOI: 10.3389/fmed.2019.00191]
Abstract
The incidence of skin tumors has steadily increased. Although most are benign and do not affect survival, some of the more malignant skin tumors present a lethal threat if a delay in diagnosis permits them to become advanced. Ideally, inspection by an expert dermatologist would accurately detect malignant skin tumors in the early stage; however, it is not practical for every patient to receive intensive screening by dermatologists. To overcome this issue, many studies are ongoing to develop dermatologist-level computer-aided diagnostics. Although many systems that can classify dermoscopic images at a dermatologist-equivalent level have been reported, far fewer systems that can classify conventional clinical images have been reported thus far. Recently, the introduction of deep learning technology, a method that automatically extracts a set of representative features for further classification, has dramatically improved classification efficacy. This new technology has the potential to improve the computer classification accuracy of conventional clinical images to the level of skilled dermatologists. In this review, we summarize this new technology and the present development of computer-aided skin tumor classifiers.
310
Zhuang Z, Li N, Joseph Raj AN, Mahesh VGV, Qiu S. An RDAU-NET model for lesion segmentation in breast ultrasound images. PLoS One 2019; 14:e0221535. [PMID: 31442268] [PMCID: PMC6707567] [DOI: 10.1371/journal.pone.0221535]
Abstract
Breast cancer is a common gynecological disease that poses a great threat to women's health due to its high malignancy rate. Breast cancer screening tests are used to find warning signs or symptoms for early detection, and ultrasound screening is currently the preferred method for breast cancer diagnosis. The localization and segmentation of lesions in breast ultrasound (BUS) images are helpful for clinical diagnosis of the disease. In this paper, an RDAU-NET (Residual-Dilated-Attention-Gate-UNet) model is proposed and employed to segment the tumors in BUS images. The model is based on the conventional U-Net, but the plain neural units are replaced with residual units to enhance edge information and overcome the network performance degradation problem associated with deep networks. To increase the receptive field and acquire more characteristic information, dilated convolutions were used to process the feature maps obtained from the encoder stages. The traditional cropping and copying between the encoder-decoder pipelines were replaced by attention gate modules, which enhance learning capability through suppression of background information. The model, when tested on BUS images with benign and malignant tumors, presented excellent segmentation results compared with other deep networks. A variety of quantitative indicators, including accuracy, Dice coefficient, AUC (area under the curve), precision, sensitivity, specificity, recall, F1 score, and M-IoU (mean intersection over union), provided performances above 80%. The experimental results illustrate that the proposed RDAU-NET model can accurately segment breast lesions when compared with other deep learning models and thus has good prospects for clinical diagnosis.
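The two building blocks named in this abstract, residual units with dilated convolutions and attention gates on the skip connections, can be sketched in PyTorch as follows; channel counts and wiring are assumptions, not the exact RDAU-NET configuration.

```python
# Sketches of a residual unit with dilated convolutions and an attention gate
# (illustrative shapes; not the authors' exact RDAU-NET configuration).
import torch
import torch.nn as nn

class ResidualDilatedBlock(nn.Module):
    def __init__(self, ch, dilation=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=dilation, dilation=dilation),
            nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=dilation, dilation=dilation),
            nn.BatchNorm2d(ch))

    def forward(self, x):
        return torch.relu(x + self.body(x))  # identity shortcut around the body

class AttentionGate(nn.Module):
    """Suppresses background activations in the encoder skip feature x using
    the gating signal g (assumed already upsampled to x's spatial size)."""
    def __init__(self, ch_x, ch_g, ch_mid):
        super().__init__()
        self.wx = nn.Conv2d(ch_x, ch_mid, 1)
        self.wg = nn.Conv2d(ch_g, ch_mid, 1)
        self.psi = nn.Conv2d(ch_mid, 1, 1)

    def forward(self, x, g):
        a = torch.sigmoid(self.psi(torch.relu(self.wx(x) + self.wg(g))))
        return x * a  # attention-weighted skip feature
```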
Affiliation(s)
- Zhemin Zhuang, Nan Li, Alex Noel Joseph Raj: Key Laboratory of Digital Signal and Image Processing of Guangdong Province, Department of Electronic Engineering, Shantou University, Shantou, Guangdong, China
- Vijayalakshmi G. V. Mahesh: Department of Electronics and Communication Engineering, BMS Institute of Technology and Management, Bengaluru, Karnataka, India
- Shunmin Qiu: Imaging Department, First Hospital of Medical College of Shantou University, Shantou, Guangdong, China
311
Zhang E, Seiler S, Chen M, Lu W, Gu X. Boundary-aware Semi-supervised Deep Learning for Breast Ultrasound Computer-Aided Diagnosis. Annu Int Conf IEEE Eng Med Biol Soc 2019; 2019:947-950. [PMID: 31946050] [DOI: 10.1109/embc.2019.8856539]
Abstract
Breast ultrasound (US) is an effective imaging modality for breast cancer diagnosis. US computer-aided diagnosis (CAD) systems have been developed for decades and have employed either conventional handcrafted features or modern automatically learned deep features, the former relying on clinical experience and the latter demanding large datasets. In this paper, we developed a novel boundary-aware semi-supervised deep learning (BASDL) method that integrates clinically approved breast lesion boundary characteristics (features) into semi-supervised deep learning (SDL) to achieve accurate diagnosis with a small training dataset. Original breast US images are converted to boundary-oriented feature maps (BFMs) using a distance transformation coupled with a Gaussian filter. The converted BFMs are then used as the input of the SDL network, which is characterized as lesion-classification-guided unsupervised image reconstruction based on a stacked convolutional auto-encoder (SCAE). We compared the performance of BASDL with the conventional SCAE method and the SDL method that used the original images as inputs, as well as the SCAE method that used BFMs as inputs. Experimental results on two breast US datasets show that BASDL ranked best among the four networks, with a classification accuracy around 92.00 ± 2.38%, which indicates that BASDL could be promising for effective breast US lesion CAD using small datasets.
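The BFM conversion described here, a distance transformation coupled with a Gaussian filter, can be approximated with SciPy as below; the signed-distance formulation and the sigma value are assumptions, since the paper's exact recipe may differ.

```python
# Sketch of a boundary-oriented feature map (BFM): distance transform of a
# lesion mask smoothed with a Gaussian filter (parameters are assumptions).
import numpy as np
from scipy.ndimage import distance_transform_edt, gaussian_filter

def boundary_feature_map(mask, sigma=3.0):
    """mask: binary lesion mask (H, W) with values 0/1. Returns a smooth map
    that emphasizes distance to the lesion boundary."""
    inside = distance_transform_edt(mask)        # distance to background
    outside = distance_transform_edt(1 - mask)   # distance to lesion
    signed = inside - outside                    # signed distance to the boundary
    return gaussian_filter(signed, sigma=sigma)  # Gaussian smoothing
```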
313
A Novel Multispace Image Reconstruction Method for Pathological Image Classification Based on Structural Information. Biomed Res Int 2019; 2019:3530903. [PMID: 31111048] [PMCID: PMC6487174] [DOI: 10.1155/2019/3530903]
Abstract
Pathological image classification is of great importance in various biomedical applications, such as lesion detection, cancer subtype identification, and pathological grading. To this end, this paper proposes a novel classification framework using multispace image reconstruction inputs and transfer learning. Specifically, a multispace image reconstruction method was first developed to generate a new image containing three channels composed of gradient, gray-level co-occurrence matrix (GLCM), and local binary pattern (LBP) spaces, respectively. Then, the pretrained VGG-16 net was utilized to extract high-level semantic features of the original (RGB) and reconstructed images. Subsequently, a long short-term memory (LSTM) layer was used for feature selection and refinement while increasing discrimination capability. Finally, the classification task was performed via a softmax classifier. The framework was evaluated on a publicly available microscopy image dataset of IICBU malignant lymphoma. Experimental results demonstrated the performance advantages of the proposed classification framework in comparison with related works.
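A rough scikit-image sketch of such a three-channel multispace reconstruction (gradient, GLCM, and LBP spaces) is shown below; the window sizes, quantization levels, and the blockwise GLCM approximation are illustrative assumptions, not the paper's exact procedure.

```python
# Sketch of a three-channel "multispace" image: gradient magnitude, blockwise
# GLCM contrast, and LBP codes (block size and levels are assumptions).
import numpy as np
from skimage.filters import sobel
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern
from skimage.transform import resize

def multispace_image(gray, block=16, levels=32):
    gray = gray.astype(np.float64)
    grad = sobel(gray)                                            # gradient space
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")  # LBP space
    # Quantize intensities for the GLCM, then compute contrast per block.
    q = np.digitize(gray, np.linspace(gray.min(), gray.max(), levels)) - 1
    h, w = gray.shape
    glcm = np.zeros((h // block, w // block))
    for i in range(h // block):                                   # GLCM space
        for j in range(w // block):
            patch = q[i*block:(i+1)*block, j*block:(j+1)*block].astype(np.uint8)
            m = graycomatrix(patch, [1], [0], levels=levels,
                             symmetric=True, normed=True)
            glcm[i, j] = graycoprops(m, "contrast")[0, 0]
    glcm = resize(glcm, gray.shape)
    norm = lambda a: (a - a.min()) / (np.ptp(a) + 1e-8)
    return np.dstack([norm(grad), norm(glcm), norm(lbp)])         # (H, W, 3)
```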
314
Ciritsis A, Rossi C, Eberhard M, Marcon M, Becker AS, Boss A. Automatic classification of ultrasound breast lesions using a deep convolutional neural network mimicking human decision-making. Eur Radiol 2019; 29:5458-5468. [PMID: 30927100] [DOI: 10.1007/s00330-019-06118-7]
Abstract
OBJECTIVES To evaluate a deep convolutional neural network (dCNN) for detection, highlighting, and classification of ultrasound (US) breast lesions mimicking human decision-making according to the Breast Imaging Reporting and Data System (BI-RADS). METHODS AND MATERIALS One thousand nineteen breast ultrasound images from 582 patients (age 56.3 ± 11.5 years) were linked to the corresponding radiological report. Lesions were categorized into the following classes: no tissue, normal breast tissue, BI-RADS 2 (cysts, lymph nodes), BI-RADS 3 (non-cystic mass), and BI-RADS 4-5 (suspicious). To test the accuracy of the dCNN, one internal dataset (101 images) and one external test dataset (43 images) were evaluated by the dCNN and two independent readers. Radiological reports, histopathological results, and follow-up examinations served as reference. The performances of the dCNN and the humans were quantified in terms of classification accuracies and receiver operating characteristic (ROC) curves. RESULTS In the internal test dataset, the classification accuracy of the dCNN differentiating BI-RADS 2 from BI-RADS 3-5 lesions was 87.1% (external 93.0%), compared with 79.2 ± 1.9% (external 95.3 ± 2.3%) for the human readers. For the classification of BI-RADS 2-3 versus BI-RADS 4-5, the dCNN reached a classification accuracy of 93.1% (external 95.3%), whereas the classification accuracy of the humans was 91.6 ± 5.4% (external 94.1 ± 1.2%). The AUC on the internal dataset was 83.8 (external 96.7) for the dCNN and 84.6 ± 2.3 (external 90.9 ± 2.9) for the humans. CONCLUSION dCNNs may be used to mimic human decision-making in the evaluation of single US images of breast lesions according to the BI-RADS catalog. The technique reaches high accuracies and may serve for standardization of highly observer-dependent US assessment. KEY POINTS • Deep convolutional neural networks could be used to classify US breast lesions. • The implemented dCNN with its sliding window approach reaches high accuracies in the classification of US breast lesions. • Deep convolutional neural networks may serve for standardization in US BI-RADS classification.
Affiliation(s)
- Alexander Ciritsis, Cristina Rossi, Matthias Eberhard, Magda Marcon, Anton S Becker, Andreas Boss: Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, Rämistrasse 100, 8091 Zurich, Switzerland
315
Singla N, Dubey K, Srivastava V. Automated assessment of breast cancer margin in optical coherence tomography images via pretrained convolutional neural network. J Biophotonics 2019; 12:e201800255. [PMID: 30318761] [DOI: 10.1002/jbio.201800255]
Abstract
The benchmark method for the evaluation of breast cancers involves microscopic testing of a hematoxylin and eosin (H&E)-stained tissue biopsy. Re-surgery is required in 20% to 30% of cases because of incomplete excision of malignant tissue. Therefore, a more accurate method of detecting the cancer margin is required to avoid the risk of recurrence. In recent years, convolutional neural networks (CNNs) have achieved excellent performance in medical image diagnosis: they automatically extract features from images and classify them. In the proposed study, we apply a pretrained Inception-v3 CNN with reverse active learning to the classification of healthy and malignant breast tissue using optical coherence tomography (OCT) images. The proposed method attained a sensitivity, specificity, and accuracy of 90.2%, 91.7%, and 90%, respectively, on testing datasets collected from 48 patients (22 normal fibro-adipose tissues and 26 invasive ductal carcinoma tissues). The trained network is then used for breast cancer margin assessment to predict tumors with negative margins. Additionally, the network output is correlated with the corresponding histology image. Our results lay the foundation for the proposed method to be used for automatic intraoperative identification of breast cancer margins in real time and to guide core needle biopsies.
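The transfer learning setup described here can be reproduced with torchvision along the following lines; this is a standard Inception-v3 fine-tuning recipe under assumed class counts (recent torchvision weights API), and the paper's reverse active learning step is not shown.

```python
# Sketch of Inception-v3 transfer learning for two-class tissue classification
# (standard torchvision recipe; reverse active learning not included).
import torch
import torch.nn as nn
from torchvision import models

model = models.inception_v3(weights=models.Inception_V3_Weights.IMAGENET1K_V1)
for p in model.parameters():          # freeze the pretrained backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)                      # healthy vs malignant
model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, 2)  # auxiliary head

model.eval()                          # eval mode returns plain logits
with torch.no_grad():
    logits = model(torch.randn(1, 3, 299, 299))  # OCT image resized to 299x299
print(logits.shape)                   # torch.Size([1, 2])
```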
Affiliation(s)
- Neeru Singla, Kavita Dubey: Department of Electrical and Instrumentation Engineering, Thapar Institute of Engineering and Technology, Patiala, Punjab, India
- Vishal Srivastava: Department of Electrical and Instrumentation Engineering, Thapar Institute of Engineering and Technology, Patiala, Punjab, India; Department of Electrical and Computer Engineering, University of California Los Angeles, Los Angeles, California
316
Bi WL, Hosny A, Schabath MB, Giger ML, Birkbak NJ, Mehrtash A, Allison T, Arnaout O, Abbosh C, Dunn IF, Mak RH, Tamimi RM, Tempany CM, Swanton C, Hoffmann U, Schwartz LH, Gillies RJ, Huang RY, Aerts HJWL. Artificial intelligence in cancer imaging: Clinical challenges and applications. CA Cancer J Clin 2019; 69:127-157. [PMID: 30720861] [PMCID: PMC6403009] [DOI: 10.3322/caac.21552]
Abstract
Judgement, as one of the core tenets of medicine, relies upon the integration of multilayered data with nuanced decision making. Cancer offers a unique context for medical decisions given not only its variegated forms with evolution of disease but also the need to take into account the individual condition of patients, their ability to receive treatment, and their responses to treatment. Challenges remain in the accurate detection, characterization, and monitoring of cancers despite improved technologies. Radiographic assessment of disease most commonly relies upon visual evaluations, the interpretations of which may be augmented by advanced computational analyses. In particular, artificial intelligence (AI) promises to make great strides in the qualitative interpretation of cancer imaging by expert clinicians, including volumetric delineation of tumors over time, extrapolation of the tumor genotype and biological course from its radiographic phenotype, prediction of clinical outcome, and assessment of the impact of disease and treatment on adjacent organs. AI may automate processes in the initial interpretation of images and shift the clinical workflow of radiographic detection, management decisions on whether or not to administer an intervention, and subsequent observation to a yet to be envisioned paradigm. Here, the authors review the current state of AI as applied to medical imaging of cancer and describe advances in 4 tumor types (lung, brain, breast, and prostate) to illustrate how common clinical problems are being addressed. Although most studies evaluating AI applications in oncology to date have not been vigorously validated for reproducibility and generalizability, the results do highlight increasingly concerted efforts in pushing AI technology to clinical use and to impact future directions in cancer care.
Affiliation(s)
- Wenya Linda Bi: Department of Neurosurgery, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA
- Ahmed Hosny: Department of Radiation Oncology, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA
- Matthew B. Schabath: Department of Cancer Epidemiology, H. Lee Moffitt Cancer Center and Research Institute, Tampa, FL
- Maryellen L. Giger: Department of Radiology, University of Chicago, Chicago, IL
- Nicolai J. Birkbak: The Francis Crick Institute, London, United Kingdom; University College London Cancer Institute, London, United Kingdom
- Alireza Mehrtash: Department of Radiology, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA; Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
- Tavis Allison: Department of Radiology, Columbia University College of Physicians and Surgeons, New York, NY; Department of Radiology, New York Presbyterian Hospital, New York, NY
- Omar Arnaout: Department of Neurosurgery, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA
- Christopher Abbosh: The Francis Crick Institute, London, United Kingdom; University College London Cancer Institute, London, United Kingdom
- Ian F. Dunn: Department of Neurosurgery, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA
- Raymond H. Mak: Department of Radiation Oncology, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA
- Rulla M. Tamimi: Department of Medicine, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA
- Clare M. Tempany: Department of Radiology, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA
- Charles Swanton: The Francis Crick Institute, London, United Kingdom; University College London Cancer Institute, London, United Kingdom
- Udo Hoffmann: Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA
- Lawrence H. Schwartz: Department of Radiology, Columbia University College of Physicians and Surgeons, New York, NY; Department of Radiology, New York Presbyterian Hospital, New York, NY
- Robert J. Gillies: Department of Cancer Physiology, H. Lee Moffitt Cancer Center and Research Institute, Tampa, FL
- Raymond Y. Huang: Department of Radiology, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA
- Hugo J. W. L. Aerts: Departments of Radiation Oncology and Radiology, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA; Radiology and Nuclear Medicine, GROW, Maastricht University Medical Centre (MUMC+), Maastricht, The Netherlands
317
Shin SY, Lee S, Yun ID, Kim SM, Lee KM. Joint Weakly and Semi-Supervised Deep Learning for Localization and Classification of Masses in Breast Ultrasound Images. IEEE Trans Med Imaging 2019; 38:762-774. [PMID: 30273145] [DOI: 10.1109/tmi.2018.2872031]
Abstract
We propose a framework for localization and classification of masses in breast ultrasound images. We have experimentally found that training convolutional neural network-based mass detectors with large, weakly annotated datasets presents a non-trivial problem, while overfitting may occur with those trained with small, strongly annotated datasets. To overcome these problems, we use a weakly annotated dataset together with a smaller strongly annotated dataset in a hybrid manner. We propose a systematic weakly and semi-supervised training scenario with appropriate training loss selection. Experimental results show that the proposed method can successfully localize and classify masses with less annotation effort. Results trained with only 10 strongly annotated images along with weakly annotated images were comparable to results trained from 800 strongly annotated images, with a 95% confidence interval (CI) of the difference of -3% to 5%, in terms of the correct localization (CorLoc) measure, which is the ratio of images whose intersection over union with the ground truth is higher than 0.5. With the same number of strongly annotated images, incorporating additional weakly annotated images gave a 4.5-percentage-point increase in CorLoc, from 80% to 84.50% (95% CIs, 76%-83.75% and 81%-88%). The effects of different algorithmic details and varied amounts of data are presented through ablative analysis.
318
Wu GG, Zhou LQ, Xu JW, Wang JY, Wei Q, Deng YB, Cui XW, Dietrich CF. Artificial intelligence in breast ultrasound. World J Radiol 2019; 11:19-26. [PMID: 30858931] [PMCID: PMC6403465] [DOI: 10.4329/wjr.v11.i2.19]
Abstract
Artificial intelligence (AI) is gaining extensive attention for its excellent performance in image-recognition tasks and is increasingly applied in breast ultrasound. AI can conduct quantitative assessments by recognizing imaging information automatically and can make more accurate and reproducible imaging diagnoses. Breast cancer is the most commonly diagnosed cancer in women and severely threatens women's health, and early screening is closely related to patient prognosis. Therefore, the use of AI in breast cancer screening and detection is of great significance: it can not only save time for radiologists but also compensate for deficiencies in experience and skill among beginners. This article illustrates basic technical knowledge regarding AI in breast ultrasound, including early machine learning algorithms and deep learning algorithms, and their application in the differential diagnosis of benign and malignant masses. Finally, we discuss future perspectives of AI in breast ultrasound.
Affiliation(s)
- Ge-Ge Wu, Li-Qiang Zhou, Jia-Yu Wang, Qi Wei, You-Bin Deng, Xin-Wu Cui: Sino-German Tongji-Caritas Research Center of Ultrasound in Medicine, Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, Hubei Province, China
- Jian-Wei Xu: Department of Ultrasound, The First Affiliated Hospital of Zhengzhou University, Zhengzhou 450052, Henan Province, China
- Christoph F Dietrich: Sino-German Tongji-Caritas Research Center of Ultrasound in Medicine, Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, Hubei Province, China; Medical Clinic 2, Caritas-Krankenhaus Bad Mergentheim, Academic Teaching Hospital of the University of Würzburg, Würzburg 97980, Germany
319
Soffer S, Ben-Cohen A, Shimon O, Amitai MM, Greenspan H, Klang E. Convolutional Neural Networks for Radiologic Images: A Radiologist's Guide. Radiology 2019; 290:590-606. [PMID: 30694159] [DOI: 10.1148/radiol.2018180547]
Abstract
Deep learning has rapidly advanced in various fields within the past few years and has recently gained particular attention in the radiology community. This article provides an introduction to deep learning technology and presents the stages entailed in the design process of deep learning radiology research. In addition, the article details the results of a survey of the application of deep learning (specifically, convolutional neural networks) to radiologic imaging, focused on five major organ systems: chest, breast, brain, musculoskeletal system, and abdomen and pelvis. The survey of the studies is followed by a discussion of current challenges and future trends and their potential implications for radiology. This article may be used as a guide for radiologists planning research in the field of radiologic image analysis using convolutional neural networks.
Affiliation(s)
- Shelly Soffer, Avi Ben-Cohen, Orit Shimon, Michal Marianne Amitai, Hayit Greenspan, Eyal Klang: Department of Diagnostic Imaging, Sheba Medical Center, Emek HaEla St 1, Ramat Gan, Israel (S.S., M.M.A., E.K.); Faculty of Engineering, Department of Biomedical Engineering, Medical Image Processing Laboratory, Tel Aviv University, Tel Aviv, Israel (A.B., H.G.); Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel (S.S., O.S.)
320
Huang Y, Han L, Dou H, Luo H, Yuan Z, Liu Q, Zhang J, Yin G. Two-stage CNNs for computerized BI-RADS categorization in breast ultrasound images. Biomed Eng Online 2019; 18:8. [PMID: 30678680] [PMCID: PMC6346503] [DOI: 10.1186/s12938-019-0626-5]
Abstract
BACKGROUND Quantizing the Breast Imaging Reporting and Data System (BI-RADS) criteria into different categories with the single ultrasound modality has always been a challenge. To achieve this, we proposed a two-stage grading system based on convolutional neural networks (CNNs) to automatically evaluate breast tumors in ultrasound images into five categories. METHODS The newly developed automatic grading system consists of two stages: tumor identification and tumor grading. The tumor identification network, denoted ROI-CNN, identifies the region containing the tumor in the original breast ultrasound images. The subsequent grading network, denoted G-CNN, generates effective features for differentiating the identified regions of interest (ROIs) into five categories: Category "3", Category "4A", Category "4B", Category "4C", and Category "5". In particular, to make the regions predicted by the ROI-CNN fit the tumor more closely, a level-set-based refinement procedure was used as a bridge between the identification stage and the grading stage. RESULTS We tested the proposed two-stage grading system on 2238 cases with breast tumors in ultrasound images. With accuracy as the indicator, our automatic computerized grading of breast tumors exhibited performance comparable to the subjective categories determined by physicians. Experimental results show that our two-stage framework achieves an accuracy of 0.998 on Category "3", 0.940 on Category "4A", 0.734 on Category "4B", 0.922 on Category "4C", and 0.876 on Category "5". CONCLUSION The proposed scheme extracts effective features from breast ultrasound images for the final classification of breast tumors by decoupling the identification features and classification features with different CNNs. In addition, the proposed scheme extends the diagnosis of breast tumors in ultrasound images to five sub-categories according to BI-RADS rather than merely distinguishing malignant from benign.
Affiliation(s)
- Yunzhi Huang: Department of Biomedical Engineering, College of Materials Science and Engineering, Sichuan University, Chengdu, 610065, China; College of Electrical Engineering and Information Technology, Sichuan University, Chengdu, 610065, China
- Luyi Han, Qi Liu, Jiang Zhang: College of Electrical Engineering and Information Technology, Sichuan University, Chengdu, 610065, China
- Haoran Dou: National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Shenzhen University, Shenzhen, 518060, China
- Honghao Luo: Department of Ultrasound, West China Hospital of Sichuan University, Chengdu, 610041, China
- Zhen Yuan: Bioimaging Core, Faculty of Health Sciences, University of Macau, Macau SAR, China
- Guangfu Yin: Department of Biomedical Engineering, College of Materials Science and Engineering, Sichuan University, Chengdu, 610065, China
321
Byra M, Galperin M, Ojeda-Fournier H, Olson L, O'Boyle M, Comstock C, Andre M. Breast mass classification in sonography with transfer learning using a deep convolutional neural network and color conversion. Med Phys 2019; 46:746-755. [PMID: 30589947] [DOI: 10.1002/mp.13361]
Affiliation(s)
- Michal Byra: Department of Radiology, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093, USA; Department of Ultrasound, Institute of Fundamental Technological Research, Polish Academy of Sciences, Pawinskiego 5B, 02-106 Warsaw, Poland
- Haydee Ojeda-Fournier, Linda Olson, Mary O'Boyle, Michael Andre: Department of Radiology, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093, USA
322
Sahiner B, Pezeshk A, Hadjiiski LM, Wang X, Drukker K, Cha KH, Summers RM, Giger ML. Deep learning in medical imaging and radiation therapy. Med Phys 2019; 46:e1-e36. [PMID: 30367497] [PMCID: PMC9560030] [DOI: 10.1002/mp.13264]
Abstract
The goals of this review paper on deep learning (DL) in medical imaging and radiation therapy are to (a) summarize what has been achieved to date; (b) identify common and unique challenges, and strategies that researchers have taken to address these challenges; and (c) identify some of the promising avenues for the future both in terms of applications as well as technical innovations. We introduce the general principles of DL and convolutional neural networks, survey five major areas of application of DL in medical imaging and radiation therapy, identify common themes, discuss methods for dataset expansion, and conclude by summarizing lessons learned, remaining challenges, and future directions.
Affiliation(s)
- Berkman Sahiner, Aria Pezeshk, Kenny H. Cha: DIDSR/OSEL/CDRH, U.S. Food and Drug Administration, Silver Spring, MD 20993, USA
- Xiaosong Wang, Ronald M. Summers: Imaging Biomarkers and Computer-aided Diagnosis Lab, Radiology and Imaging Sciences, NIH Clinical Center, Bethesda, MD 20892-1182, USA
- Karen Drukker: Department of Radiology, University of Chicago, Chicago, IL 60637, USA
323
Yap MH, Goyal M, Osman FM, Martí R, Denton E, Juette A, Zwiggelaar R. Breast ultrasound lesions recognition: end-to-end deep learning approaches. J Med Imaging (Bellingham) 2019; 6:011007. [PMID: 30310824] [PMCID: PMC6177528] [DOI: 10.1117/1.jmi.6.1.011007]
Abstract
Multistage processing of automated breast ultrasound lesion recognition is dependent on the performance of prior stages. To improve the current state of the art, we propose end-to-end deep learning approaches using fully convolutional networks (FCNs), namely FCN-AlexNet, FCN-32s, FCN-16s, and FCN-8s, for semantic segmentation of breast lesions. We use pretrained models based on ImageNet and transfer learning to overcome the issue of data deficiency. We evaluate our results on two datasets, which consist of a total of 113 malignant and 356 benign lesions. To assess performance, we conduct fivefold cross-validation using the following split: 70% for training data, 10% for validation data, and 20% for testing data. The results showed that our proposed method performed better on benign lesions, with a top mean Dice score of 0.7626 with FCN-16s, than on malignant lesions, with a top mean Dice score of 0.5484 with FCN-8s. When considering the number of images with a Dice score > 0.5, 89.6% of the benign lesions were successfully segmented and correctly recognized, whereas 60.6% of the malignant lesions were successfully segmented and correctly recognized. We conclude the paper by addressing the future challenges of the work.
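The evaluation protocol described above (fivefold cross-validation with a 70/10/20 split) can be sketched with scikit-learn; carving one eighth of each 80% training fold yields the 10% validation share. Indices only, under assumed dataset sizes.

```python
# Sketch of fivefold cross-validation with a validation subset carved out of
# each training fold (index bookkeeping only; sizes are from the abstract).
import numpy as np
from sklearn.model_selection import KFold, train_test_split

images = np.arange(469)  # 113 malignant + 356 benign lesion images
kfold = KFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_val_idx, test_idx) in enumerate(kfold.split(images)):
    # 1/8 of the 80% train+val portion = 10% of the whole dataset.
    train_idx, val_idx = train_test_split(
        train_val_idx, test_size=0.125, random_state=0)
    print(f"fold {fold}: train={len(train_idx)} "
          f"val={len(val_idx)} test={len(test_idx)}")
```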
Affiliation(s)
- Moi Hoon Yap, Manu Goyal: Manchester Metropolitan University, School of Computing, Mathematics and Digital Technology, Faculty of Science and Engineering, Manchester, United Kingdom
- Fatima M. Osman: Sudan University of Science and Technology, Department of Computer Science, Khartoum, Sudan
- Robert Martí: University of Girona, Computer Vision and Robotics Institute, Girona, Spain
- Erika Denton, Arne Juette: Norfolk and Norwich University Hospitals Foundation Trust, Breast Imaging, Norwich, United Kingdom
- Reyer Zwiggelaar: Aberystwyth University, Department of Computer Science, Aberystwyth, United Kingdom
324
Qi X, Zhang L, Chen Y, Pi Y, Chen Y, Lv Q, Yi Z. Automated diagnosis of breast ultrasonography images using deep neural networks. Med Image Anal 2018; 52:185-198. [PMID: 30594771] [DOI: 10.1016/j.media.2018.12.006]
Abstract
Ultrasonography images of breast masses aid in the detection and diagnosis of breast cancer. Manually analyzing ultrasonography images is time-consuming, exhausting, and subjective, so automated analysis of such images is desired. In this study, we develop an automated breast cancer diagnosis model for ultrasonography images. Traditional methods of automated ultrasonography image analysis employ hand-crafted features to classify images and lack robustness to variation in the shape, size, and texture of breast lesions, leading to low sensitivity in clinical applications. To overcome these shortcomings, we propose a method to diagnose breast ultrasonography images using deep convolutional neural networks with multi-scale kernels and skip connections. Our method consists of two components: the first determines whether there are malignant tumors in the image, and the second recognizes solid nodules. To let the two networks work collaboratively, a region-enhancement mechanism based on class activation maps is proposed. The mechanism helps to improve classification accuracy and sensitivity for both networks. A cross-training algorithm is introduced to train the networks. We construct a large annotated dataset containing a total of 8145 breast ultrasonography images to train and evaluate the models. All of the annotations are proven by pathological records. The proposed method is compared with two state-of-the-art approaches and outperforms both of them by a large margin. Experimental results show that our approach achieves performance comparable to human sonographers and can be applied in clinical scenarios.
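The class activation maps underlying the region-enhancement mechanism can be computed, in the standard formulation of Zhou et al. (2016), as in this NumPy sketch; the exact mechanism in the paper may differ.

```python
# Standard class activation map (CAM): weight the last conv layer's feature
# maps by the final linear layer's weights for the chosen class.
import numpy as np

def class_activation_map(features, fc_weights, class_idx):
    """features: (C, H, W) feature maps from the last conv layer.
    fc_weights: (num_classes, C) weights of the linear layer after global
    average pooling. Returns a CAM normalized to [0, 1]."""
    cam = np.tensordot(fc_weights[class_idx], features, axes=(0, 0))  # (H, W)
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()
    return cam
```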
Affiliation(s)
- Xiaofeng Qi
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, 610065, PR China
- Lei Zhang
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, 610065, PR China
- Yao Chen
- Department of Galactophore Surgery, West China Hospital, Sichuan University, Chengdu, 610041, PR China
- Yong Pi
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, 610065, PR China
- Yi Chen
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, 610065, PR China
- Qing Lv
- Department of Galactophore Surgery, West China Hospital, Sichuan University, Chengdu, 610041, PR China
- Zhang Yi
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, 610065, PR China
325
Lee CY, Chen GL, Zhang ZX, Chou YH, Hsu CC. Is Intensity Inhomogeneity Correction Useful for Classification of Breast Cancer in Sonograms Using Deep Neural Network? JOURNAL OF HEALTHCARE ENGINEERING 2018; 2018:8413403. [PMID: 30651947 PMCID: PMC6311841 DOI: 10.1155/2018/8413403] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/29/2018] [Revised: 10/25/2018] [Accepted: 11/18/2018] [Indexed: 11/17/2022]
Abstract
Sonography is currently an effective means of cancer screening and diagnosis because it is convenient and harmless to humans. Traditionally, lesion boundary segmentation is performed first and classification second to reach a judgment of benign or malignant tumor. Sonograms, however, often contain considerable speckle noise and intensity inhomogeneity. This study proposes a novel benign/malignant tumor classification system that combines intensity inhomogeneity correction with a stacked denoising autoencoder (SDAE) and is suitable for small datasets. A classifier is established by extracting features in the multilayer training of the SDAE; automatic analysis of imaging features by the deep learning algorithm is applied to image classification, giving the system high efficiency and robust discrimination. In this study, two datasets (private and public) are used to train the deep learning models. For each dataset, two groups of test images are compared: the original images and the images after intensity inhomogeneity correction. The results show that applying the deep learning algorithm to sonograms after intensity inhomogeneity correction significantly increases tumor classification accuracy. This study demonstrates the importance of preprocessing to highlight image features before passing them to deep learning models; doing so yields better classification accuracy than using the original images alone.
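A minimal sketch of the denoising-autoencoder building block that an SDAE stacks layer by layer, assuming Gaussian corruption, illustrative layer sizes, and flattened sonogram patches; the paper's exact configuration is not specified here.

```python
# Sketch of one denoising-autoencoder layer of the kind stacked in an SDAE:
# the input is corrupted with noise and the network learns to reconstruct
# the clean patch. Layer sizes and noise level are illustrative assumptions.
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    def __init__(self, n_in=1024, n_hidden=256, noise_std=0.1):
        super().__init__()
        self.noise_std = noise_std
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.ReLU())
        self.decoder = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        noisy = x + self.noise_std * torch.randn_like(x)  # corrupt input
        code = self.encoder(noisy)
        return self.decoder(code), code

ae = DenoisingAE()
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

patches = torch.rand(64, 1024)       # placeholder flattened 32x32 patches
for _ in range(100):                 # reconstruct clean patches from noisy ones
    recon, _ = ae(patches)
    loss = loss_fn(recon, patches)
    opt.zero_grad()
    loss.backward()
    opt.step()
# After layer-wise pretraining, the encoders are stacked and a small
# classifier on the top-level code separates benign from malignant.
```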
Affiliation(s)
- Chia-Yen Lee
- Department of Electrical Engineering, National United University, Miaoli, Taiwan
- Guan-Lin Chen
- Department of Electrical Engineering, National United University, Miaoli, Taiwan
- Zhong-Xuan Zhang
- Department of Electrical Engineering, National United University, Miaoli, Taiwan
- Yi-Hong Chou
- Department of Radiology, Taipei Veterans General Hospital and National Yang Ming University, Taipei, Taiwan
- Chih-Chung Hsu
- Department of Management Information Systems, National Pingtung University of Science and Technology, Neipu, Taiwan
326
Goyal M, Reeves ND, Rajbhandari S, Yap MH. Robust Methods for Real-Time Diabetic Foot Ulcer Detection and Localization on Mobile Devices. IEEE J Biomed Health Inform 2018; 23:1730-1741. [PMID: 30188841 DOI: 10.1109/jbhi.2018.2868656] [Citation(s) in RCA: 50] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Current practice for diabetic foot ulcer (DFU) screening involves detection and localization by podiatrists. Existing automated solutions focus on either segmentation or classification. In this work, we design deep learning methods for real-time DFU localization. To produce a robust deep learning model, we collected an extensive database of 1775 DFU images. Two medical experts produced the ground truth for this dataset by outlining the DFU regions of interest with annotation software. Using five-fold cross-validation, Faster R-CNN with an InceptionV2 backbone and two-tier transfer learning achieved the best overall results: a mean average precision of 91.8%, an inference speed of 48 ms per image, and a model size of 57.2 MB. To demonstrate the robustness and practicality of our solution for real-time prediction, we evaluated the performance of the models on an NVIDIA Jetson TX2 and in a smartphone app. This work demonstrates the capability of deep learning for real-time localization of DFU, which can be further improved with a more extensive dataset.
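A hedged sketch of the single-image detection-and-timing step described above. The authors used Faster R-CNN with an InceptionV2 backbone (TensorFlow Object Detection API); torchvision's Faster R-CNN with a ResNet-50-FPN backbone stands in here, and the 0.5 confidence threshold is an assumption.

```python
# Sketch: single-image object detection with Faster R-CNN, timed per image.
# Backbone and threshold are stand-ins, not the paper's exact configuration.
import time
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights="DEFAULT").eval()

image = torch.rand(3, 480, 640)      # placeholder foot photo, CHW in [0, 1]
with torch.no_grad():
    t0 = time.perf_counter()
    out = model([image])[0]          # dict of "boxes", "labels", "scores"
    elapsed_ms = 1000 * (time.perf_counter() - t0)

# Keep confident detections only; the 0.5 cut-off is an assumption.
keep = out["scores"] > 0.5
print(out["boxes"][keep], f"{elapsed_ms:.0f} ms")
```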
327
Tan T, Li Z, Liu H, Zanjani FG, Ouyang Q, Tang Y, Hu Z, Li Q. Optimize Transfer Learning for Lung Diseases in Bronchoscopy Using a New Concept: Sequential Fine-Tuning. IEEE JOURNAL OF TRANSLATIONAL ENGINEERING IN HEALTH AND MEDICINE-JTEHM 2018; 6:1800808. [PMID: 30324036 PMCID: PMC6175035 DOI: 10.1109/jtehm.2018.2865787] [Citation(s) in RCA: 26] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/27/2018] [Revised: 08/01/2018] [Accepted: 08/03/2018] [Indexed: 12/20/2022]
Abstract
Bronchoscopy inspection, as a follow-up procedure to radiological imaging, plays a key role in diagnosis and treatment planning for lung disease patients. When performing bronchoscopy, doctors must decide immediately whether to perform a biopsy; because biopsies may cause uncontrollable and life-threatening bleeding of the lung tissue, doctors need to be selective with them. In this paper, to help doctors be more selective with biopsies and to provide a second opinion on diagnosis, we propose a computer-aided diagnosis (CAD) system for lung diseases, including cancers and tuberculosis (TB). Building on transfer learning (TL), we propose a novel TL method on top of DenseNet: sequential fine-tuning (SFT). Compared with traditional fine-tuning (FT) methods, our method achieves the best performance. On a dataset of 81 recruited normal cases, 76 TB cases, and 277 lung cancer cases, SFT provided an overall accuracy of 82%, while the traditional TL methods achieved accuracies of 70% to 74%. The detection accuracies of SFT for cancer, TB, and normal cases were 87%, 54%, and 91%, respectively. This indicates that the CAD system has the potential to improve the accuracy of lung disease diagnosis in bronchoscopy and may help doctors be more selective with biopsies.
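One plausible reading of sequential fine-tuning, sketched below on DenseNet-121: rather than unfreezing the whole backbone at once, dense blocks are thawed and trained one stage at a time. The stage order, optimizer settings, and three-class head are assumptions, not the paper's exact schedule.

```python
# Sketch of sequential fine-tuning on DenseNet-121: thaw and train one
# dense block per stage, deepest first. Schedule details are assumptions.
import torch
import torchvision

model = torchvision.models.densenet121(weights="DEFAULT")
model.classifier = torch.nn.Linear(1024, 3)   # normal / TB / cancer head

for p in model.features.parameters():
    p.requires_grad = False                   # start with the backbone frozen

stages = [model.features.denseblock4,
          model.features.denseblock3,
          model.features.denseblock2]
for block in stages:
    for p in block.parameters():
        p.requires_grad = True                # thaw the next block
    opt = torch.optim.SGD(
        (p for p in model.parameters() if p.requires_grad),
        lr=1e-3, momentum=0.9)
    # train_some_epochs(model, opt)           # placeholder training loop
```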
Affiliation(s)
- Tao Tan
- Department of Biomedical Engineering, Eindhoven University of Technology, 5600 MB Eindhoven, The Netherlands; ScreenPoint Medical, 6512 AB Nijmegen, The Netherlands
- Zhang Li
- College of Aerospace Science and Engineering, National University of Defense Technology, Changsha 410073, China
- Haixia Liu
- School of Computer Science, University of Nottingham Malaysia Campus, 43500 Semenyih, Malaysia
- Farhad G Zanjani
- Department of Electrical Engineering, Eindhoven University of Technology, 5600 MB Eindhoven, The Netherlands
- Quchang Ouyang
- Hunan Cancer Hospital, The Affiliated Cancer Hospital of Xiangya School of Medicine, Central South University, Changsha 410000, China
- Yuling Tang
- First Hospital of Changsha City, Changsha 410000, China
- Zheyu Hu
- Hunan Cancer Hospital, The Affiliated Cancer Hospital of Xiangya School of Medicine, Central South University, Changsha 410000, China
- Qiang Li
- Department of Respiratory Medicine, Shanghai East Hospital, Tongji University School of Medicine, Shanghai 200120, China
328
Core JQ, Mehrabi M, Robinson ZR, Ochs AR, McCarthy LA, Zaragoza MV, Grosberg A. Age of heart disease presentation and dysmorphic nuclei in patients with LMNA mutations. PLoS One 2017; 12:e0188256. [PMID: 29149195 PMCID: PMC5693421 DOI: 10.1371/journal.pone.0188256] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2017] [Accepted: 11/05/2017] [Indexed: 01/24/2023] Open
Abstract
Nuclear shape defects are a distinguishing characteristic of laminopathies, cancers, and other pathologies. Correlating these defects with the symptoms, mechanisms, and progression of disease requires unbiased, quantitative, and high-throughput means of quantifying nuclear morphology. To accomplish this, we developed a method, implemented as a package of MATLAB scripts, that automatically segments fluorescently stained nuclei in 2D microscopy images and classifies them as normal or dysmorphic based on three geometric features of the nucleus. As a test case, we analyzed cultured skin-fibroblast nuclei from individuals with an LMNA splice-site mutation (c.357-2A>G), an LMNA nonsense mutation (c.736C>T, pQ246X) in exon 4, an LMNA missense mutation (c.1003C>T, pR335W) in exon 6, Hutchinson-Gilford progeria syndrome, and no LMNA mutations. For each cell type, we obtained the percentage of dysmorphic nuclei and other morphological features such as average nuclear area and average eccentricity. Compared with blind observers, our procedure matched the accuracy of manual counting of dysmorphic nuclei while being significantly more consistent. The automated quantification of nuclear defects revealed a correlation between the in vitro results and patients' age at initial symptom onset. Our results demonstrate the method's utility in experimental studies of diseases affecting nuclear shape through automated, unbiased, and accurate identification of dysmorphic nuclei.
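A Python/scikit-image sketch of the segment-then-classify idea (the original is a package of MATLAB scripts): threshold the fluorescence image, label connected nuclei, and flag dysmorphic ones from geometric features. The eccentricity and solidity cut-offs are illustrative assumptions, not the paper's three features or their values.

```python
# Sketch: segment fluorescently stained nuclei and flag dysmorphic ones
# from geometric region properties. Thresholds are illustrative assumptions.
import numpy as np
from skimage import filters, measure, morphology

def classify_nuclei(img: np.ndarray):
    # Otsu threshold plus small-object removal gives a rough nuclear mask.
    mask = img > filters.threshold_otsu(img)
    mask = morphology.remove_small_objects(mask, min_size=100)
    labels = measure.label(mask)

    results = []
    for region in measure.regionprops(labels):
        dysmorphic = (region.eccentricity > 0.85 or   # elongated outline
                      region.solidity < 0.9)          # lobed / invaginated
        results.append((region.label, region.area,
                        region.eccentricity, dysmorphic))
    return results
```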
Affiliation(s)
- Jason Q. Core
- Departments of Biomedical Engineering, University of California, Irvine, CA, United States of America
- The Edwards Lifesciences Center for Advanced Cardiovascular Technology, University of California, Irvine, CA, United States of America
- Mehrsa Mehrabi
- Departments of Biomedical Engineering, University of California, Irvine, CA, United States of America
- The Edwards Lifesciences Center for Advanced Cardiovascular Technology, University of California, Irvine, CA, United States of America
- Zachery R. Robinson
- Departments of Biomedical Engineering, University of California, Irvine, CA, United States of America
- The Edwards Lifesciences Center for Advanced Cardiovascular Technology, University of California, Irvine, CA, United States of America
- Alexander R. Ochs
- Departments of Biomedical Engineering, University of California, Irvine, CA, United States of America
- The Edwards Lifesciences Center for Advanced Cardiovascular Technology, University of California, Irvine, CA, United States of America
- Linda A. McCarthy
- Departments of Biomedical Engineering, University of California, Irvine, CA, United States of America
- The Edwards Lifesciences Center for Advanced Cardiovascular Technology, University of California, Irvine, CA, United States of America
- Michael V. Zaragoza
- Pediatrics–Genetics & Genomics Division–School of Medicine, University of California, Irvine, CA, United States of America
- Biological Chemistry–School of Medicine, University of California, Irvine, CA, United States of America
- Anna Grosberg
- Departments of Biomedical Engineering, University of California, Irvine, CA, United States of America
- The Edwards Lifesciences Center for Advanced Cardiovascular Technology, University of California, Irvine, CA, United States of America
- Chemical Engineering and Materials Science, University of California, Irvine, CA, United States of America