1
Bhalla D, Rangarajan K, Chandra T, Banerjee S, Arora C. Reproducibility and Explainability of Deep Learning in Mammography: A Systematic Review of Literature. Indian J Radiol Imaging 2024; 34:469-487. [PMID: 38912238] [PMCID: PMC11188703] [DOI: 10.1055/s-0043-1775737]
Abstract
Background: Although abundant literature is currently available on the use of deep learning for breast cancer detection in mammography, the quality of that literature is widely variable. Purpose: To evaluate the published literature on breast cancer detection in mammography for reproducibility and to ascertain best practices for model design. Methods: The PubMed and Scopus databases were searched to identify records that described the use of deep learning to detect lesions or to classify images as cancer or noncancer. A modified Quality Assessment of Diagnostic Accuracy Studies (mQUADAS-2) tool was developed for this review and applied to the included studies. Reported results (area under the receiver operating characteristic [ROC] curve [AUC], sensitivity, specificity) were recorded. Results: A total of 12,123 records were screened, of which 107 fit the inclusion criteria. Training and test datasets, the key idea behind each model architecture, and results were recorded for these studies. Based on the mQUADAS-2 assessment, 103 studies had a high risk of bias due to nonrepresentative patient selection. Four studies were of adequate quality, of which three trained their own model and one used a commercial network; ensemble models were used in two of these. Common strategies for model training included patch classifiers, image classification networks (ResNet in 67%), and object detection networks (RetinaNet in 67%). The highest reported AUC was 0.927 ± 0.008 on a screening dataset, rising to 0.945 (0.919-0.968) on an enriched subset. Higher AUC (0.955) and specificity (98.5%) were reached when combined radiologist and artificial intelligence readings were used than with either alone. None of the studies provided explainability beyond localization accuracy, and none examined the interaction between AI and radiologists in a real-world setting. Conclusion: While deep learning holds much promise for mammography interpretation, evaluation in reproducible clinical settings and explainable networks are the need of the hour.
Affiliation(s)
- Deeksha Bhalla
- Department of Radiodiagnosis, All India Institute of Medical Sciences, New Delhi, India
- Krithika Rangarajan
- Department of Radiodiagnosis, All India Institute of Medical Sciences, New Delhi, India
- Tany Chandra
- Department of Radiodiagnosis, All India Institute of Medical Sciences, New Delhi, India
- Subhashis Banerjee
- Department of Computer Science and Engineering, Indian Institute of Technology, New Delhi, India
- Chetan Arora
- Department of Computer Science and Engineering, Indian Institute of Technology, New Delhi, India
2
Bai J, Jin A, Adams M, Yang C, Nabavi S. Unsupervised feature correlation model to predict breast abnormal variation maps in longitudinal mammograms. Comput Med Imaging Graph 2024; 113:102341. [PMID: 38277769] [DOI: 10.1016/j.compmedimag.2024.102341]
Abstract
Breast cancer continues to be a significant cause of mortality among women globally, and timely identification and precise diagnosis of breast abnormalities are critical for improving patient outcomes and reducing the mortality rate of breast cancer. To address the limitations of traditional screening methods, a novel unsupervised feature correlation network was developed to predict maps indicating abnormal breast variations from longitudinal 2D mammograms. The proposed model uses the reconstruction of current-year and prior-year mammograms to extract tissue from different areas and analyzes the differences between them to identify abnormal variations that may indicate the presence of cancer. The model incorporates a feature correlation module, an attention suppression gate, and a breast abnormality detection module, all working together to improve prediction accuracy. In addition to producing breast abnormal variation maps, the model distinguishes between normal and cancer mammograms, making it more advanced than the state-of-the-art baseline models. The results show that the proposed model outperforms the baseline models in terms of accuracy, sensitivity, specificity, Dice score, and cancer detection rate.
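Reduced to a sketch, the core idea is: encode both time points with a shared encoder, correlate the feature maps, and decode the result into a variation map. The PyTorch module below is a minimal illustration under our own assumptions (layer sizes, cosine-similarity correlation, sigmoid map output); it is not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VariationMapNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared encoder applied to both the current and the prior mammogram.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, current, prior):
        f_cur, f_pri = self.encoder(current), self.encoder(prior)
        # Stand-in for the paper's feature correlation module: channel-wise
        # cosine similarity marks locations where tissue appearance changed.
        sim = F.cosine_similarity(f_cur, f_pri, dim=1).unsqueeze(1)
        diff = f_cur * (1.0 - sim)      # suppress unchanged tissue
        return self.decoder(diff)       # predicted abnormal-variation map

net = VariationMapNet()
cur, pri = torch.randn(1, 1, 256, 256), torch.randn(1, 1, 256, 256)
print(net(cur, pri).shape)              # torch.Size([1, 1, 256, 256])
```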
Affiliation(s)
- Jun Bai
- Department of Computer Science and Engineering, University of Connecticut, 371 Fairfield Way, Storrs, CT 06269, USA
- Annie Jin
- University of Connecticut School of Medicine, 263 Farmington Ave., Farmington, CT 06030, USA
- Madison Adams
- University of Connecticut School of Medicine, 263 Farmington Ave., Farmington, CT 06030, USA
- Clifford Yang
- University of Connecticut School of Medicine, 263 Farmington Ave., Farmington, CT 06030, USA; Department of Radiology, UConn Health, 263 Farmington Ave., Farmington, CT 06030, USA
- Sheida Nabavi
- Department of Computer Science and Engineering, University of Connecticut, 371 Fairfield Way, Storrs, CT 06269, USA
3
Iqbal S, Qureshi AN, Li J, Mahmood T. On the Analyses of Medical Images Using Traditional Machine Learning Techniques and Convolutional Neural Networks. Archives of Computational Methods in Engineering 2023; 30:3173-3233. [PMID: 37260910] [PMCID: PMC10071480] [DOI: 10.1007/s11831-023-09899-9]
Abstract
Convolutional neural networks (CNNs) have shown impressive performance in many areas, notably object detection, segmentation, 2D and 3D reconstruction, information retrieval, medical image registration, multilingual translation, natural language processing, anomaly detection in video, and speech recognition. A CNN is a special type of neural network with a compelling and effective ability to learn features at several levels of abstraction. Recently, ideas from deep learning (DL) such as new activation functions, hyperparameter optimization, regularization, momentum, and loss functions have improved the performance and execution of CNNs, and innovations in internal architecture and representational style have yielded further significant gains. This survey focuses on the internal taxonomy of deep learning and on different convolutional neural network models, especially the depth and width of models, and in addition covers CNN components, applications, and the current challenges of deep learning.
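As a concrete reference point for the components the survey catalogues, the toy PyTorch network below strings together convolutional feature extraction, an activation function, pooling, dropout regularization, and a classification head; it reproduces no specific architecture from the survey.

```python
import torch.nn as nn

toy_cnn = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),   # feature extraction
    nn.ReLU(),                                    # activation function
    nn.MaxPool2d(2),                              # spatial down-sampling
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # added depth
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),                      # global pooling
    nn.Flatten(),
    nn.Dropout(p=0.5),                            # regularization
    nn.Linear(32, 2),                             # classification head
)
```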
Affiliation(s)
- Saeed Iqbal
- Department of Computer Science, Faculty of Information Technology & Computer Science, University of Central Punjab, Lahore, Punjab 54000, Pakistan
- Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Adnan N. Qureshi
- Department of Computer Science, Faculty of Information Technology & Computer Science, University of Central Punjab, Lahore, Punjab 54000, Pakistan
- Jianqiang Li
- Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Beijing Engineering Research Center for IoT Software and Systems, Beijing University of Technology, Beijing 100124, China
- Tariq Mahmood
- Artificial Intelligence and Data Analytics (AIDA) Lab, College of Computer & Information Sciences (CCIS), Prince Sultan University, Riyadh 11586, Kingdom of Saudi Arabia
4
Zhu Z, Wang SH, Zhang YD. A Survey of Convolutional Neural Network in Breast Cancer. Computer Modeling in Engineering & Sciences 2023; 136:2127-2172. [PMID: 37152661] [PMCID: PMC7614504] [DOI: 10.32604/cmes.2023.025484]
Abstract
Problems: Cancer is one of the most feared diseases worldwide. It is a major obstacle to improving life expectancy and one of the biggest causes of death before the age of 70 in 112 countries. Among all cancers, breast cancer is the most common cancer in women, and the data show that female breast cancer has become one of the most prevalent cancers overall. Aims: Many clinical trials have shown that diagnosing breast cancer at an early stage gives patients more treatment options and improves treatment effectiveness and survival. Accordingly, many diagnostic methods for breast cancer have been developed, such as computer-aided diagnosis (CAD). Methods: We present a comprehensive review of the diagnosis of breast cancer based on the convolutional neural network (CNN), compiled after reviewing a large number of recent papers. First, we introduce several different imaging modalities. The structure of the CNN is given in the second part. After that, we introduce some public breast cancer data sets. We then divide the diagnosis of breast cancer into three different tasks: 1. classification; 2. detection; 3. segmentation. Conclusion: Although CNN-based diagnosis has achieved great success, some limitations remain. (i) There are too few good data sets; a good public breast cancer dataset must address many aspects, such as professional medical knowledge, privacy, cost, and dataset size. (ii) When the data set is very large, CNN-based models need substantial computation and time to complete a diagnosis. (iii) Small data sets easily lead to overfitting.
Affiliation(s)
- Yu-Dong Zhang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester, LE1 7RH, UK
5
Castro E, Costa Pereira J, Cardoso JS. Symmetry-based regularization in deep breast cancer screening. Med Image Anal 2023; 83:102690. [PMID: 36446314] [DOI: 10.1016/j.media.2022.102690]
Abstract
Breast cancer is the most common and lethal form of cancer in women. Recent efforts have focused on developing accurate neural network-based computer-aided diagnosis systems for screening to help anticipate this disease. The ultimate goal is to reduce mortality and improve quality of life after treatment. Due to the difficulty in collecting and annotating data in this domain, data scarcity is - and will continue to be - a limiting factor. In this work, we present a unified view of different regularization methods that incorporate domain-known symmetries in the model. Three general strategies were followed: (i) data augmentation, (ii) invariance promotion in the loss function, and (iii) the use of equivariant architectures. Each of these strategies encodes different priors on the functions learned by the model and can be readily introduced in most settings. Empirically we show that the proposed symmetry-based regularization procedures improve generalization to unseen examples. This advantage is verified in different scenarios, datasets and model architectures. We hope that both the principle of symmetry-based regularization and the concrete methods presented can guide development towards more data-efficient methods for breast cancer screening as well as other medical imaging domains.
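Strategy (ii), invariance promotion in the loss function, fits in a few lines: penalize the model when its prediction changes under a symmetry transform such as a horizontal flip, under which a mammogram's label should not change. A minimal PyTorch sketch follows; the weighting factor lam and the choice of flip are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def symmetry_regularized_loss(model, x, y, lam=0.1):
    logits = model(x)
    task_loss = F.cross_entropy(logits, y)            # usual supervised term
    logits_flipped = model(torch.flip(x, dims=[-1]))  # horizontal flip
    # Invariance term: predictions for x and flip(x) should agree.
    invariance_loss = F.mse_loss(logits, logits_flipped)
    return task_loss + lam * invariance_loss
```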
Affiliation(s)
- Eduardo Castro
- INESC TEC, Campus da Faculdade de Engenharia da Universidade do Porto, Rua Dr. Roberto Frias, 4200-465 Porto, Portugal; Faculdade de Engenharia da Universidade do Porto, Rua Dr. Roberto Frias, 4200-465 Porto, Portugal
- Jose Costa Pereira
- INESC TEC, Campus da Faculdade de Engenharia da Universidade do Porto, Rua Dr. Roberto Frias, 4200-465 Porto, Portugal; Huawei Technologies R&D, Noah's Ark Lab, Gridiron building, 1 Pancras Square, 5th floor, London N1C 4AG, United Kingdom
- Jaime S Cardoso
- INESC TEC, Campus da Faculdade de Engenharia da Universidade do Porto, Rua Dr. Roberto Frias, 4200-465 Porto, Portugal; Faculdade de Engenharia da Universidade do Porto, Rua Dr. Roberto Frias, 4200-465 Porto, Portugal
6
Hoang-Thi TN, Chassagnon G, Tran HD, Le-Dong NN, Dinh-Xuan AT, Revel MP. How Artificial Intelligence in Imaging Can Better Serve Patients with Bronchial and Parenchymal Lung Diseases? J Pers Med 2022; 12:1429. [PMID: 36143214] [PMCID: PMC9505778] [DOI: 10.3390/jpm12091429]
Abstract
With the rapid development of computing today, artificial intelligence has become an essential part of everyday life, with medicine and lung health being no exception. Big data-based scientific research does not mean simply gathering a large amount of data and letting the machines do the work by themselves. Instead, scientists need to identify problems whose solution will have a positive impact on patients’ care. In this review, we will discuss the role of artificial intelligence from both physiological and anatomical standpoints, starting with automatic quantitative assessment of anatomical structures using lung imaging and considering disease detection and prognosis estimation based on machine learning. The evaluation of current strengths and limitations will allow us to have a broader view for future developments.
Affiliation(s)
- Trieu-Nghi Hoang-Thi
- Department of Diagnostic Imaging, Vinmec Healthcare System, Ho Chi Minh City 70000, Vietnam
- Guillaume Chassagnon
- AP-HP Centre, Cochin Hospital, Department of Radiology, Université de Paris, 75005 Paris, France
- Hai-Dang Tran
- Department of Diagnostic Imaging, Vinmec Healthcare System, Ho Chi Minh City 70000, Vietnam
- Nhat-Nam Le-Dong
- AP-HP Centre, Cochin Hospital, Department of Respiratory Physiology, Université de Paris, 75005 Paris, France
- Anh Tuan Dinh-Xuan
- AP-HP Centre, Cochin Hospital, Department of Respiratory Physiology, Université de Paris, 75005 Paris, France
- Marie-Pierre Revel
- AP-HP Centre, Cochin Hospital, Department of Radiology, Université de Paris, 75005 Paris, France
7
Yuan D, Zhang D, Yang Y, Yang S. Automatic construction of filter tree by genetic programming for ultrasound guidance image segmentation. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103641]
8
9
Bai J, Jin A, Wang T, Yang C, Nabavi S. Feature fusion siamese network for breast cancer detection comparing current and prior mammograms. Med Phys 2022; 49:3654-3669. [PMID: 35271746] [DOI: 10.1002/mp.15598]
Abstract
PURPOSE: Automatic detection of very small and non-mass abnormalities in mammogram images has remained challenging. In clinical practice, radiologists commonly not only screen the mammogram images obtained during the examination but also compare them with previous mammogram images to make a clinical decision. To design an AI system that mimics radiologists for better cancer detection, we propose an end-to-end enhanced Siamese convolutional neural network that detects breast cancer from previous-year and current-year mammogram images. METHODS: The proposed Siamese network uses high-resolution mammogram images and fuses features of paired previous-year and current-year mammograms to predict cancer probabilities. The approach builds on the concept of one-shot learning: it learns the abnormal differences between current and prior images instead of abnormal objects, and as a result can perform better on small data sets. We developed two variants of the proposed network. In the first model, to fuse the features of current and previous images, we designed an enhanced distance learning network that considers not only the overall distance but also the pixel-wise distances between the features. In the other model, we concatenated the features of current and previous images to fuse them. RESULTS: We compared the proposed models with baseline models that use current images only (ResNet and VGG) and baselines that use both current and prior images (LSTM and vanilla Siamese) in terms of accuracy, sensitivity, precision, F1 score, and AUC. The proposed models outperform the baselines, and the model with the distance learning network performs best (accuracy: 0.92, sensitivity: 0.93, precision: 0.91, specificity: 0.91, F1: 0.92, and AUC: 0.95). CONCLUSIONS: Integrating prior mammogram images improves automatic cancer classification, especially for very small and non-mass abnormalities. For classification models that integrate current and prior mammogram images, an enhanced and effective distance learning network can advance model performance.
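A minimal sketch of the first variant's idea, pixel-wise distance fusion in a Siamese network, might look as follows in PyTorch; the backbone and head are simplified stand-ins for the authors' high-resolution networks.

```python
import torch
import torch.nn as nn

class SiameseMammo(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(   # shared weights for both years
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )

    def forward(self, current, prior):
        f_cur, f_pri = self.backbone(current), self.backbone(prior)
        pixelwise_dist = (f_cur - f_pri).abs()   # distance at every location
        return torch.sigmoid(self.head(pixelwise_dist))  # cancer probability
```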
Affiliation(s)
- Jun Bai
- Department of Computer Science and Engineering, University of Connecticut, 371 Fairfield Way, Storrs, CT 06269, USA; University of Connecticut School of Medicine, 263 Farmington Ave., Farmington, CT 06030, USA; Department of Radiology, UConn Health, 263 Farmington Ave., Farmington, CT 06030, USA
- Annie Jin
- Department of Computer Science and Engineering, University of Connecticut, 371 Fairfield Way, Storrs, CT 06269, USA; University of Connecticut School of Medicine, 263 Farmington Ave., Farmington, CT 06030, USA; Department of Radiology, UConn Health, 263 Farmington Ave., Farmington, CT 06030, USA
- Tianyu Wang
- Department of Computer Science and Engineering, University of Connecticut, 371 Fairfield Way, Storrs, CT 06269, USA; University of Connecticut School of Medicine, 263 Farmington Ave., Farmington, CT 06030, USA; Department of Radiology, UConn Health, 263 Farmington Ave., Farmington, CT 06030, USA
- Clifford Yang
- Department of Computer Science and Engineering, University of Connecticut, 371 Fairfield Way, Storrs, CT 06269, USA; University of Connecticut School of Medicine, 263 Farmington Ave., Farmington, CT 06030, USA; Department of Radiology, UConn Health, 263 Farmington Ave., Farmington, CT 06030, USA
- Sheida Nabavi
- Department of Computer Science and Engineering, University of Connecticut, 371 Fairfield Way, Storrs, CT 06269, USA; University of Connecticut School of Medicine, 263 Farmington Ave., Farmington, CT 06030, USA; Department of Radiology, UConn Health, 263 Farmington Ave., Farmington, CT 06030, USA
10
Zhou K, Li W, Zhao D. Deep learning-based breast region extraction of mammographic images combining pre-processing methods and semantic segmentation supported by Deeplab v3+. Technol Health Care 2022; 30:173-190. [PMID: 35124595] [PMCID: PMC9028646] [DOI: 10.3233/thc-228017]
Abstract
BACKGROUND: Breast cancer has long been one of the major global life-threatening illnesses among women. Surgery and adjuvant therapy, coupled with early detection, could save many lives, which underscores the importance of mammography, a cost-effective and accurate method for early detection. Because poor contrast, noise, and artifacts make mammograms difficult for radiologists to interpret, Computer-Aided Diagnosis (CAD) systems have been developed. The extraction of the breast region is a fundamental and crucial preparation step for further development of CAD systems. OBJECTIVE: The proposed method aims to extract the breast region accurately from mammographic images, with noise suppressed, contrast enhanced, and the pectoral muscle region removed. METHODS: This paper presents a new deep learning-based breast region extraction method that combines pre-processing (noise suppression using a median filter and contrast enhancement using CLAHE) with semantic segmentation using the Deeplab v3+ model. RESULTS: The method was trained and evaluated on the mini-MIAS dataset and also evaluated on the INbreast dataset. The results outperform those of other recent studies and indicate that the model retains its accuracy and runtime advantage across databases with different image resolutions. CONCLUSIONS: The proposed method shows state-of-the-art performance in extracting the breast region from mammographic images. Extensive evaluation on two commonly used mammography datasets demonstrates the ability and adaptability of the method.
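The pre-processing chain ahead of the Deeplab v3+ segmenter can be sketched with OpenCV as below; the kernel size and CLAHE parameters are common defaults rather than necessarily the paper's values, and the segmentation step itself is omitted.

```python
import cv2

def preprocess_mammogram(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.medianBlur(img, 5)       # noise suppression (median filter)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(img)            # contrast enhancement (CLAHE)
```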
Affiliation(s)
- Kuochen Zhou (corresponding author)
- School of Computer Science and Engineering, Northeastern University, Shenyang, Liaoning 110819, China
11
Li H, Chen D, Nailon WH, Davies ME, Laurenson DI. Dual Convolutional Neural Networks for Breast Mass Segmentation and Diagnosis in Mammography. IEEE Transactions on Medical Imaging 2022; 41:3-13. [PMID: 34351855] [DOI: 10.1109/tmi.2021.3102622]
Abstract
Deep convolutional neural networks (CNNs) have emerged as a new paradigm for mammogram diagnosis. Contemporary CNN-based computer-aided diagnosis (CAD) systems for breast cancer directly extract latent features from the input mammogram image and ignore the importance of morphological features. In this paper, we introduce a novel end-to-end deep learning framework for mammogram image processing that computes mass segmentation and simultaneously predicts diagnosis results. Specifically, our method is constructed in a dual-path architecture that solves the mapping in a dual-problem manner, with additional consideration of important shape and boundary knowledge. One path, called the Locality Preserving Learner (LPL), is devoted to hierarchically extracting and exploiting intrinsic features of the input, whereas the other path, called the Conditional Graph Learner (CGL), focuses on generating geometrical features by modeling pixel-wise image-to-mask correlations. By integrating the two learners, both cancer semantics and cancer representations are well learned, and the component learning paths complement each other, improving mass segmentation and cancer classification at the same time. In addition, by integrating an automatic detection set-up, the DualCoreNet achieves fully automatic breast cancer diagnosis in practice. Experimental results show that on the benchmark DDSM dataset, DualCoreNet outperformed related works in both segmentation and classification tasks, achieving a 92.27% Dice coefficient and a 0.85 AUC score. On the benchmark INbreast dataset, DualCoreNet achieves the best mammography segmentation (93.69% Dice coefficient) and competitive classification performance (0.93 AUC score).
12
Yousef R, Gupta G, Yousef N, Khari M. A holistic overview of deep learning approach in medical imaging. Multimedia Systems 2022; 28:881-914. [PMID: 35079207] [PMCID: PMC8776556] [DOI: 10.1007/s00530-021-00884-5]
Abstract
Medical images are a rich source of invaluable information used by clinicians, and recent technologies have introduced many advances for exploiting this information to the fullest and generating better analyses. Deep learning (DL) techniques have been applied to medical image analysis in computer-assisted imaging contexts, offering many solutions and improvements for radiologists and other specialists analyzing these images. In this paper, we present a survey of DL techniques used for a variety of tasks across the different medical imaging modalities, providing a critical review of recent developments in this direction. We have organized the paper to present the significant traits of deep learning and explain its concepts, which is helpful for non-experts in the medical community. We then present several applications of deep learning (e.g., segmentation, classification, detection) commonly used for clinical purposes at different anatomical sites, and we also present the main key terms for DL attributes such as basic architecture, data augmentation, transfer learning, and feature selection methods. Medical images as inputs to deep learning architectures will be mainstream in the coming years, and novel DL techniques are predicted to become the core of medical image analysis. We conclude by addressing research challenges and the solutions suggested for them in the literature, as well as promising directions for further development.
Affiliation(s)
- Rammah Yousef
- Yogananda School of AI, Computer and Data Sciences, Shoolini University, Solan 173229, Himachal Pradesh, India
- Gaurav Gupta
- Yogananda School of AI, Computer and Data Sciences, Shoolini University, Solan 173229, Himachal Pradesh, India
- Nabhan Yousef
- Electronics and Communication Engineering, Marwadi University, Rajkot, Gujarat, India
- Manju Khari
- Jawaharlal Nehru University, New Delhi, India
13
Luca AR, Ursuleanu TF, Gheorghe L, Grigorovici R, Iancu S, Hlusneac M, Grigorovici A. Impact of quality, type and volume of data used by deep learning models in the analysis of medical images. Informatics in Medicine Unlocked 2022. [DOI: 10.1016/j.imu.2022.100911]
14
Xiong S, Wu G, Fan X, Feng X, Huang Z, Cao W, Zhou X, Ding S, Yu J, Wang L, Shi Z. MRI-based brain tumor segmentation using FPGA-accelerated neural network. BMC Bioinformatics 2021; 22:421. [PMID: 34493208] [PMCID: PMC8422637] [DOI: 10.1186/s12859-021-04347-6]
Abstract
Background: Brain tumor segmentation is a challenging problem in medical image processing and analysis; it is a time-consuming and error-prone task, so computer-aided detection (CAD) systems need to be developed to reduce the burden on physicians and improve segmentation accuracy. Due to the powerful feature learning ability of deep learning, many deep learning-based methods have been applied to brain tumor segmentation CAD systems and have achieved satisfactory accuracy. However, deep neural networks have high computational complexity, and the segmentation process consumes significant time. Therefore, to achieve high segmentation accuracy and obtain results efficiently, there is a strong demand to speed up the segmentation process. Results: Compared with traditional computing platforms, the proposed FPGA accelerator greatly improves speed and power consumption. On the BraTS19 and BraTS20 datasets, our FPGA-based brain tumor segmentation accelerator is 5.21 and 44.47 times faster than a TITAN V GPU and a Xeon CPU, respectively, and achieves 11.22 and 82.33 times their energy efficiency. Conclusion: We quantize and retrain the neural network for brain tumor segmentation and merge batch normalization layers to reduce the parameter size and computational complexity. The FPGA-based accelerator is designed to map the quantized neural network model. It increases segmentation speed and reduces power consumption while maintaining high accuracy, providing a new direction for the automatic segmentation and remote diagnosis of brain tumors.
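The batch-normalization merging mentioned in the conclusion rests on a small identity: at inference time, a convolution followed by batch normalization equals a single convolution with rescaled weights and a shifted bias. A sketch of the folding arithmetic in PyTorch (used here only to illustrate the math, not the FPGA mapping):

```python
import torch

def fold_bn_into_conv(conv, bn):
    """Return folded (weight, bias) equivalent to conv followed by BN in eval mode."""
    scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
    weight = conv.weight * scale.reshape(-1, 1, 1, 1)
    bias = conv.bias if conv.bias is not None else torch.zeros_like(bn.running_mean)
    bias = (bias - bn.running_mean) * scale + bn.bias
    return weight, bias
```

Folding removes the per-channel normalization arithmetic at inference, which saves both parameters and operations on resource-constrained FPGA fabric.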
Affiliation(s)
- Siyu Xiong
- State Key Laboratory of ASIC and System, Fudan University, Shanghai, China
- Guoqing Wu
- School of Information Science and Technology, Fudan University, Shanghai, China
- Xitian Fan
- School of Computer Science, Fudan University, Shanghai, China
- Xuan Feng
- State Key Laboratory of ASIC and System, Fudan University, Shanghai, China
- Zhongcheng Huang
- State Key Laboratory of ASIC and System, Fudan University, Shanghai, China
- Wei Cao
- State Key Laboratory of ASIC and System, Fudan University, Shanghai, China
- Xuegong Zhou
- State Key Laboratory of ASIC and System, Fudan University, Shanghai, China
- Shijin Ding
- State Key Laboratory of ASIC and System, Fudan University, Shanghai, China
- Jinhua Yu
- School of Information Science and Technology, Fudan University, Shanghai, China
- Lingli Wang
- State Key Laboratory of ASIC and System, Fudan University, Shanghai, China
- Zhifeng Shi
- Huashan Hospital Affiliated to Fudan University, Shanghai, China
15
Ursuleanu TF, Luca AR, Gheorghe L, Grigorovici R, Iancu S, Hlusneac M, Preda C, Grigorovici A. Deep Learning Application for Analyzing of Constituents and Their Correlations in the Interpretations of Medical Images. Diagnostics (Basel) 2021; 11:1373. [PMID: 34441307] [PMCID: PMC8393354] [DOI: 10.3390/diagnostics11081373]
Abstract
The growing volume of medical data that must be interpreted and filtered for diagnostic and therapeutic purposes competes with the time and attention doctors can give patients, and this has encouraged the development of deep learning (DL) models as constructive and effective support. Deep learning has developed exponentially in recent years, with a major impact on the interpretation of medical images. This has influenced the development, diversification, and quality of scientific data, the development of knowledge-construction methods, and the improvement of DL models used in medical applications. Most research papers focus on describing, highlighting, or classifying a single constituent element of the DL models used in the interpretation of medical images and do not provide a unified picture of the importance and impact of each constituent on model performance. The novelty of our paper lies primarily in its unitary approach to the constituent elements of DL models, namely the data, the tools used by DL architectures, and purpose-built combinations of DL architectures, highlighting their "key" features for completing tasks in current medical image interpretation applications. The use of "key" characteristics specific to each constituent of DL models and the correct determination of their correlations may be the subject of future research aimed at increasing the performance of DL models in the interpretation of medical images.
Affiliation(s)
- Tudor Florin Ursuleanu
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Department of Surgery VI, “Sf. Spiridon” Hospital, 700111 Iasi, Romania
- Department of Surgery I, Regional Institute of Oncology, 700483 Iasi, Romania
- Andreea Roxana Luca
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Department of Obstetrics and Gynecology, Integrated Ambulatory of Hospital “Sf. Spiridon”, 700106 Iasi, Romania
- Liliana Gheorghe
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Department of Radiology, “Sf. Spiridon” Hospital, 700111 Iasi, Romania
- Roxana Grigorovici
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Stefan Iancu
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Maria Hlusneac
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Cristina Preda
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Department of Endocrinology, “Sf. Spiridon” Hospital, 700111 Iasi, Romania
- Alexandru Grigorovici
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Department of Surgery VI, “Sf. Spiridon” Hospital, 700111 Iasi, Romania
16
Makeev A, Toner B, Qian M, Badal A, Glick SJ. Using convolutional neural networks to discriminate between cysts and masses in Monte Carlo-simulated dual-energy mammography. Med Phys 2021; 48:4648-4655. [PMID: 34050965] [DOI: 10.1002/mp.15005]
Abstract
PURPOSE A substantial percentage of recalls (up to 20%) in screening mammography is attributed to extended round lesions. Benign fluid-filled breast cysts often appear similar to solid tumors in conventional mammograms. Spectral imaging (dual-energy or photon-counting mammography) has been shown to discriminate between cysts and solid masses with clinically acceptable accuracy. This work explores the feasibility of using convolutional neural networks (CNNs) for this task. METHODS A series of Monte Carlo experiments was conducted with digital breast phantoms and embedded synthetic lesions to produce realistic dual-energy images of both lesion types. We considered such factors as nonuniform anthropomorphic background, size of the mass, breast compression thickness, and variability in lesion x-ray attenuation. These data then were used to train a deep neural network (ResNet-18) to learn the differences in x-ray attenuation of cysts and masses. RESULTS Our simulation results showed that the CNN-based classifier could reliably discriminate between cystic and solid mass round lesions in dual-energy images with an area under the receiver operating characteristic curve (ROC AUC) of 0.98 or greater. CONCLUSIONS The proposed approach showed promising performance and ease of implementation, and could be applied to novel photon-counting detector-based spectral mammography systems.
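A hedged sketch of such a classifier: a torchvision ResNet-18 adapted to a two-channel input (one channel per energy) with a two-class head. Presenting the dual-energy pair as image channels is our assumption about the input encoding; the paper's exact pipeline may differ.

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=None)   # trained from scratch in this sketch
model.conv1 = nn.Conv2d(2, 64, kernel_size=7, stride=2, padding=3, bias=False)
model.fc = nn.Linear(model.fc.in_features, 2)   # cyst vs. solid mass

dual_energy = torch.randn(4, 2, 224, 224)  # batch of low/high-energy pairs
logits = model(dual_energy)                # shape: (4, 2)
```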
Affiliation(s)
- Andrey Makeev
- Division of Imaging, Diagnostics, and Software Reliability, Office of Scientific and Engineering Laboratories, Center for Devices and Radiological Health, U.S. Food & Drug Administration, Silver Spring, MD 20903, USA
- Brian Toner
- Program in Applied Mathematics, University of Arizona, Tucson, AZ 85721, USA
- Marian Qian
- Thomas Jefferson High School for Science and Technology, Alexandria, VA 22312, USA
- Andreu Badal
- Division of Imaging, Diagnostics, and Software Reliability, Office of Scientific and Engineering Laboratories, Center for Devices and Radiological Health, U.S. Food & Drug Administration, Silver Spring, MD 20903, USA
- Stephen J Glick
- Division of Imaging, Diagnostics, and Software Reliability, Office of Scientific and Engineering Laboratories, Center for Devices and Radiological Health, U.S. Food & Drug Administration, Silver Spring, MD 20903, USA
17
Li H, Ye J, Liu H, Wang Y, Shi B, Chen J, Kong A, Xu Q, Cai J. Application of deep learning in the detection of breast lesions with four different breast densities. Cancer Med 2021; 10:4994-5000. [PMID: 34132495] [PMCID: PMC8290249] [DOI: 10.1002/cam4.4042]
Abstract
Objective: This retrospective study evaluated the model on populations with different breast densities and reports its performance on malignancy prediction. Methods: A total of 608 mammograms were collected from Northern Jiangsu People's Hospital in Yangzhou City; data from this province had not been used in the training or evaluation data sets. The model consists of three submodules: lesion detection (Mask-RCNN), lesion registration between the craniocaudal and mediolateral oblique views, and a malignancy prediction network (ResNet). The data set used to train the model was obtained from nine institutions across six cities; normal cases carried no annotations. We adopted the free-response receiver operating characteristic (FROC) curve as the indicator of detection performance for all cancers and for triple-negative breast cancer (TNBC). FROC curves are also shown for mass/distortion/asymmetry and typical benign calcification in two kinds of populations across the four breast density categories. Results: The sensitivity for mass/distortion/asymmetry in the four breast density categories (A, B, C, D) is 0.94, 0.92, 0.89, and 0.72, respectively, at 0.25 false positives per image, while the corresponding values for amorphous calcification lesions are 1.00, 0.95, 0.92, and 0.90. The sensitivity for cancer is 0.85 at the same false-positive rate. TNBC accounts for about 10%-20% of all breast cancers and is more aggressive, with poorer prognosis, than other breast cancers; at the same false-positive level, Yizhun AI detected 75% of TNBC lesions. Conclusion: The Yizhun AI model has good diagnostic efficiency across breast density types, even for extremely dense breasts, and can guide the clinical diagnosis of breast cancer. Its performance on mass/distortion/asymmetry is affected by breast density significantly more than its performance on amorphous calcification.
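Reading a sensitivity off a FROC evaluation, as done throughout these results, can be sketched as follows: sort detections by confidence, accumulate true and false positives, and report lesion-level sensitivity at the threshold where false positives per image reach the target (0.25 here). The bookkeeping is simplified; real FROC code must also match detections to ground-truth lesions.

```python
import numpy as np

def sensitivity_at_fppi(scores, is_tp, n_images, n_lesions, target_fppi=0.25):
    order = np.argsort(scores)[::-1]          # highest confidence first
    hits = np.asarray(is_tp, dtype=bool)[order]
    tp = np.cumsum(hits)                      # detected lesions so far
    fppi = np.cumsum(~hits) / n_images        # false positives per image
    idx = np.searchsorted(fppi, target_fppi, side="right") - 1
    return tp[idx] / n_lesions if idx >= 0 else 0.0
```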
Affiliation(s)
- Hongmei Li
- Department of Radiology, Subei People's Hospital of Jiangsu Province, Yangzhou, Jiangsu, China
- Jing Ye
- Department of Radiology, Subei People's Hospital of Jiangsu Province, Yangzhou, Jiangsu, China
- Hao Liu
- Yizhun Medical AI, Beijing, China
- Yichuan Wang
- Yizhun Medical AI, Beijing, China; School of Electronics Engineering and Computer Science, Peking University, Beijing, China
- Binbin Shi
- Department of Radiology, Subei People's Hospital of Jiangsu Province, Yangzhou, Jiangsu, China
- Juan Chen
- Department of Radiology, Subei People's Hospital of Jiangsu Province, Yangzhou, Jiangsu, China
- Aiping Kong
- Department of Radiology, Subei People's Hospital of Jiangsu Province, Yangzhou, Jiangsu, China
- Qing Xu
- Department of Radiology, Subei People's Hospital of Jiangsu Province, Yangzhou, Jiangsu, China
- Junhui Cai
- Department of Radiology, Subei People's Hospital of Jiangsu Province, Yangzhou, Jiangsu, China
18
A Review of Explainable Deep Learning Cancer Detection Models in Medical Imaging. Applied Sciences (Basel) 2021. [DOI: 10.3390/app11104573]
Abstract
Deep learning has demonstrated remarkable accuracy in analyzing images for cancer detection tasks in recent years. The accuracy achieved rivals that of radiologists and is suitable for implementation as a clinical tool. However, a significant problem is that these models are black-box algorithms and are therefore intrinsically unexplainable. This creates a barrier to clinical implementation owing to the lack of trust and transparency characteristic of black-box algorithms. Additionally, recent regulations prevent the implementation of unexplainable models in clinical settings, which further demonstrates the need for explainability. To mitigate these concerns, recent studies have attempted to overcome these issues by modifying deep learning architectures or providing after-the-fact explanations. A review of the deep learning explanation literature focused on cancer detection using MR images is presented here. The gap between what clinicians deem explainable and what current methods provide is discussed, and suggestions for closing this gap are provided.
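A representative after-the-fact explanation method of the kind this review covers is Grad-CAM: the gradient of the target class score, averaged per channel, weights the last convolutional feature maps, which are then summed into a heatmap. A minimal PyTorch sketch:

```python
import torch
import torch.nn.functional as F

def grad_cam(model, target_layer, x, class_idx):
    feats, grads = {}, {}
    h1 = target_layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
    h2 = target_layer.register_full_backward_hook(
        lambda m, gi, go: grads.update(a=go[0]))
    score = model(x)[0, class_idx]   # class score for the first sample
    model.zero_grad()
    score.backward()
    h1.remove(); h2.remove()
    weights = grads["a"].mean(dim=(2, 3), keepdim=True)  # per-channel weight
    cam = F.relu((weights * feats["a"]).sum(dim=1))      # weighted sum of maps
    return cam / (cam.max() + 1e-8)                      # normalized heatmap
```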
19
Jiang S, Li H, Jin Z. A Visually Interpretable Deep Learning Framework for Histopathological Image-Based Skin Cancer Diagnosis. IEEE J Biomed Health Inform 2021; 25:1483-1494. [PMID: 33449890] [DOI: 10.1109/jbhi.2021.3052044]
Abstract
Owing to the high incidence rate and the severe impact of skin cancer, the precise diagnosis of malignant skin tumors is a significant goal, especially considering treatment is normally effective if the tumor is detected early. Limited published histopathological image sets and the lack of an intuitive correspondence between the features of lesion areas and a certain type of skin cancer pose a challenge to the establishment of high-quality and interpretable computer-aided diagnostic (CAD) systems. To solve this problem, a light-weight attention mechanism-based deep learning framework, namely, DRANet, is proposed to differentiate 11 types of skin diseases based on a real histopathological image set collected by us during the last 10 years. The CAD system can output not only the name of a certain disease but also a visualized diagnostic report showing possible areas related to the disease. The experimental results demonstrate that the DRANet obtains significantly better performance than baseline models (i.e., InceptionV3, ResNet50, VGG16, and VGG19) with comparable parameter size and competitive accuracy with fewer model parameters. Visualized results produced by the hidden layers of the DRANet actually highlight part of the class-specific regions of diagnostic points and are valuable for decision making in the diagnosis of skin diseases.
20
Shen T, Hao K, Gou C, Wang FY. Mass Image Synthesis in Mammogram with Contextual Information Based on GANs. Comput Methods Programs Biomed 2021; 202:106019. [PMID: 33640650] [DOI: 10.1016/j.cmpb.2021.106019]
Abstract
BACKGROUND AND OBJECTIVE In medical imaging, the scarcity of labeled lesion data has hindered the application of many deep learning algorithms. To overcome this problem, the simulation of diverse lesions in medical images is proposed. However, synthesizing labeled mass images in mammograms is still challenging due to the lack of consistent patterns in shape, margin, and contextual information. Therefore, we aim to generate various labeled medical images based on contextual information in mammograms. METHODS In this paper, we propose a novel approach based on GANs to generate various mass images and then perform contextual infilling by inserting the synthetic lesions into healthy screening mammograms. Through incorporating features of both realistic mass images and corresponding masks into the adversarial learning scheme, the generator can not only learn the distribution of the real mass images but also capture the matching shape, margin and context information. RESULTS To demonstrate the effectiveness of our proposed method, we conduct experiments on publicly available mammogram database of DDSM and a private database provided by Nanfang Hospital in China. Qualitative and quantitative evaluations validate the effectiveness of our approach. Additionally, through the data augmentation by image generation of the proposed method, an improvement of 5.03% in detection rate can be achieved over the same model trained on original real lesion images. CONCLUSIONS The results show that the data augmentation based on our method increases the diversity of dataset. Our method can be viewed as one of the first steps toward generating labeled breast mass images for precise detection and can be extended in other medical imaging domains to solve similar problems.
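The adversarial objective behind such mask-conditioned synthesis can be sketched as follows: the generator produces a lesion image conditioned on a mask, and the discriminator judges (image, mask) pairs, which ties appearance to the mask's shape and margin. The toy networks, sizes, and single loss step below are our own illustrative choices, not the paper's architecture; optimizer updates are omitted.

```python
import torch
import torch.nn as nn

# Toy single-scale generator and discriminator for the sketch.
G = nn.Sequential(nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 1, 3, padding=1), nn.Tanh())
D = nn.Sequential(nn.Conv2d(2, 32, 3, stride=2, padding=1), nn.ReLU(),
                  nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1))

bce = nn.BCEWithLogitsLoss()
mask = (torch.rand(8, 1, 64, 64) > 0.5).float()  # toy lesion masks
real = torch.randn(8, 1, 64, 64)                 # toy real mass patches
noise = torch.randn(8, 1, 64, 64)

fake = G(torch.cat([noise, mask], dim=1))        # mask-conditioned synthesis
# The discriminator sees (image, mask) pairs, tying appearance to shape.
d_loss = bce(D(torch.cat([real, mask], 1)), torch.ones(8, 1)) + \
         bce(D(torch.cat([fake.detach(), mask], 1)), torch.zeros(8, 1))
g_loss = bce(D(torch.cat([fake, mask], 1)), torch.ones(8, 1))
```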
Affiliation(s)
- Tianyu Shen
- Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Kunkun Hao
- School of Software Engineering, Xi'an Jiaotong University, Xi'an, China
- Chao Gou
- School of Intelligent Systems Engineering, Sun Yat-Sen University, Guangzhou, China
- Fei-Yue Wang
- Institute of Automation, Chinese Academy of Sciences, Beijing, China; Qingdao Academy of Intelligent Industries, Qingdao, China; Institute of Systems Engineering, Macau University of Science and Technology, Macau, China
21
Differential diagnosis of ameloblastoma and odontogenic keratocyst by machine learning of panoramic radiographs. Int J Comput Assist Radiol Surg 2021; 16:415-422. [PMID: 33547985] [PMCID: PMC7946691] [DOI: 10.1007/s11548-021-02309-0]
Abstract
Purpose: The differentiation of ameloblastoma from odontogenic keratocyst directly affects the formulation of surgical plans, yet differential diagnosis by imaging alone is not satisfactory. This paper proposes an algorithm based on a convolutional neural network (CNN) structure to significantly improve the classification accuracy of these two tumors. Methods: A total of 420 digital panoramic radiographs from 401 patients were acquired from the Shanghai Ninth People’s Hospital. Each was cropped by radiologists to a patch serving as the region of interest (ROI). Inverse logarithm transformation and histogram equalization were employed to increase the contrast of the ROI. To alleviate overfitting, random rotation and flip transforms were applied to the training dataset as data augmentation. We propose a CNN structure based on a transfer learning algorithm consisting of two parallel branches. The output of the network is a two-dimensional vector representing the predicted scores of ameloblastoma and odontogenic keratocyst, respectively. Results: The proposed network achieved an accuracy of 90.36% (AUC = 0.946), with sensitivity and specificity of 92.88% and 87.80%, respectively. Two other networks, VGG-19 and ResNet-50, and a network trained from scratch were also tested, achieving accuracies of 80.72%, 78.31%, and 69.88%, respectively. Conclusions: The proposed algorithm significantly improves the differential diagnosis accuracy of ameloblastoma and odontogenic keratocyst and can provide a reliable recommendation to oral and maxillofacial specialists before surgery.
22
Wang J, Zhu H, Wang SH, Zhang YD. A Review of Deep Learning on Medical Image Analysis. Mobile Networks and Applications 2021; 26:351-380. [DOI: 10.1007/s11036-020-01672-7]
23
Nogales A, García-Tejedor ÁJ, Monge D, Vara JS, Antón C. A survey of deep learning models in medical therapeutic areas. Artif Intell Med 2021; 112:102020. [PMID: 33581832] [DOI: 10.1016/j.artmed.2021.102020]
Abstract
Artificial intelligence is a broad field comprising a wide range of techniques, of which deep learning currently has the most impact. The medical field, where data are both complex and massive and where the decisions made by doctors carry great weight, is one of the areas in which deep learning techniques can have the greatest impact. A systematic review following the Cochrane recommendations was conducted by a multidisciplinary team of physicians, research methodologists, and computer scientists. This survey aims to identify the main therapeutic areas and the deep learning models used for diagnosis and treatment tasks. The most relevant databases included were MedLine, Embase, Cochrane Central, Astrophysics Data System, Europe PubMed Central, Web of Science, and Science Direct. Inclusion and exclusion criteria were defined and applied in the first and second peer-review screenings, and a set of quality criteria was developed to select the papers obtained after the second screening. Finally, 126 studies were selected from the initial 3493 papers, and 64 were described. The results show that the number of publications on deep learning in medicine is increasing every year, that convolutional neural networks are the most widely used models, and that the most developed area is oncology, where they are used mainly for image analysis.
Affiliation(s)
- Alberto Nogales
- CEIEC, Research Institute, Universidad Francisco de Vitoria, Ctra. M-515 Pozuelo-Majadahonda km 1800, 28223 Pozuelo de Alarcón, Spain
- Álvaro J García-Tejedor
- CEIEC, Research Institute, Universidad Francisco de Vitoria, Ctra. M-515 Pozuelo-Majadahonda km 1800, 28223 Pozuelo de Alarcón, Spain
- Diana Monge
- Faculty of Medicine, Research Institute, Universidad Francisco de Vitoria, Ctra. M-515 Pozuelo-Majadahonda km 1800, 28223 Pozuelo de Alarcón, Spain
- Juan Serrano Vara
- CEIEC, Research Institute, Universidad Francisco de Vitoria, Ctra. M-515 Pozuelo-Majadahonda km 1800, 28223 Pozuelo de Alarcón, Spain
- Cristina Antón
- Faculty of Medicine, Research Institute, Universidad Francisco de Vitoria, Ctra. M-515 Pozuelo-Majadahonda km 1800, 28223 Pozuelo de Alarcón, Spain
24
Deep-Learning-Based Computer-Aided Systems for Breast Cancer Imaging: A Critical Review. Applied Sciences (Basel) 2020. [DOI: 10.3390/app10228298]
Abstract
This paper provides a critical review of the literature on deep learning applications in breast tumor diagnosis using ultrasound and mammography images. It also summarizes recent advances in computer-aided diagnosis/detection (CAD) systems, which make use of new deep learning methods to automatically recognize breast images and improve the accuracy of diagnoses made by radiologists. This review is based upon published literature in the past decade (January 2010–January 2020), where we obtained around 250 research articles, and after an eligibility process, 59 articles were presented in more detail. The main findings in the classification process revealed that new DL-CAD methods are useful and effective screening tools for breast cancer, thus reducing the need for manual feature extraction. The breast tumor research community can utilize this survey as a basis for their current and future studies.
25
Lee J, Nishikawa RM. Cross-organ, cross-modality transfer learning: feasibility study for segmentation and classification. IEEE Access 2020; 8:210194-210205. [PMID: 33680628] [PMCID: PMC7935042] [DOI: 10.1109/access.2020.3038909]
Abstract
We conducted two analyses comparing the transferability of a traditionally transfer-learned CNN (TL) with that of a CNN first fine-tuned on an unrelated set of medical images (mammograms in this study) and then fine-tuned a second time using TL, which we call the cross-organ, cross-modality transfer-learned (XTL) network. The two target tasks were 1) multiple sclerosis (MS) segmentation of brain magnetic resonance (MR) images and 2) tumor malignancy classification of multi-parametric prostate MR images. We used 2133 screening mammograms and two public challenge datasets (longitudinal MS lesion segmentation and ProstateX) as the intermediate and target datasets for XTL, respectively. We used two CNN architectures as basis networks for each analysis and fine-tuned them to match the target image types (volumetric) and tasks (segmentation and classification). We evaluated the XTL networks against the traditional TL networks using the Dice coefficient and AUC as figures of merit for the two analyses, respectively. For the segmentation test, XTL networks outperformed TL networks in terms of Dice coefficient (0.72 vs. [0.70-0.71], p < 0.0001 for the differences). For the classification test, XTL networks (AUC = 0.77-0.80) outperformed TL networks (AUC = 0.73-0.75), and the difference in AUC (0.045-0.047) was statistically significant (p < 0.03). We showed that XTL using mammograms improves network performance over traditional TL, despite the difference in image characteristics (x-ray vs. MRI, 2D vs. 3D) and imaging tasks (classification vs. segmentation for one of the tasks).
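Procedurally, XTL is two fine-tuning passes in sequence. Below is a schematic PyTorch sketch under our own simplifications (a 2D ResNet-18 rather than the paper's volumetric networks, placeholder datasets, and a stubbed training loop):

```python
import torch.nn as nn
from torchvision import models

def train(model, dataset, epochs):
    """Placeholder for an ordinary supervised training loop."""
    ...

mammogram_dataset = ...    # hypothetical intermediate dataset (stage 1)
target_dataset = ...       # hypothetical target-task dataset (stage 2)

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)

train(model, mammogram_dataset, epochs=20)  # stage 1: cross-organ fine-tune
train(model, target_dataset, epochs=20)     # stage 2: fine-tune on target task
```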
Affiliation(s)
- Juhun Lee
- Department of Radiology, University of Pittsburgh, Pittsburgh, PA 15213, USA
26
Chen M, Li H, Wang J, Yuan W, Altaye M, Parikh NA, He L. Early Prediction of Cognitive Deficit in Very Preterm Infants Using Brain Structural Connectome With Transfer Learning Enhanced Deep Convolutional Neural Networks. Front Neurosci 2020; 14:858. [PMID: 33041749] [PMCID: PMC7530168] [DOI: 10.3389/fnins.2020.00858]
Abstract
Up to 40% of very preterm infants (≤32 weeks' gestational age) are identified with a cognitive deficit at 2 years of age, yet an accurate clinical diagnosis of cognitive deficit cannot be made until early childhood, around 3-5 years of age. Recently, the brain structural connectome, constructed with the advanced diffusion tensor imaging (DTI) technique, has played an important role in understanding human cognitive functions. However, annotated neuroimaging datasets with clinical and outcome information are usually limited and expensive to enlarge in studies of very preterm infants. These challenges hinder the development of neonatal prognostic tools for early prediction of cognitive deficit in very preterm infants. In this study, we treated the brain structural connectome as a 2D image and applied established deep convolutional neural networks to learn its spatial and topological information. Furthermore, transfer learning was utilized to mitigate the issue of insufficient training data. As such, we developed a transfer learning enhanced convolutional neural network (TL-CNN) model for early prediction of cognitive assessment at 2 years of age in very preterm infants using the brain structural connectome. A total of 110 very preterm infants were enrolled in this work. The brain structural connectome was constructed using DTI images scanned at term-equivalent age, and Bayley III cognitive assessments were conducted at 2 years of corrected age. We applied the proposed model to both cognitive deficit classification and continuous cognitive score prediction tasks. The results demonstrated that TL-CNN achieved improved performance compared to multiple peer models. Finally, we identified the brain regions most discriminative of cognitive deficit. The results suggest that deep learning models may facilitate early prediction of later neurodevelopmental outcomes in very preterm infants at term-equivalent age.
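The core trick of TL-CNN, treating a region-by-region connectivity matrix as a 2D image so a pretrained CNN can be reused, can be sketched in a few lines. The 90-region matrix below is random stand-in data, not DTI output, and the backbone choice is an assumption.
```python
# Treat a structural connectome (region x region matrix) as an image for a pretrained CNN.
import torch
import torch.nn.functional as F
from torchvision import models

connectome = torch.rand(90, 90)                  # placeholder connectivity matrix
x = connectome.unsqueeze(0).unsqueeze(0)         # -> (1, 1, 90, 90)
x = F.interpolate(x, size=(224, 224), mode="bilinear", align_corners=False)
x = x.repeat(1, 3, 1, 1)                         # fake 3 channels for an RGB backbone

net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
net.fc = torch.nn.Linear(net.fc.in_features, 2)  # deficit vs. no deficit
logits = net(x)                                  # fine-tuning would update these weights
print(logits.shape)                              # torch.Size([1, 2])
```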
Affiliation(s)
- Ming Chen
- The Perinatal Institute, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States; Department of Electronic Engineering and Computing Systems, University of Cincinnati, Cincinnati, OH, United States
- Hailong Li
- The Perinatal Institute, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States
- Jinghua Wang
- Department of Radiology, University of Cincinnati College of Medicine, Cincinnati, OH, United States
- Weihong Yuan
- Department of Radiology, University of Cincinnati College of Medicine, Cincinnati, OH, United States; Department of Radiology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States
- Mekbib Altaye
- Division of Biostatistics and Epidemiology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States; Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, United States
- Nehal A Parikh
- The Perinatal Institute, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States; Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, United States
- Lili He
- The Perinatal Institute, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States; Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, United States
|
27
|
He L, Li H, Wang J, Chen M, Gozdas E, Dillman JR, Parikh NA. A multi-task, multi-stage deep transfer learning model for early prediction of neurodevelopment in very preterm infants. Sci Rep 2020; 10:15072. [PMID: 32934282 PMCID: PMC7492237 DOI: 10.1038/s41598-020-71914-x] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2020] [Accepted: 08/21/2020] [Indexed: 12/21/2022] Open
Abstract
Survivors following very premature birth (i.e., ≤ 32 weeks gestational age) remain at high risk for neurodevelopmental impairments. Recent advances in deep learning techniques have made it possible to aid the early diagnosis and prognosis of neurodevelopmental deficits. Deep learning models typically require training on large datasets, and unfortunately, large neuroimaging datasets with clinical outcome annotations are limited, especially in neonates. Transfer learning represents an important step toward solving the fundamental problem of insufficient training data in deep learning. In this work, we developed a multi-task, multi-stage deep transfer learning framework using the fusion of brain connectome and clinical data for early joint prediction of multiple abnormal neurodevelopmental (cognitive, language, and motor) outcomes at 2 years corrected age in very preterm infants. The proposed framework maximizes the value of both available annotated and non-annotated data in model training by performing both supervised and unsupervised learning. We first pre-trained a deep neural network prototype in a supervised fashion using 884 older children and adult subjects, and then re-trained this prototype using 291 neonatal subjects without supervision. Finally, we fine-tuned and validated the pre-trained model using 33 preterm infants. Our proposed model identified very preterm infants at high risk for cognitive, language, and motor deficits at 2 years corrected age with an area under the receiver operating characteristic curve of 0.86, 0.66 and 0.84, respectively. Such a deep learning model, once externally validated, may facilitate risk stratification at term-equivalent age for early identification of long-term neurodevelopmental deficits and targeted early interventions to improve clinical outcomes in very preterm infants.
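The multi-task element, one shared trunk with separate heads for the cognitive, language, and motor outcomes, can be illustrated as below. The layer sizes are arbitrary, and the paper's pre-training and unsupervised stages are omitted.
```python
# Shared-trunk, three-head multi-task classifier; sizes are illustrative only.
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, in_features=256, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_features, hidden), nn.ReLU())
        self.heads = nn.ModuleDict({
            t: nn.Linear(hidden, 2) for t in ("cognitive", "language", "motor")
        })

    def forward(self, x):
        z = self.trunk(x)                          # shared representation
        return {t: head(z) for t, head in self.heads.items()}

net = MultiTaskNet()
out = net(torch.rand(4, 256))                      # fake fused connectome+clinical features
# Joint training loss: sum of per-task cross-entropies over the three outcomes.
loss = sum(nn.functional.cross_entropy(out[t], torch.randint(0, 2, (4,))) for t in out)
```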
Affiliation(s)
- Lili He
- The Perinatal Institute and Section of Neonatology, Perinatal and Pulmonary Biology, Cincinnati Children's Hospital Medical Center, 3333 Burnet Avenue, MLC 7009, Cincinnati, OH, 45229, USA
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Imaging Research Center, Cincinnati Children's Hospital Medical Center, 3333 Burnet Avenue, MLC 7009, Cincinnati, OH, 45229, USA
- Hailong Li
- The Perinatal Institute and Section of Neonatology, Perinatal and Pulmonary Biology, Cincinnati Children's Hospital Medical Center, 3333 Burnet Avenue, MLC 7009, Cincinnati, OH, 45229, USA
- Imaging Research Center, Cincinnati Children's Hospital Medical Center, 3333 Burnet Avenue, MLC 7009, Cincinnati, OH, 45229, USA
- Jinghua Wang
- Department of Radiology, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Ming Chen
- The Perinatal Institute and Section of Neonatology, Perinatal and Pulmonary Biology, Cincinnati Children's Hospital Medical Center, 3333 Burnet Avenue, MLC 7009, Cincinnati, OH, 45229, USA
- Department of Electronic Engineering and Computing Systems, University of Cincinnati, Cincinnati, OH, USA
- Imaging Research Center, Cincinnati Children's Hospital Medical Center, 3333 Burnet Avenue, MLC 7009, Cincinnati, OH, 45229, USA
- Elveda Gozdas
- Imaging Research Center, Cincinnati Children's Hospital Medical Center, 3333 Burnet Avenue, MLC 7009, Cincinnati, OH, 45229, USA
- Jonathan R Dillman
- Department of Radiology, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Imaging Research Center, Cincinnati Children's Hospital Medical Center, 3333 Burnet Avenue, MLC 7009, Cincinnati, OH, 45229, USA
- Department of Radiology, Cincinnati Children's Hospital Medical Center, 3333 Burnet Avenue, MLC 7009, Cincinnati, OH, 45229, USA
- Nehal A Parikh
- The Perinatal Institute and Section of Neonatology, Perinatal and Pulmonary Biology, Cincinnati Children's Hospital Medical Center, 3333 Burnet Avenue, MLC 7009, Cincinnati, OH, 45229, USA
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
|
28
|
Tajaldeen A, Alghamdi S. Evaluation of radiologist's knowledge about the Artificial Intelligence in diagnostic radiology: a survey-based study. Acta Radiol Open 2020; 9:2058460120945320. [PMID: 32821436 PMCID: PMC7412626 DOI: 10.1177/2058460120945320] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2020] [Accepted: 07/02/2020] [Indexed: 12/31/2022] Open
Abstract
Background Advanced developments in diagnostic radiology have driven a rapid increase in the number of radiological investigations worldwide. Recently, Artificial Intelligence (AI) has been applied in diagnostic radiology. The purpose of developing such applications is to clinically validate them and make them feasible for the current practice of diagnostic radiology, in which time for diagnosis is limited. Purpose To assess radiologists' knowledge about AI's role and establish a baseline to help in providing educational activities on AI in diagnostic radiology in Saudi Arabia. Material and Methods An online questionnaire was designed using QuestionPro software. The study was conducted in large hospitals located in different regions in Saudi Arabia. A total of 93 participants completed the questionnaire, of whom 32 (34%) were trainee radiologists from year 1 to year 4 (R1–R4) of the residency programme, 33 (36%) were radiologists and fellows, and 28 (30%) were consultants. Results The responses to the question related to the use of AI on a daily basis illustrated that 76 (82%) of the participants were not using any AI software at all during daily interpretation of diagnostic images. Only 17 (18%) reported that they used AI software for diagnostic radiology. Conclusion There is a significant lack of knowledge about AI in our residency programme and radiology departments at hospitals. Due to the rapid development of AI and its application in diagnostic radiology, there is an urgent need to enhance awareness about its role in different diagnostic fields.
Affiliation(s)
- Abdulrahman Tajaldeen
- Radiological Science Department, College of Applied Medical Science, Imam Abdulrahman Bin Faisal University, Dammam, Saudi Arabia
- Salem Alghamdi
- Department of Medical Imaging and Radiation Sciences, College of Applied Medical Sciences, University of Jeddah, Jeddah, Saudi Arabia
|
29
|
Martín Noguerol T, Paulano-Godino F, Martín-Valdivia MT, Menias CO, Luna A. Strengths, Weaknesses, Opportunities, and Threats Analysis of Artificial Intelligence and Machine Learning Applications in Radiology. J Am Coll Radiol 2020; 16:1239-1247. [PMID: 31492401 DOI: 10.1016/j.jacr.2019.05.047] [Citation(s) in RCA: 80] [Impact Index Per Article: 20.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2019] [Revised: 05/26/2019] [Accepted: 05/29/2019] [Indexed: 12/13/2022]
Abstract
Currently, the use of artificial intelligence (AI) in radiology, particularly machine learning (ML), has become a reality in clinical practice. Since the end of the last century, several ML algorithms have been introduced for a wide range of common imaging tasks, not only for diagnostic purposes but also for image acquisition and postprocessing. AI is now recognized to be a driving initiative in every aspect of radiology. There is growing evidence of the advantages of AI in radiology, whether in creating seamless imaging workflows for radiologists or even in replacing them. Most current AI methods have internal and external disadvantages that are impeding their ultimate implementation in the clinical arena. In this sense, AI can be viewed as a product seeking introduction into the health care market. For this reason, this review analyzes the current status of AI, and specifically ML, applied to radiology from the perspective of a strengths, weaknesses, opportunities, and threats (SWOT) analysis.
Affiliation(s)
- María Teresa Martín-Valdivia
- SINAI Research Group, Computer Science Department, Advanced Studies Center in ICT (CEATIC), Universidad de Jaén, Jaén, Spain
- Antonio Luna
- MRI Unit, Radiology Department, Health Time, Jaén, Spain
|
30
|
Yang W, Shi Y, Park SH, Yang M, Gao Y, Shen D. An Effective MR-Guided CT Network Training for Segmenting Prostate in CT Images. IEEE J Biomed Health Inform 2020; 24:2278-2291. [DOI: 10.1109/jbhi.2019.2960153] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
31
|
Wong DJ, Gandomkar Z, Wu W, Zhang G, Gao W, He X, Wang Y, Reed W. Artificial intelligence and convolution neural networks assessing mammographic images: a narrative literature review. J Med Radiat Sci 2020; 67:134-142. [PMID: 32134206 PMCID: PMC7276180 DOI: 10.1002/jmrs.385] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2019] [Revised: 01/18/2020] [Accepted: 02/11/2020] [Indexed: 11/06/2022] Open
Abstract
Studies have shown that the use of artificial intelligence can reduce errors in medical image assessment. The diagnosis of breast cancer is an essential task; however, diagnosis can include 'detection' and 'interpretation' errors. Studies to reduce these errors have shown the feasibility of using convolution neural networks (CNNs). This narrative review presents recent studies that investigate the accuracy and reliability of CNNs in diagnosing mammographic malignancy. Databases including ScienceDirect, PubMed, MEDLINE, British Medical Journal and Medscape were searched using the terms 'convolutional neural network or artificial intelligence', 'breast neoplasms [MeSH] or breast cancer or breast carcinoma' and 'mammography [MeSH Terms]'. Collected articles were screened against inclusion and exclusion criteria accounting for publication date and exclusive use of mammography images, and only literature in English was included. After extracting data, results were compared and discussed. This review included 33 studies and identified four recurring categories of study: the differentiation of benign and malignant masses; the localisation of masses; the differentiation of cancer-containing and cancer-free breast tissue; and breast classification based on breast density. The application of CNNs in detecting malignancy in mammography appears promising but requires further standardised investigation before potentially becoming an integral part of the diagnostic routine in mammography.
Affiliation(s)
- Dennis Jay Wong, Ziba Gandomkar, Wan-Jing Wu, Guijing Zhang, Wushuang Gao, Xiaoying He, Yunuo Wang, Warren Reed
- Discipline of Medical Imaging Sciences, The University of Sydney, Lidcombe, New South Wales, Australia
|
32
|
Acharya J, Basu A. Deep Neural Network for Respiratory Sound Classification in Wearable Devices Enabled by Patient Specific Model Tuning. IEEE Trans Biomed Circuits Syst 2020; 14:535-544. [PMID: 32191898 DOI: 10.1109/tbcas.2020.2981172] [Citation(s) in RCA: 22] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
The primary objective of this paper is to build classification models and strategies to identify breathing sound anomalies (wheeze, crackle) for automated diagnosis of respiratory and pulmonary diseases. In this work we propose a deep CNN-RNN model that classifies respiratory sounds based on Mel-spectrograms. We also implement a patient-specific model tuning strategy that first screens respiratory patients and then builds patient-specific classification models using limited patient data for reliable anomaly detection. Moreover, we devise a local log quantization strategy for model weights to reduce the memory footprint for deployment in memory-constrained systems such as wearable devices. The proposed hybrid CNN-RNN model achieves a score of [Formula: see text] on four-class classification of breathing cycles for the ICBHI'17 scientific challenge respiratory sound database. When the model is re-trained with patient-specific data, it produces a score of [Formula: see text] for leave-one-out validation. The proposed weight quantization technique achieves ≈ 4 × reduction in total memory cost without loss of performance. The main contributions of the paper are as follows: Firstly, the proposed model is able to achieve a state-of-the-art score on the ICBHI'17 dataset. Secondly, deep learning models are shown to successfully learn domain-specific knowledge when pre-trained with breathing data and produce significantly superior performance compared to generalized models. Finally, local log quantization of trained weights is shown to reduce the memory requirement significantly. This type of patient-specific re-training strategy can be very useful in developing reliable long-term automated patient monitoring systems, particularly in wearable healthcare solutions.
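The weight-quantization idea can be illustrated with a simple log-domain scheme: each weight is snapped to a signed power of two, so it can be stored as a sign bit plus a small exponent. The exponent range below is an assumption for the sketch, not the paper's exact scheme.
```python
# Log-domain weight quantization sketch: weights become signed powers of two.
import torch

def log_quantize(w: torch.Tensor, min_exp: int = -8, max_exp: int = 0) -> torch.Tensor:
    sign = torch.sign(w)                                   # zero weights stay zero
    exp = torch.round(torch.log2(w.abs().clamp(min=2.0 ** min_exp)))
    exp = exp.clamp(min_exp, max_exp)                      # assumed exponent range
    return sign * torch.pow(2.0, exp)

w = torch.randn(5) * 0.1
print(w)
print(log_quantize(w))   # every nonzero entry is now +/- 2^k for integer k
```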
|
33
|
Chan HP, Samala RK, Hadjiiski LM. CAD and AI for breast cancer-recent development and challenges. Br J Radiol 2020; 93:20190580. [PMID: 31742424 PMCID: PMC7362917 DOI: 10.1259/bjr.20190580] [Citation(s) in RCA: 80] [Impact Index Per Article: 20.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2019] [Revised: 11/13/2019] [Accepted: 11/17/2019] [Indexed: 12/15/2022] Open
Abstract
Computer-aided diagnosis (CAD) has been a popular area of research and development in the past few decades. In CAD, machine learning methods and multidisciplinary knowledge and techniques are used to analyze patient information, and the results can be used to assist clinicians in their decision-making process. CAD may analyze imaging information alone or in combination with other clinical data. It may provide the analyzed information directly to the clinician or correlate the analyzed results with the likelihood of certain diseases based on statistical modeling of past cases in the population. CAD systems can be developed to provide decision support for many applications in patient care, such as lesion detection, characterization, cancer staging, treatment planning and response assessment, and recurrence and prognosis prediction. The new state-of-the-art machine learning technique, known as deep learning (DL), has revolutionized speech and text recognition as well as computer vision. The potential for a major breakthrough by DL in medical image analysis and other CAD applications for patient care has brought about unprecedented excitement about applying CAD, or artificial intelligence (AI), to medicine in general and to radiology in particular. In this paper, we provide an overview of recent developments in CAD using DL in breast imaging and discuss some challenges and practical issues that may impact the advancement of artificial intelligence and its integration into clinical workflow.
Affiliation(s)
- Heang-Ping Chan
- Department of Radiology, University of Michigan, Ann Arbor, MI, United States
- Ravi K. Samala
- Department of Radiology, University of Michigan, Ann Arbor, MI, United States
|
34
|
Tanaka H, Chiu SW, Watanabe T, Kaoku S, Yamaguchi T. Computer-aided diagnosis system for breast ultrasound images using deep learning. Phys Med Biol 2019; 64:235013. [PMID: 31645021 DOI: 10.1088/1361-6560/ab5093] [Citation(s) in RCA: 57] [Impact Index Per Article: 11.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/20/2022]
Abstract
The purpose of this study was to develop a computer-aided diagnosis (CAD) system for the classification of malignant and benign masses in the breast using ultrasonography based on a convolutional neural network (CNN), a state-of-the-art deep learning technique. We explored the regions responsible for correct classification by generating a heat map presenting the important regions used by the CNN for malignant/benign classification. Clinical data were obtained from a large-scale clinical trial previously conducted by the Japan Association of Breast and Thyroid Sonology. Images of 1536 breast masses (897 malignant and 639 benign) confirmed by pathological examinations were collected, with each breast mass captured from various angles using an ultrasound (US) imaging probe. We constructed an ensemble network by combining two CNN models (VGG19 and ResNet152) fine-tuned on balanced training data with augmentation, and used a mass-level classification method to enable the CNN to classify a given mass using all views. For an independent test set consisting of 154 masses (77 malignant and 77 benign), our network showed outstanding classification performance, with a sensitivity of 90.9% (95% confidence interval 84.5-97.3), a specificity of 87.0% (79.5-94.5), and an area under the curve (AUC) of 0.951 (0.916-0.987), compared with either of the two individual CNN models. In addition, our study indicated that the breast masses themselves were not detected by the CNN as the important regions for correct mass classification. Collectively, this CNN-based CAD system is expected to assist doctors by improving the diagnosis of breast cancer in clinical practice.
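The ensemble-plus-mass-level strategy can be sketched as follows: per-view malignancy probabilities from two CNNs are averaged, then averaged again over all views of the same mass. The models below are untrained placeholders; in the study both networks were fine-tuned on clinical data, and preprocessing is omitted here.
```python
# Ensemble of two CNNs with mass-level (multi-view) aggregation; untrained placeholders.
import torch
import torch.nn as nn
from torchvision import models

vgg = models.vgg19(weights=None)
vgg.classifier[6] = nn.Linear(vgg.classifier[6].in_features, 2)
resnet = models.resnet152(weights=None)
resnet.fc = nn.Linear(resnet.fc.in_features, 2)
vgg.eval(); resnet.eval()

@torch.no_grad()
def mass_probability(views: torch.Tensor) -> float:
    """views: (n_views, 3, 224, 224) crops of one mass from several angles."""
    p1 = torch.softmax(vgg(views), dim=1)[:, 1]     # per-view P(malignant), model 1
    p2 = torch.softmax(resnet(views), dim=1)[:, 1]  # per-view P(malignant), model 2
    per_view = (p1 + p2) / 2                        # ensemble: average the two CNNs
    return per_view.mean().item()                   # mass level: average over views

print(mass_probability(torch.rand(4, 3, 224, 224)))  # random stand-in views
```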
Affiliation(s)
- Hiroki Tanaka
- Division of Biostatistics, Tohoku University Graduate School of Medicine, Sendai, Miyagi, Japan (corresponding author)
|
35
|
Leite CDC. Artificial intelligence, radiology, precision medicine, and personalized medicine. Radiol Bras 2019; 52:VII-VIII. [PMID: 32047342 PMCID: PMC7007059 DOI: 10.1590/0100-3984.2019.52.6e2] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/22/2023] Open
Affiliation(s)
- Claudia da Costa Leite
- Department of Radiology and Oncology of Faculdade de Medicina da Universidade de São Paulo (FMUSP), Laboratório Fleury, and Hospital Sírio-Libanês, São Paulo, SP, Brazil
|
36
|
Shen T, Gou C, Wang FY, He Z, Chen W. Learning from adversarial medical images for X-ray breast mass segmentation. Comput Methods Programs Biomed 2019; 180:105012. [PMID: 31421601 DOI: 10.1016/j.cmpb.2019.105012] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/29/2019] [Revised: 07/06/2019] [Accepted: 08/03/2019] [Indexed: 06/10/2023]
Abstract
BACKGROUND AND OBJECTIVE Simulation of diverse lesions in images has been proposed and applied to overcome the scarcity of labeled data, which has hindered the application of deep learning in medical imaging. However, most current studies focus on generating samples with class labels for classification and detection rather than segmentation, because generating images with precise masks remains a challenge. Therefore, we aim to generate realistic medical images with precise masks for improving lesion segmentation in mammograms. METHODS In this paper, we propose a new framework for improving X-ray breast mass segmentation performance, aided by generated adversarial lesion images with precise masks. Firstly, we introduce a conditional generative adversarial network (cGAN) to learn the distribution of real mass images as well as a mapping between images and corresponding segmentation masks. Subsequently, a number of lesion images are generated from various binary input masks using the generator of the trained cGAN. The generated adversarial samples are then concatenated with the original samples to produce a dataset with increased diversity. Furthermore, we introduce an improved U-net and train it on this augmented dataset for breast mass segmentation. RESULTS To demonstrate the effectiveness of our proposed method, we conduct experiments on the publicly available INbreast mammogram database and a private database provided by Nanfang Hospital in China. Experimental results show that an improvement of up to 7% in the Jaccard index can be achieved over the same model trained on the original real lesion images. CONCLUSIONS Our proposed method can be viewed as one of the first steps toward generating realistic X-ray breast mass images with masks for precise segmentation.
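A minimal sketch of the augmentation loop follows, assuming a cGAN generator `G` has already been trained to map binary lesion masks to realistic mass patches. `G` and the real image tensors are hypothetical placeholders; the Jaccard index used for evaluation is included at the end.
```python
# cGAN-based augmentation sketch: synthesize (image, mask) pairs, then train a U-net.
import torch

def augment_with_cgan(G, real_images, real_masks, n_synthetic=100):
    # Draw binary input masks; random thresholded noise keeps the sketch short
    # (drawn ellipses would be a more realistic choice of input mask).
    synth_masks = (torch.rand(n_synthetic, 1, 256, 256) > 0.98).float()
    with torch.no_grad():
        synth_images = G(synth_masks)               # generated adversarial samples
    images = torch.cat([real_images, synth_images]) # original + synthetic
    masks = torch.cat([real_masks, synth_masks])
    return images, masks                            # train the U-net on these pairs

def jaccard(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-7) -> float:
    # Intersection-over-union on binary masks, the figure of merit reported above.
    inter = (pred.bool() & target.bool()).sum().item()
    union = (pred.bool() | target.bool()).sum().item()
    return inter / (union + eps)
```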
Affiliation(s)
- Tianyu Shen
- Institute of Automation, Chinese Academy of Sciences, Zhongguancun East Road 95, Beijing 100190, China; Qingdao Academy of Intelligent Industries, Zhilidao Road 1, Qingdao 266000, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Chao Gou
- School of Intelligent Systems Engineering, Sun Yat-sen University, Guangzhou 510275, China
- Fei-Yue Wang
- Institute of Automation, Chinese Academy of Sciences, Zhongguancun East Road 95, Beijing 100190, China; Qingdao Academy of Intelligent Industries, Zhilidao Road 1, Qingdao 266000, China
- Zilong He
- Department of Radiology, Nanfang Hospital, Southern Medical University, Guangzhou 510515, China
- Weiguo Chen
- Department of Radiology, Nanfang Hospital, Southern Medical University, Guangzhou 510515, China
|
37
|
|
38
|
Abdelhafiz D, Yang C, Ammar R, Nabavi S. Deep convolutional neural networks for mammography: advances, challenges and applications. BMC Bioinformatics 2019; 20:281. [PMID: 31167642 PMCID: PMC6551243 DOI: 10.1186/s12859-019-2823-4] [Citation(s) in RCA: 61] [Impact Index Per Article: 12.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023] Open
Abstract
BACKGROUND The limitations of traditional computer-aided detection (CAD) systems for mammography, the extreme importance of early detection of breast cancer, and the high impact of false diagnoses drive researchers to investigate deep learning (DL) methods for mammograms (MGs). Recent breakthroughs in DL, in particular convolutional neural networks (CNNs), have achieved remarkable advances in the medical field. Specifically, CNNs are used in mammography for lesion localization and detection, risk assessment, image retrieval, and classification tasks. CNNs also help radiologists provide more accurate diagnoses by delivering precise quantitative analysis of suspicious lesions. RESULTS In this survey, we conducted a detailed review of the strengths, limitations, and performance of the most recent CNN applications in analyzing MG images. It summarizes 83 research studies applying CNNs to various tasks in mammography, and focuses on finding the best practices used in these studies to improve diagnostic accuracy. This survey also provides deep insight into the architecture of the CNNs used for various tasks. Furthermore, it describes the most common publicly available MG repositories and highlights their main features and strengths. CONCLUSIONS The mammography research community can utilize this survey as a basis for their current and future studies. The given comparison among common publicly available MG repositories guides the community to select the most appropriate database for their application(s). Moreover, this survey lists the best practices that improve the performance of CNNs, including the pre-processing of images and the use of multi-view images. In addition, other listed techniques like transfer learning (TL), data augmentation, batch normalization, and dropout are appealing solutions to reduce overfitting and increase the generalization of CNN models. Finally, this survey identifies the research challenges and directions that require further investigation by the community.
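The overfitting-reduction practices the survey lists (transfer learning, data augmentation, batch normalization, dropout) can all be combined in one small classifier, sketched below. This is an illustrative recipe under assumed sizes, not a model from any of the 83 reviewed studies.
```python
# One classifier combining the survey's listed best practices; sizes are illustrative.
import torch.nn as nn
from torchvision import models, transforms

augment = transforms.Compose([          # data augmentation for training patches
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
    transforms.ToTensor(),
])

net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # transfer learning
net.fc = nn.Sequential(                 # new task-specific head
    nn.Linear(net.fc.in_features, 128),
    nn.BatchNorm1d(128),                # batch normalization
    nn.ReLU(),
    nn.Dropout(p=0.5),                  # dropout
    nn.Linear(128, 2),                  # benign vs. malignant
)
```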
Affiliation(s)
- Dina Abdelhafiz
- Department of Computer Science and Engineering, University of Connecticut, Storrs, CT 06269, USA
- The Informatics Research Institute (IRI), City of Scientific Research and Technological Application (SRTA-City), New Borg El-Arab, Egypt
- Clifford Yang
- Department of Diagnostic Imaging, University of Connecticut Health Center, Farmington, CT 06030, USA
- Reda Ammar
- Department of Computer Science and Engineering, University of Connecticut, Storrs, CT 06269, USA
- Sheida Nabavi
- Department of Computer Science and Engineering, University of Connecticut, Storrs, CT 06269, USA
|
39
|
Houssami N, Kirkpatrick-Jones G, Noguchi N, Lee CI. Artificial Intelligence (AI) for the early detection of breast cancer: a scoping review to assess AI's potential in breast screening practice. Expert Rev Med Devices 2019; 16:351-362. [PMID: 30999781 DOI: 10.1080/17434440.2019.1610387] [Citation(s) in RCA: 43] [Impact Index Per Article: 8.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/11/2022]
Abstract
INTRODUCTION Various factors are driving interest in the application of artificial intelligence (AI) for breast cancer (BC) detection, but it is unclear whether the evidence warrants large-scale use in population-based screening. AREAS COVERED We performed a scoping review, a structured evidence synthesis describing a broad research field, to summarize knowledge on AI evaluated for BC detection and to assess AI's readiness for adoption in BC screening. Studies were predominantly small retrospective studies based on highly selected image datasets that contained a high proportion of cancers (median BC proportion in datasets 26.5%), and used heterogeneous techniques to develop AI models; the range of estimated AUC (area under the ROC curve) for AI models was 69.2-97.8% (median AUC 88.2%). We identified various methodological limitations, including the use of non-representative imaging data for model training, limited validation in external datasets, potential bias in training data, and few comparative data for AI versus radiologists' interpretation of mammography screening. EXPERT OPINION Although contemporary AI models have reported generally good accuracy for BC detection, methodological concerns and evidence gaps exist that limit translation into clinical BC screening settings. These should be addressed in parallel with advancing AI techniques to render AI transferable to large-scale population-based screening.
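The figure of merit quoted throughout these studies, the area under the ROC curve, is straightforward to compute. The sketch below uses scikit-learn on made-up scores, not data from the review.
```python
# AUC of the ROC curve on illustrative (made-up) labels and model scores.
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1, 0, 1, 0, 1]                     # 1 = cancer, 0 = no cancer
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.5, 0.7]   # model malignancy scores
print(f"AUC = {roc_auc_score(y_true, y_score):.3f}")
```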
Affiliation(s)
- Nehmat Houssami
- The University of Sydney, Faculty of Medicine and Health, Sydney School of Public Health (A27), Sydney, Australia
- Georgia Kirkpatrick-Jones
- The University of Sydney, Faculty of Medicine and Health, Sydney School of Public Health (A27), Sydney, Australia
- Naomi Noguchi
- The University of Sydney, Faculty of Medicine and Health, Sydney School of Public Health (A27), Sydney, Australia
- Christoph I Lee
- Department of Radiology, University of Washington School of Medicine, Seattle, WA, USA; Department of Health Services, University of Washington School of Public Health, Seattle, WA, USA; Hutchinson Institute for Cancer Outcomes Research, Seattle, WA, USA
|
40
|
Brunetti A, Carnimeo L, Trotta GF, Bevilacqua V. Computer-assisted frameworks for classification of liver, breast and blood neoplasias via neural networks: A survey based on medical images. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2018.06.080] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/08/2023]
|
41
|
Zhang YD, Dong Z, Chen X, Jia W, Du S, Muhammad K, Wang SH. Image based fruit category classification by 13-layer deep convolutional neural network and data augmentation. Multimed Tools Appl 2019; 78:3613-3632. [DOI: 10.1007/s11042-017-5243-3] [Citation(s) in RCA: 78] [Impact Index Per Article: 15.6] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/20/2017] [Revised: 08/16/2017] [Accepted: 09/20/2017] [Indexed: 08/30/2023]
|
42
|
Soffer S, Ben-Cohen A, Shimon O, Amitai MM, Greenspan H, Klang E. Convolutional Neural Networks for Radiologic Images: A Radiologist's Guide. Radiology 2019; 290:590-606. [PMID: 30694159 DOI: 10.1148/radiol.2018180547] [Citation(s) in RCA: 266] [Impact Index Per Article: 53.2] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
Abstract
Deep learning has rapidly advanced in various fields within the past few years and has recently gained particular attention in the radiology community. This article provides an introduction to deep learning technology and presents the stages that are entailed in the design process of deep learning radiology research. In addition, the article details the results of a survey of the application of deep learning-specifically, the application of convolutional neural networks-to radiologic imaging that was focused on the following five major system organs: chest, breast, brain, musculoskeletal system, and abdomen and pelvis. The survey of the studies is followed by a discussion about current challenges and future trends and their potential implications for radiology. This article may be used as a guide for radiologists planning research in the field of radiologic image analysis using convolutional neural networks.
Affiliation(s)
- Shelly Soffer, Avi Ben-Cohen, Orit Shimon, Michal Marianne Amitai, Hayit Greenspan, Eyal Klang
- Department of Diagnostic Imaging, Sheba Medical Center, Emek HaEla St 1, Ramat Gan, Israel (S.S., M.M.A., E.K.); Faculty of Engineering, Department of Biomedical Engineering, Medical Image Processing Laboratory, Tel Aviv University, Tel Aviv, Israel (A.B., H.G.); Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel (S.S., O.S.)
|
43
|
Sahiner B, Pezeshk A, Hadjiiski LM, Wang X, Drukker K, Cha KH, Summers RM, Giger ML. Deep learning in medical imaging and radiation therapy. Med Phys 2018; 46:e1-e36. [PMID: 30367497 DOI: 10.1002/mp.13264] [Citation(s) in RCA: 372] [Impact Index Per Article: 62.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2018] [Revised: 09/18/2018] [Accepted: 10/09/2018] [Indexed: 12/15/2022] Open
Abstract
The goals of this review paper on deep learning (DL) in medical imaging and radiation therapy are to (a) summarize what has been achieved to date; (b) identify common and unique challenges, and strategies that researchers have taken to address these challenges; and (c) identify some of the promising avenues for the future both in terms of applications as well as technical innovations. We introduce the general principles of DL and convolutional neural networks, survey five major areas of application of DL in medical imaging and radiation therapy, identify common themes, discuss methods for dataset expansion, and conclude by summarizing lessons learned, remaining challenges, and future directions.
Affiliation(s)
- Berkman Sahiner
- DIDSR/OSEL/CDRH, U.S. Food and Drug Administration, Silver Spring, MD, 20993, USA
- Aria Pezeshk
- DIDSR/OSEL/CDRH, U.S. Food and Drug Administration, Silver Spring, MD, 20993, USA
- Xiaosong Wang
- Imaging Biomarkers and Computer-aided Diagnosis Lab, Radiology and Imaging Sciences, NIH Clinical Center, Bethesda, MD, 20892-1182, USA
- Karen Drukker
- Department of Radiology, University of Chicago, Chicago, IL, 60637, USA
- Kenny H Cha
- DIDSR/OSEL/CDRH, U.S. Food and Drug Administration, Silver Spring, MD, 20993, USA
- Ronald M Summers
- Imaging Biomarkers and Computer-aided Diagnosis Lab, Radiology and Imaging Sciences, NIH Clinical Center, Bethesda, MD, 20892-1182, USA
- Maryellen L Giger
- Department of Radiology, University of Chicago, Chicago, IL, 60637, USA
|
44
|
Deep-learning Classifier With an Ultrawide-field Scanning Laser Ophthalmoscope Detects Glaucoma Visual Field Severity. J Glaucoma 2018; 27:647-652. [DOI: 10.1097/ijg.0000000000000988] [Citation(s) in RCA: 35] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
|
45
|
Meyer P, Noblet V, Mazzara C, Lallement A. Survey on deep learning for radiotherapy. Comput Biol Med 2018; 98:126-146. [PMID: 29787940 DOI: 10.1016/j.compbiomed.2018.05.018] [Citation(s) in RCA: 149] [Impact Index Per Article: 24.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2018] [Revised: 05/15/2018] [Accepted: 05/15/2018] [Indexed: 12/17/2022]
Abstract
More than 50% of cancer patients are treated with radiotherapy, either exclusively or in combination with other methods. The planning and delivery of radiotherapy treatment is a complex process, but can now be greatly facilitated by artificial intelligence technology. Deep learning is the fastest-growing field in artificial intelligence and has been successfully used in recent years in many domains, including medicine. In this article, we first explain the concept of deep learning, addressing it in the broader context of machine learning. The most common network architectures are presented, with a more specific focus on convolutional neural networks. We then present a review of the published works on deep learning methods that can be applied to radiotherapy, which are classified into seven categories related to the patient workflow, and can provide some insights of potential future applications. We have attempted to make this paper accessible to both radiotherapy and deep learning communities, and hope that it will inspire new collaborations between these two communities to develop dedicated radiotherapy applications.
Affiliation(s)
- Philippe Meyer
- Department of Medical Physics, Paul Strauss Center, Strasbourg, France
|
46
|
Burt JR, Torosdagli N, Khosravan N, RaviPrakash H, Mortazi A, Tissavirasingham F, Hussein S, Bagci U. Deep learning beyond cats and dogs: recent advances in diagnosing breast cancer with deep neural networks. Br J Radiol 2018; 91:20170545. [PMID: 29565644 DOI: 10.1259/bjr.20170545] [Citation(s) in RCA: 60] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022] Open
Abstract
Deep learning has demonstrated tremendous revolutionary changes in the computing industry and its effects in radiology and imaging sciences have begun to dramatically change screening paradigms. Specifically, these advances have influenced the development of computer-aided detection and diagnosis (CAD) systems. These technologies have long been thought of as "second-opinion" tools for radiologists and clinicians. However, with significant improvements in deep neural networks, the diagnostic capabilities of learning algorithms are approaching levels of human expertise (radiologists, clinicians etc.), shifting the CAD paradigm from a "second opinion" tool to a more collaborative utility. This paper reviews recently developed CAD systems based on deep learning technologies for breast cancer diagnosis, explains their superiorities with respect to previously established systems, defines the methodologies behind the improved achievements including algorithmic developments, and describes remaining challenges in breast cancer screening and diagnosis. We also discuss possible future directions for new CAD models that continue to change as artificial intelligence algorithms evolve.
Affiliation(s)
- Jeremy R Burt
- Department of Radiology, Florida Hospital, Orlando, FL, USA; Department of Computer Science, Center for Research in Computer Vision, University of Central Florida (UCF), Orlando, FL, USA
- Neslisah Torosdagli
- Department of Computer Science, Center for Research in Computer Vision, University of Central Florida (UCF), Orlando, FL, USA
- Naji Khosravan
- Department of Computer Science, Center for Research in Computer Vision, University of Central Florida (UCF), Orlando, FL, USA
- Harish RaviPrakash
- Department of Computer Science, Center for Research in Computer Vision, University of Central Florida (UCF), Orlando, FL, USA
- Aliasghar Mortazi
- Department of Computer Science, Center for Research in Computer Vision, University of Central Florida (UCF), Orlando, FL, USA
- Sarfaraz Hussein
- Department of Computer Science, Center for Research in Computer Vision, University of Central Florida (UCF), Orlando, FL, USA
- Ulas Bagci
- Department of Computer Science, Center for Research in Computer Vision, University of Central Florida (UCF), Orlando, FL, USA
|
47
|
Collado-Mesa F, Alvarez E, Arheart K. The Role of Artificial Intelligence in Diagnostic Radiology: A Survey at a Single Radiology Residency Training Program. J Am Coll Radiol 2018; 15:1753-1757. [PMID: 29477289 DOI: 10.1016/j.jacr.2017.12.021] [Citation(s) in RCA: 78] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2017] [Revised: 12/13/2017] [Accepted: 12/15/2017] [Indexed: 12/20/2022]
Abstract
PURPOSE Advances in artificial intelligence applied to diagnostic radiology are predicted to have a major impact on this medical specialty. With the goal of establishing a baseline upon which to build educational activities on this topic, a survey was conducted among trainees and attending radiologists at a single residency program. METHODS An anonymous questionnaire was distributed. Comparisons of categorical data between groups (trainees and attending radiologists) were made using Pearson χ2 analysis or an exact analysis when required. Comparisons were made using the Wilcoxon rank sum test when the data were not normally distributed. An α level of 0.05 was used. RESULTS The overall response rate was 66% (69 of 104). Thirty-six percent of participants (n = 25) reported not having read a scientific medical article on the topic of artificial intelligence during the past 12 months. Twenty-nine percent of respondents (n = 12) reported using artificial intelligence tools during their daily work. Trainees were more likely to express doubts on whether they would have pursued diagnostic radiology as a career had they known of the potential impact artificial intelligence is predicted to have on the specialty (P = .0254) and were also more likely to plan to learn about the topic (P = .0401). CONCLUSIONS Radiologists lack exposure to current scientific medical articles on artificial intelligence. Trainees are concerned by the implications artificial intelligence may have on their jobs and desire to learn about the topic. There is a need to develop educational resources to help radiologists assume an active role in guiding and facilitating the development and implementation of artificial intelligence tools in diagnostic radiology.
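The two tests named in the methods can be reproduced with scipy on made-up survey data; the counts and ratings below are illustrative, not the study's.
```python
# Pearson chi-square on a contingency table and Wilcoxon rank-sum on ordinal data.
from scipy.stats import chi2_contingency, ranksums

# Rows: trainees vs. attendings; columns: "uses AI tools" yes/no (made-up counts).
table = [[8, 25], [12, 24]]
chi2, p_chi, dof, _ = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p_chi:.3f}")

# Ordinal 1-5 agreement ratings for the two groups (made up, non-normal data).
trainees = [2, 3, 2, 4, 1, 3, 2]
attendings = [4, 3, 5, 4, 3, 4]
stat, p_w = ranksums(trainees, attendings)
print(f"Wilcoxon rank-sum z = {stat:.2f}, p = {p_w:.3f}")

alpha = 0.05                              # the significance level used in the study
print("significant" if p_w < alpha else "not significant")
```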
Affiliation(s)
- Fernando Collado-Mesa
- Department of Radiology, University of Miami Miller School of Medicine, Miami, Florida
- Edilberto Alvarez
- Department of Radiology, University of Miami Miller School of Medicine, Miami, Florida
- Kris Arheart
- Department of Public Health Sciences, University of Miami Miller School of Medicine, Miami, Florida
|
48
|
Deep Learning for Medical Image Processing: Overview, Challenges and the Future. Lecture Notes in Computational Vision and Biomechanics 2018. [DOI: 10.1007/978-3-319-65981-7_12] [Citation(s) in RCA: 369] [Impact Index Per Article: 61.5] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
|
49
|
Litjens G, Kooi T, Bejnordi BE, Setio AAA, Ciompi F, Ghafoorian M, van der Laak JAWM, van Ginneken B, Sánchez CI. A survey on deep learning in medical image analysis. Med Image Anal 2017; 42:60-88. [PMID: 28778026 DOI: 10.1016/j.media.2017.07.005] [Citation(s) in RCA: 4291] [Impact Index Per Article: 613.0] [Reference Citation Analysis] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2017] [Revised: 07/24/2017] [Accepted: 07/25/2017] [Indexed: 02/07/2023]
Affiliation(s)
- Geert Litjens
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
- Thijs Kooi
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
- Francesco Ciompi
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
- Mohsen Ghafoorian
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
- Bram van Ginneken
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
- Clara I Sánchez
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
|
50
|
Suzuki K. Overview of deep learning in medical imaging. Radiol Phys Technol 2017; 10:257-273. [PMID: 28689314 DOI: 10.1007/s12194-017-0406-5] [Citation(s) in RCA: 364] [Impact Index Per Article: 52.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2017] [Accepted: 06/29/2017] [Indexed: 02/07/2023]
Abstract
The use of machine learning (ML) has been increasing rapidly in the medical imaging field, including computer-aided diagnosis (CAD), radiomics, and medical image analysis. Recently, an ML area called deep learning emerged in the computer vision field and became very popular in many fields. It started from an event in late 2012, when a deep-learning approach based on a convolutional neural network (CNN) won an overwhelming victory in the best-known worldwide computer vision competition, ImageNet Classification. Since then, researchers in virtually all fields, including medical imaging, have started actively participating in the explosively growing field of deep learning. In this paper, the area of deep learning in medical imaging is overviewed, including (1) what was changed in machine learning before and after the introduction of deep learning, (2) what is the source of the power of deep learning, (3) two major deep-learning models: a massive-training artificial neural network (MTANN) and a convolutional neural network (CNN), (4) similarities and differences between the two models, and (5) their applications to medical imaging. This review shows that ML with feature input (or feature-based ML) was dominant before the introduction of deep learning, and that the major and essential difference between ML before and after deep learning is the learning of image data directly without object segmentation or feature extraction; thus, it is the source of the power of deep learning, although the depth of the model is an important attribute. The class of ML with image input (or image-based ML) including deep learning has a long history, but recently gained popularity due to the use of the new terminology, deep learning. There are two major models in this class of ML in medical imaging, MTANN and CNN, which have similarities as well as several differences. In our experience, MTANNs were substantially more efficient in their development, had a higher performance, and required a lesser number of training cases than did CNNs. "Deep learning", or ML with image input, in medical imaging is an explosively growing, promising field. It is expected that ML with image input will be the mainstream area in the field of medical imaging in the next few decades.
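The contrast between the two model classes can be caricatured in code: a conventional CNN maps a whole image to a class score, whereas an MTANN-style model is trained to emit a pixel-wise likelihood map that is then aggregated into a score. The sketch below approximates the MTANN's sliding-window regression with small convolutions; layer sizes are illustrative only, not from the original MTANN papers.
```python
# MTANN-style patch regression approximated convolutionally; illustrative sizes.
import torch
import torch.nn as nn

# Patch in, continuous likelihood out, applied across the image to give a map
# (a conventional CNN would instead output a single class score per image).
mtann = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=9, padding=4), nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=1), nn.Sigmoid(),
)
image = torch.rand(1, 1, 64, 64)         # stand-in grayscale image
likelihood_map = mtann(image)            # (1, 1, 64, 64) pixel-wise scores
score = likelihood_map.mean()            # aggregate the map into one lesion score
print(score.item())
```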
Affiliation(s)
- Kenji Suzuki
- Medical Imaging Research Center and Department of Electrical and Computer Engineering, Illinois Institute of Technology, 3440 South Dearborn Street, Chicago, IL, 60616, USA; World Research Hub Initiative (WRHI), Tokyo Institute of Technology, Tokyo, Japan
|