1. Wu Y, Wu J, Dou Y, Rubert N, Wang Y, Deng J. A deep learning fusion model with evidence-based confidence level analysis for differentiation of malignant and benign breast tumors using dynamic contrast enhanced MRI. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103319]
2. Li J, Liu J, Wang Y, He Y, Liu K, Raghunathan R, Shen SS, He T, Yu X, Danforth R, Zheng F, Zhao H, Wong STC. Artificial intelligence-augmented, label-free molecular imaging method for tissue identification, cancer diagnosis, and cancer margin detection. Biomed Opt Express 2021; 12:5559-5582. [PMID: 34692201] [PMCID: PMC8515981] [DOI: 10.1364/boe.428738]
Abstract
Label-free, high-resolution molecular and cellular imaging strategies for intraoperative use are much needed but not yet available. To fill this void, we developed an artificial intelligence-augmented molecular vibrational imaging method that integrates label-free, subcellular-resolution coherent anti-Stokes Raman scattering (CARS) imaging with real-time quantitative image analysis via deep learning (artificial intelligence-augmented CARS, or iCARS). The aim of this study was to evaluate the capability of the iCARS system to identify and differentiate the parathyroid gland and recurrent laryngeal nerve (RLN) from surrounding tissues and to detect cancer margins. This goal was successfully met.
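The coupling of imaging with real-time per-region deep-learning analysis described above can be illustrated with a toy patch-classification loop: tile an image into patches and label each one, producing a coarse tissue map. The tiling logic is generic; the classifier stub and its threshold are invented stand-ins for the paper's trained network.

```python
def patches(img, size):
    """Yield (row, col, patch) for non-overlapping size x size tiles."""
    for r in range(0, len(img) - size + 1, size):
        for c in range(0, len(img[0]) - size + 1, size):
            yield r, c, [row[c:c + size] for row in img[r:r + size]]

def classify_patch(patch, thresh=100):
    """Stub classifier: call a patch 'nerve' if its mean intensity is high."""
    flat = [p for row in patch for p in row]
    return "nerve" if sum(flat) / len(flat) > thresh else "background"

# Tiny synthetic 4x4 "image" with two bright and two dark quadrants.
img = [[200, 210, 10, 12],
       [205, 215, 11, 9],
       [10, 12, 198, 202],
       [11, 9, 201, 199]]
tissue_map = {(r, c): classify_patch(p) for r, c, p in patches(img, 2)}
```

A real system would replace `classify_patch` with a trained network and stream CARS frames through the same loop.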
Affiliation(s)
- Jiasong Li
  - Department of Systems Medicine and Bioengineering, Houston Methodist Cancer Center, Weill Cornell Medicine, Houston, TX 77030, USA
  - These authors contributed equally to this work
- Jun Liu
  - Department of Systems Medicine and Bioengineering, Houston Methodist Cancer Center, Weill Cornell Medicine, Houston, TX 77030, USA
  - Department of Breast-thyroid-vascular Surgery, Shanghai General Hospital, Shanghai Jiao Tong University, 201620, Shanghai, China
  - These authors contributed equally to this work
- Ye Wang
  - Department of Systems Medicine and Bioengineering, Houston Methodist Cancer Center, Weill Cornell Medicine, Houston, TX 77030, USA
  - Department of Breast-thyroid-vascular Surgery, Shanghai General Hospital, Shanghai Jiao Tong University, 201620, Shanghai, China
  - These authors contributed equally to this work
- Yunjie He
  - Department of Systems Medicine and Bioengineering, Houston Methodist Cancer Center, Weill Cornell Medicine, Houston, TX 77030, USA
- Kai Liu
  - Department of Systems Medicine and Bioengineering, Houston Methodist Cancer Center, Weill Cornell Medicine, Houston, TX 77030, USA
- Raksha Raghunathan
  - Department of Systems Medicine and Bioengineering, Houston Methodist Cancer Center, Weill Cornell Medicine, Houston, TX 77030, USA
- Steven S. Shen
  - Department of Pathology and Genomic Medicine, Houston Methodist Hospital, Weill Cornell Medicine, Houston, TX 77030, USA
- Tiancheng He
  - Department of Systems Medicine and Bioengineering, Houston Methodist Cancer Center, Weill Cornell Medicine, Houston, TX 77030, USA
- Xiaohui Yu
  - Department of Systems Medicine and Bioengineering, Houston Methodist Cancer Center, Weill Cornell Medicine, Houston, TX 77030, USA
- Rebecca Danforth
  - Department of Systems Medicine and Bioengineering, Houston Methodist Cancer Center, Weill Cornell Medicine, Houston, TX 77030, USA
- Feibi Zheng
  - Department of Surgery, Houston Methodist Hospital, Weill Cornell Medicine, Houston, TX 77030, USA
- Hong Zhao
  - Department of Systems Medicine and Bioengineering, Houston Methodist Cancer Center, Weill Cornell Medicine, Houston, TX 77030, USA
- Stephen T. C. Wong
  - Department of Systems Medicine and Bioengineering, Houston Methodist Cancer Center, Weill Cornell Medicine, Houston, TX 77030, USA
  - Department of Pathology and Genomic Medicine, Houston Methodist Hospital, Weill Cornell Medicine, Houston, TX 77030, USA
  - Department of Radiology, Houston Methodist Hospital, Weill Cornell Medicine, Houston, TX 77030, USA
3. Muhammad K, Khan S, Del Ser J, de Albuquerque VHC. Deep Learning for Multigrade Brain Tumor Classification in Smart Healthcare Systems: A Prospective Survey. IEEE Trans Neural Netw Learn Syst 2021; 32:507-522. [PMID: 32603291] [DOI: 10.1109/tnnls.2020.2995800]
Abstract
Brain tumors are among the most dangerous cancers in people of all ages, and grade recognition is a challenging problem for radiologists in health monitoring and automated diagnosis. Recently, numerous deep learning-based methods have been presented in the literature for brain tumor classification (BTC) to assist radiologists in diagnostic analysis. In this overview, we present an in-depth review of the surveys published so far and of recent deep learning-based methods for BTC. Our survey covers the main steps of deep learning-based BTC methods, including preprocessing, feature extraction, and classification, along with their achievements and limitations. We also investigate state-of-the-art convolutional neural network models for BTC by performing extensive experiments using transfer learning with and without data augmentation. Furthermore, this overview describes the available benchmark datasets used for the evaluation of BTC. Finally, this survey not only reviews the past literature on the topic but also builds on it to outline research directions that should be followed in the future, especially for personalized and smart healthcare.
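The transfer-learning experiments mentioned above compare training with and without data augmentation. As a minimal sketch of the augmentation side (not the survey's actual code), the snippet below generates flipped and rotated copies of a 2D image, multiplying the effective training-set size sixfold:

```python
def hflip(img):
    """Mirror each row (horizontal flip)."""
    return [row[::-1] for row in img]

def vflip(img):
    """Reverse the row order (vertical flip)."""
    return img[::-1]

def rot90(img):
    """Rotate 90 degrees clockwise: transpose of the vertically flipped image."""
    return [list(row) for row in zip(*img[::-1])]

def augment(img):
    """Return the original image plus five simple geometric variants."""
    r90 = rot90(img)
    r180 = rot90(r90)
    r270 = rot90(r180)
    return [img, hflip(img), vflip(img), r90, r180, r270]

img = [[1, 2],
       [3, 4]]
batch = augment(img)  # 6 training images from 1
```

In the MRI setting these operations would be applied to each slice (or volume) before it is fed to the pretrained network being fine-tuned.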
4. Pathak P, Jalal AS, Rai R. Breast Cancer Image Classification: A Review. Curr Med Imaging 2020; 17:720-740. [PMID: 33371857] [DOI: 10.2174/0929867328666201228125208]
Abstract
BACKGROUND Breast cancer represents uncontrolled breast cell growth and is the most commonly diagnosed cancer in women worldwide. Early detection of breast cancer improves the chances of survival and increases treatment options. There are various methods for screening breast cancer, such as mammography, ultrasound, computed tomography and Magnetic Resonance Imaging (MRI). MRI is gaining prominence as an alternative screening tool for early detection and breast cancer diagnosis. Nevertheless, MRI can hardly be examined without the use of a Computer-Aided Diagnosis (CAD) framework, due to the vast amount of data. OBJECTIVE This paper aims to cover the approaches used in CAD systems for the detection of breast cancer. METHODS In this paper, the methods used in CAD systems are categorized into two classes: the conventional approach and the artificial intelligence (AI) approach. RESULTS The conventional approach covers the basic steps of image processing, such as preprocessing, segmentation, feature extraction and classification. The AI approach covers the various convolutional and deep learning networks used for diagnosis. CONCLUSION This review discusses some of the core concepts used in breast cancer imaging and presents a comprehensive review of past efforts to address this problem.
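The conventional pipeline enumerated above (preprocessing, segmentation, feature extraction, classification) can be sketched end to end on a toy grayscale image. All thresholds and features here are invented for illustration and are not taken from any reviewed CAD system:

```python
def preprocess(img, lo=0, hi=255):
    """Clip intensities to [lo, hi] (a stand-in for denoising/normalization)."""
    return [[min(max(p, lo), hi) for p in row] for row in img]

def segment(img, thresh=128):
    """Binary mask of pixels above an intensity threshold."""
    return [[1 if p > thresh else 0 for p in row] for row in img]

def extract_features(img, mask):
    """Area and mean intensity of the segmented region."""
    pix = [img[r][c] for r in range(len(img)) for c in range(len(img[0]))
           if mask[r][c]]
    area = len(pix)
    mean = sum(pix) / area if area else 0.0
    return {"area": area, "mean": mean}

def classify(feats, area_cut=3, mean_cut=150):
    """Rule-based stand-in for a trained classifier."""
    if feats["area"] >= area_cut and feats["mean"] > mean_cut:
        return "suspicious"
    return "benign"

img = [[10, 200, 210],
       [12, 220, 11],
       [9, 205, 10]]
feats = extract_features(img, segment(preprocess(img)))
label = classify(feats)
```

The AI approach replaces the hand-crafted middle stages with a network that learns segmentation and features jointly from data.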
Affiliation(s)
- Pooja Pathak
  - Department of Mathematics, GLA University, Mathura, India
- Anand Singh Jalal
  - Department of Computer Engineering & Applications, GLA University, Mathura, India
- Ritu Rai
  - Department of Computer Engineering & Applications, GLA University, Mathura, India
5. Wang Y, Huang F, Zhang Y, Zhang R, Lei B, Wang T. Breast Cancer Image Classification via Multi-level Dual-network Features and Sparse Multi-Relation Regularized Learning. Annu Int Conf IEEE Eng Med Biol Soc 2019:7023-7026. [PMID: 31947455] [DOI: 10.1109/embc.2019.8857762]
Abstract
Breast cancer is one of the leading causes of cancer death worldwide. Computer-aided diagnosis and detection techniques have recently been developed for the early diagnosis of breast cancer, but diagnostic efficiency remains a challenge. In this paper, we therefore aim to improve breast cancer diagnostic accuracy and reduce the workload of doctors by devising a deep learning framework based on histological images. We develop a multi-level dual-network feature model combined with a sparse multi-relation regularized learning method, which enhances classification performance and robustness. Specifically, first, we preprocess the histological images using scale transformation and color enhancement methods. Second, multi-level features are extracted from the preprocessed images using InceptionV3-ML and ResNet-50 networks. Third, a feature selection method via sparse multi-relation regularization is further developed to boost performance and reduce overfitting. We evaluate the proposed method on the public ICIAR 2018 Challenge dataset of breast cancer histology images. Experimental results show that our method achieves promising performance and outperforms related works.
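The fusion-then-selection idea above (concatenate multi-level features from two networks, then keep only the informative ones) can be sketched as follows. Correlation thresholding here is a crude stand-in for the paper's sparse multi-relation regularization, and all feature values are synthetic:

```python
def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def select_features(feats_a, feats_b, labels, cut=0.5):
    """Fuse per-sample features from two networks, keep informative columns."""
    fused = [a + b for a, b in zip(feats_a, feats_b)]  # concatenation
    cols = list(zip(*fused))                           # column (feature) view
    keep = [i for i, col in enumerate(cols)
            if abs(pearson(list(col), labels)) >= cut]
    return [[row[i] for i in keep] for row in fused], keep

# Two "networks" give 2 features each for 4 samples; labels are 0/1.
a = [[1.0, 0.2], [0.9, 0.3], [0.1, 0.3], [0.2, 0.2]]
b = [[5.0, 0.0], [4.8, 1.0], [1.1, 0.0], [1.0, 1.0]]
labels = [1, 1, 0, 0]
reduced, kept = select_features(a, b, labels)
```

Here the label-correlated columns (0 and 2) survive while the uninformative ones are discarded, mirroring the intent, though not the mathematics, of sparse regularized selection.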
6. Zhou J, Zhang Y, Chang KT, Lee KE, Wang O, Li J, Lin Y, Pan Z, Chang P, Chow D, Wang M, Su MY. Diagnosis of Benign and Malignant Breast Lesions on DCE-MRI by Using Radiomics and Deep Learning With Consideration of Peritumor Tissue. J Magn Reson Imaging 2019; 51:798-809. [PMID: 31675151] [DOI: 10.1002/jmri.26981]
Abstract
BACKGROUND Computer-aided methods have been widely applied to diagnose lesions detected on breast MRI, but fully automatic diagnosis using deep learning is rarely reported. PURPOSE To evaluate the diagnostic accuracy of mass lesions using region of interest (ROI)-based, radiomics, and deep-learning methods, taking peritumor tissues into consideration. STUDY TYPE Retrospective. POPULATION In all, 133 patients with 91 histologically confirmed malignant and 62 benign mass lesions for training (74 patients with 48 malignant and 26 benign lesions for testing). FIELD STRENGTH/SEQUENCE 3T, using the volume imaging for breast assessment (VIBRANT) dynamic contrast-enhanced (DCE) sequence. ASSESSMENT 3D tumor segmentation was performed automatically using the fuzzy c-means algorithm with connected-component labeling. A total of 99 texture and histogram parameters were calculated for each case, and 15 were selected using random forest to build a radiomics model. Deep learning was implemented using ResNet50 and evaluated with 10-fold cross-validation. The tumor alone, the smallest bounding box, and 1.2, 1.5, and 2.0 times enlarged boxes were used as inputs. STATISTICAL TESTS The malignancy probability was calculated using each model, and a threshold of 0.5 was used to make a diagnosis. RESULTS In the training dataset, the diagnostic accuracy was 76% using three ROI-based parameters, 84% using the radiomics model, and 86% using the ROI + radiomics model. In per-slice deep learning, the area under the receiver operating characteristic (ROC) curve was comparable for the tumor alone, the smallest box, and the 1.2 times box (AUC = 0.97-0.99), all significantly higher than for the 1.5 and 2.0 times boxes (AUC = 0.86 and 0.71, respectively). For per-lesion diagnosis, the highest accuracy of 91% was achieved with the smallest bounding box; accuracy decreased to 84% for the tumor alone and the 1.2 times box, and further to 73% for the 1.5 times box and 69% for the 2.0 times box. In the independent testing dataset, the per-lesion diagnostic accuracy was also highest with the smallest bounding box, at 89%. DATA CONCLUSION Deep learning using ResNet50 achieved high diagnostic accuracy. Using the smallest bounding box containing proximal peritumor tissue as input yielded higher accuracy than using the tumor alone or larger boxes. LEVEL OF EVIDENCE 3 Technical Efficacy: Stage 2.
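The bounding-box inputs compared above (tumor alone vs. the smallest box vs. 1.2/1.5/2.0 times enlarged boxes) reduce to a small geometric routine: find the tightest box around a binary tumor mask, then scale it about its center, clipping to the image. Function names and the toy mask below are illustrative, not from the study's code:

```python
def smallest_bbox(mask):
    """Return (row0, row1, col0, col1), inclusive bounds of nonzero pixels."""
    rows = [r for r, row in enumerate(mask) if any(row)]
    cols = [c for c in range(len(mask[0])) if any(row[c] for row in mask)]
    return rows[0], rows[-1], cols[0], cols[-1]

def enlarge_bbox(bbox, factor, shape):
    """Scale a bbox about its center, clipped to image shape (H, W)."""
    r0, r1, c0, c1 = bbox
    h, w = (r1 - r0 + 1) * factor, (c1 - c0 + 1) * factor
    cr, cc = (r0 + r1) / 2, (c0 + c1) / 2
    nr0 = max(0, int(round(cr - h / 2)))
    nr1 = min(shape[0] - 1, int(round(cr + h / 2)))
    nc0 = max(0, int(round(cc - w / 2)))
    nc1 = min(shape[1] - 1, int(round(cc + w / 2)))
    return nr0, nr1, nc0, nc1

# Tiny synthetic mask: a 3x3 "tumor" inside a 10x10 image.
mask = [[1 if 4 <= r <= 6 and 4 <= c <= 6 else 0 for c in range(10)]
        for r in range(10)]
box = smallest_bbox(mask)
big = enlarge_bbox(box, 2.0, (10, 10))
```

Each cropped box (tumor alone, smallest, or enlarged) would then be resized and fed to the classification network, which is how the study varied the amount of peritumor context.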
Affiliation(s)
- Jiejie Zhou
  - Department of Radiology, First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Yang Zhang
  - Department of Radiological Sciences, University of California, Irvine, California, USA
- Kai-Ting Chang
  - Department of Radiological Sciences, University of California, Irvine, California, USA
- Kyoung Eun Lee
  - Department of Radiology, Inje University Seoul Paik Hospital, Inje University, Seoul, Korea
- Ouchen Wang
  - Department of Thyroid and Breast Surgery, First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Jiance Li
  - Department of Radiology, First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Yezhi Lin
  - Information Technology Center, First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Zhifang Pan
  - Information Technology Center, First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Peter Chang
  - Department of Radiological Sciences, University of California, Irvine, California, USA
- Daniel Chow
  - Department of Radiological Sciences, University of California, Irvine, California, USA
- Meihao Wang
  - Department of Radiology, First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Min-Ying Su
  - Department of Radiological Sciences, University of California, Irvine, California, USA
7. Murtaza G, Shuib L, Abdul Wahab AW, Mujtaba G, Nweke HF, Al-garadi MA, Zulfiqar F, Raza G, Azmi NA. Deep learning-based breast cancer classification through medical imaging modalities: state of the art and research challenges. Artif Intell Rev 2019. [DOI: 10.1007/s10462-019-09716-5]
8. Truhn D, Schrading S, Haarburger C, Schneider H, Merhof D, Kuhl C. Radiomic versus Convolutional Neural Networks Analysis for Classification of Contrast-enhancing Lesions at Multiparametric Breast MRI. Radiology 2018; 290:290-297. [PMID: 30422086] [DOI: 10.1148/radiol.2018181352]
Abstract
Purpose To compare the diagnostic performance of radiomic analysis (RA) and a convolutional neural network (CNN) with that of radiologists for classification of contrast agent-enhancing lesions as benign or malignant at multiparametric breast MRI. Materials and Methods Between August 2011 and August 2015, 447 patients with 1294 enhancing lesions (787 malignant, 507 benign; median size, 15 mm ± 20) were evaluated. Lesions were manually segmented by one breast radiologist. RA was performed by using L1 regularization and principal component analysis. The CNN used a deep residual neural network with 34 layers. All algorithms were also retrained on half the number of lesions (n = 647). Machine interpretations were compared with prospective interpretations by three breast radiologists. The standard of reference was histologic analysis or follow-up. Areas under the receiver operating characteristic curve (AUCs) were used to compare diagnostic performance. Results The CNN trained on the full cohort was superior to that trained on the half-size cohort (AUC, 0.88 vs 0.83, respectively; P = .01), but there was no difference for RA with L1 regularization (AUC, 0.81 vs 0.80, respectively; P = .76) or RA with principal component analysis (AUC, 0.78 vs 0.78, respectively; P = .93). Using the full cohort, CNN performance (AUC, 0.88; 95% confidence interval: 0.86, 0.89) was better than that of RA with L1 regularization (AUC, 0.81; 95% confidence interval: 0.79, 0.83; P < .001) and RA with principal component analysis (AUC, 0.78; 95% confidence interval: 0.76, 0.80; P < .001). However, the CNN was inferior to breast radiologist interpretation (AUC, 0.98; 95% confidence interval: 0.96, 0.99; P < .001). Conclusion A convolutional neural network was superior to radiomic analysis for classification of enhancing lesions as benign or malignant at multiparametric breast MRI. Both approaches were inferior to radiologists' performance; however, more training data will further improve the performance of the convolutional neural network, but not that of the radiomics algorithms.
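The AUC comparisons above can be grounded with a minimal implementation: the AUC equals the Mann-Whitney probability that a randomly chosen malignant case scores higher than a randomly chosen benign one, with ties counted as half. The scores below are synthetic, not from the study:

```python
def auc(pos_scores, neg_scores):
    """AUC via the Mann-Whitney U statistic: P(pos > neg), ties count 0.5."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

malignant = [0.9, 0.8, 0.7, 0.6]  # model scores for malignant lesions
benign = [0.5, 0.4, 0.8, 0.2]     # model scores for benign lesions
result = auc(malignant, benign)
```

This O(n*m) pairwise form is fine for illustration; production code would use a rank-based formulation for large cohorts such as the 1294 lesions here.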
Affiliation(s)
- Daniel Truhn, Simone Schrading, Christoph Haarburger, Hannah Schneider, Dorit Merhof, Christiane Kuhl
  - From the Departments of Diagnostic and Interventional Radiology (D.T., S.S., H.S., C.K.) and Institute of Imaging and Computer Vision (C.H., D.M.), RWTH Aachen University, Pauwelsstr 30, 52074 Aachen, Germany