1. Sahu A, Das PK, Meher S. Recent advancements in machine learning and deep learning-based breast cancer detection using mammograms. Phys Med 2023;114:103138. PMID: 37914431. DOI: 10.1016/j.ejmp.2023.103138.
Abstract
OBJECTIVE Mammogram-based automatic breast cancer detection plays a primary role in accurate cancer diagnosis and treatment planning to save valuable lives. Mammography is a basic yet efficient test for breast cancer screening. Very few comprehensive surveys have briefly analyzed methods for detecting breast cancer with mammograms. In this article, our objective is to give an overview of recent advancements in machine learning (ML) and deep learning (DL)-based breast cancer detection systems. METHODS We give a structured framework to categorize mammogram-based breast cancer detection techniques. Several publicly available mammogram databases and different performance measures are also mentioned. RESULTS After deliberate investigation, we find that most works classify breast tumors either as normal-abnormal or malignant-benign rather than into three classes. Furthermore, DL-based features are more significant than hand-crafted features. Transfer learning is preferred over other approaches, as it yields better performance on small datasets than classical DL techniques. SIGNIFICANCE AND CONCLUSION In this article, we have attempted to present recent advancements in artificial intelligence (AI)-based breast cancer detection systems. Furthermore, a number of challenging issues and possible research directions are mentioned, which will help researchers pursue further work in this field.
Affiliation(s)
- Adyasha Sahu
- Department of Electronics and Communication Engineering, National Institute of Technology, Rourkela, Odisha, 769008, India.
- Pradeep Kumar Das
- School of Electronics Engineering (SENSE), VIT Vellore, Tamil Nadu, 632014, India.
- Sukadev Meher
- Department of Electronics and Communication Engineering, National Institute of Technology, Rourkela, Odisha, 769008, India.
2. Liu Y, Tong Y, Wan Y, Xia Z, Yao G, Shang X, Huang Y, Chen L, Chen DQ, Liu B. Identification and diagnosis of mammographic malignant architectural distortion using a deep learning based mask regional convolutional neural network. Front Oncol 2023;13:1119743. PMID: 37035200. PMCID: PMC10075355. DOI: 10.3389/fonc.2023.1119743.
Abstract
Background Architectural distortion (AD) is a common imaging manifestation of breast cancer, but is also seen in benign lesions. This study aimed to construct deep learning models using a mask regional convolutional neural network (Mask-RCNN) for AD identification in full-field digital mammography (FFDM) and evaluate the performance of the models for malignant AD diagnosis. Methods This retrospective diagnostic study was conducted at the Second Affiliated Hospital of Guangzhou University of Chinese Medicine between January 2011 and December 2020. Patients with AD in the breast in FFDM were included. Machine learning models for AD identification were developed using the Mask-RCNN method. Receiver operating characteristic (ROC) curves, their areas under the curve (AUCs), and recall/sensitivity were used to evaluate the models. Models with the highest AUCs were selected for malignant AD diagnosis. Results A total of 349 AD patients (190 with malignant AD) were enrolled. EfficientNetV2, EfficientNetV1, ResNext, and ResNet were developed for AD identification, with AUCs of 0.89, 0.87, 0.81, and 0.79, respectively. The AUC of EfficientNetV2 was significantly higher than that of EfficientNetV1 (0.89 vs. 0.78, P=0.001) for malignant AD diagnosis, and the recall/sensitivity of the EfficientNetV2 model was 0.93. Conclusion The Mask-RCNN-based EfficientNetV2 model has good diagnostic value for malignant AD.
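AUC comparisons like those above reduce to a simple rank statistic: the AUC equals the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A minimal numpy sketch, using made-up scores rather than the study's data:

```python
import numpy as np

def auc_from_scores(scores, labels):
    """Area under the ROC curve via the Mann-Whitney statistic:
    the fraction of (positive, negative) pairs where the positive
    scores higher, with ties counted as 1/2."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # Compare every positive score against every negative score.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical malignant (1) vs. benign (0) model scores.
scores = [0.9, 0.8, 0.6, 0.55, 0.4, 0.2]
labels = [1, 1, 0, 1, 0, 0]
print(round(auc_from_scores(scores, labels), 3))  # → 0.889
```

A perfect separation of the two classes gives AUC = 1.0; chance-level scores give about 0.5.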
Affiliation(s)
- Yuanyuan Liu
- Department of Radiology, The Second Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, China
- Yunfei Tong
- Department of Engineering, Shanghai Yanghe Huajian Artificial Intelligence Technology Co., Ltd, Shanghai, China
- Yun Wan
- Department of Radiology, The Second Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, China
- Ziqiang Xia
- Department of Radiology, The Second Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, China
- Guoyan Yao
- Department of Radiology, The Second Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, China
- Xiaojing Shang
- Department of Radiology, The Second Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, China
- Yan Huang
- Department of Radiology, The Second Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, China
- Lijun Chen
- Department of Radiology, The Second Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, China
- Daniel Q. Chen
- Artificial Intelligence (AI) Research Lab, Boston Meditech Group, Burlington, MA, United States
- *Correspondence: Bo Liu; Daniel Q. Chen
- Bo Liu
- Department of Radiology, The Second Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, China
- *Correspondence: Bo Liu; Daniel Q. Chen
3. Basurto-Hurtado JA, Cruz-Albarran IA, Toledano-Ayala M, Ibarra-Manzano MA, Morales-Hernandez LA, Perez-Ramirez CA. Diagnostic Strategies for Breast Cancer Detection: From Image Generation to Classification Strategies Using Artificial Intelligence Algorithms. Cancers (Basel) 2022;14:3442. PMID: 35884503. PMCID: PMC9322973. DOI: 10.3390/cancers14143442.
Abstract
Breast cancer is one of the main causes of death for women worldwide, as 16% of the malignant lesions diagnosed worldwide are its consequence. It is therefore of paramount importance to diagnose these lesions at the earliest stage possible, in order to have the highest chances of survival. While there are several works that present selected topics in this area, none of them presents a complete panorama, that is, from image generation to interpretation. This work presents a comprehensive state-of-the-art review of the image generation and processing techniques used to detect breast cancer, in which potential candidates for image generation and processing are presented and discussed. Novel methodologies should consider the adroit integration of artificial-intelligence concepts and categorical data to generate modern alternatives that can achieve the accuracy, precision, and reliability expected to mitigate misclassifications.
Affiliation(s)
- Jesus A. Basurto-Hurtado
- C.A. Mecatrónica, Facultad de Ingeniería, Campus San Juan del Río, Universidad Autónoma de Querétaro, Rio Moctezuma 249, San Cayetano, San Juan del Rio 76807, Mexico
- Laboratorio de Dispositivos Médicos, Facultad de Ingeniería, Universidad Autónoma de Querétaro, Carretera a Chichimequillas S/N, Ejido Bolaños, Santiago de Querétaro 76140, Mexico
- Irving A. Cruz-Albarran
- C.A. Mecatrónica, Facultad de Ingeniería, Campus San Juan del Río, Universidad Autónoma de Querétaro, Rio Moctezuma 249, San Cayetano, San Juan del Rio 76807, Mexico
- Laboratorio de Dispositivos Médicos, Facultad de Ingeniería, Universidad Autónoma de Querétaro, Carretera a Chichimequillas S/N, Ejido Bolaños, Santiago de Querétaro 76140, Mexico
- Manuel Toledano-Ayala
- División de Investigación y Posgrado de la Facultad de Ingeniería (DIPFI), Universidad Autónoma de Querétaro, Cerro de las Campanas S/N Las Campanas, Santiago de Querétaro 76010, Mexico
- Mario Alberto Ibarra-Manzano
- Laboratorio de Procesamiento Digital de Señales, Departamento de Ingeniería Electrónica, Division de Ingenierias Campus Irapuato-Salamanca (DICIS), Universidad de Guanajuato, Carretera Salamanca-Valle de Santiago KM. 3.5 + 1.8 Km., Salamanca 36885, Mexico
- Luis A. Morales-Hernandez
- C.A. Mecatrónica, Facultad de Ingeniería, Campus San Juan del Río, Universidad Autónoma de Querétaro, Rio Moctezuma 249, San Cayetano, San Juan del Rio 76807, Mexico
- Carlos A. Perez-Ramirez
- Laboratorio de Dispositivos Médicos, Facultad de Ingeniería, Universidad Autónoma de Querétaro, Carretera a Chichimequillas S/N, Ejido Bolaños, Santiago de Querétaro 76140, Mexico
4. Chen X, Zhang K, Abdoli N, Gilley PW, Wang X, Liu H, Zheng B, Qiu Y. Transformers Improve Breast Cancer Diagnosis from Unregistered Multi-View Mammograms. Diagnostics (Basel) 2022;12:1549. PMID: 35885455. PMCID: PMC9320758. DOI: 10.3390/diagnostics12071549.
Abstract
Deep convolutional neural networks (CNNs) have been widely used in various medical imaging tasks. However, due to the intrinsic locality of convolution operations, CNNs generally cannot model long-range dependencies well, which are important for accurately identifying or mapping corresponding breast lesion features computed from multiple unregistered mammograms. This motivated us to leverage the architecture of multi-view Vision Transformers to capture long-range relationships among the multiple mammograms acquired from the same patient in one examination. For this purpose, we employed local transformer blocks to separately learn patch relationships within four mammograms acquired from two views (CC/MLO) of the two sides (right/left breasts). The outputs from the different views and sides were concatenated and fed into global transformer blocks to jointly learn patch relationships between the four images representing the two views of the left and right breasts. To evaluate the proposed model, we retrospectively assembled a dataset involving 949 sets of mammograms, which included 470 malignant cases and 479 normal or benign cases. We trained and evaluated the model using a five-fold cross-validation method. Without any arduous preprocessing steps (e.g., optimal window cropping, chest wall or pectoral muscle removal, two-view image registration, etc.), our four-image (two-view-two-side) transformer-based model achieves case classification performance with an area under the ROC curve (AUC = 0.818 ± 0.039), which significantly outperforms the AUC = 0.784 ± 0.016 achieved by state-of-the-art multi-view CNNs (p = 0.009). It also outperforms two one-view-two-side models that achieve AUCs of 0.724 ± 0.013 (CC view) and 0.769 ± 0.036 (MLO view), respectively. The study demonstrates the potential of using transformers to develop high-performing computer-aided diagnosis schemes that combine four mammograms.
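The long-range modeling the abstract attributes to transformer blocks comes from scaled dot-product attention, in which every patch attends to every other patch regardless of spatial distance. A minimal numpy sketch (toy shapes, not the authors' architecture):

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    """Scaled dot-product attention: each query patch is compared with
    every key patch, so dependencies are modeled at arbitrary range,
    unlike a convolution's fixed local receptive field."""
    d = q.shape[-1]
    weights = softmax(q @ k.T / np.sqrt(d))
    return weights @ v, weights

rng = np.random.default_rng(0)
# 16 patch embeddings of dimension 8 -- stand-ins for mammogram patches.
patches = rng.normal(size=(16, 8))
out, w = attention(patches, patches, patches)
print(out.shape, np.allclose(w.sum(axis=1), 1.0))
```

In a multi-view setup like the one described, "local" blocks would run this within one image's patches and "global" blocks over the concatenated patches of all four views.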
Affiliation(s)
- Xuxin Chen
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
- Correspondence: X.C.; Y.Q.
- Ke Zhang
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
- Stephenson School of Biomedical Engineering, University of Oklahoma, Norman, OK 73019, USA
- Neman Abdoli
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
- Patrik W. Gilley
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
- Hong Liu
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
- Bin Zheng
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
- Yuchen Qiu
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
- Correspondence: X.C.; Y.Q.
5. A Comparison of Computer-Aided Diagnosis Schemes Optimized Using Radiomics and Deep Transfer Learning Methods. Bioengineering (Basel) 2022;9:256. PMID: 35735499. PMCID: PMC9219621. DOI: 10.3390/bioengineering9060256.
Abstract
Objective: Radiomics and deep transfer learning are two popular technologies used to develop computer-aided detection and diagnosis (CAD) schemes for medical images. This study aims to investigate and compare the advantages and potential limitations of applying these two technologies to develop CAD schemes. Methods: A relatively large and diverse retrospective dataset including 3000 digital mammograms was assembled, in which 1496 images depicted malignant lesions and 1504 images depicted benign lesions. Two CAD schemes were developed to classify breast lesions. The first scheme was developed in four steps: applying an adaptive multi-layer topographic region growing algorithm to segment lesions, computing initial radiomics features, applying a principal component algorithm to generate an optimal feature vector, and building a support vector machine classifier. The second CAD scheme was built on a pre-trained residual net architecture (ResNet50) as a transfer learning model to classify breast lesions. Both CAD schemes were trained and tested using a 10-fold cross-validation method. Several score fusion methods were also investigated to classify breast lesions. CAD performance was evaluated and compared by the areas under the ROC curve (AUC). Results: The ResNet50 model-based CAD scheme yielded AUC = 0.85 ± 0.02, which was significantly higher than the radiomics feature-based CAD scheme with AUC = 0.77 ± 0.02 (p < 0.01). Additionally, the fusion of classification scores generated by the two CAD schemes did not further improve classification performance. Conclusion: This study demonstrates that using deep transfer learning is more efficient for developing CAD schemes and enables higher lesion classification performance than CAD schemes developed using radiomics-based technology.
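The first scheme's pipeline (radiomics features → principal component reduction → SVM, evaluated by 10-fold cross-validated AUC) can be sketched with scikit-learn on synthetic data; the feature matrix and dimensions below are illustrative stand-ins, not the study's:

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for a radiomics feature matrix (300 lesions, 60 features).
X, y = make_classification(n_samples=300, n_features=60, n_informative=10,
                           random_state=0)

# PCA compresses the large feature pool into a compact feature vector;
# a linear SVM then classifies lesions, echoing the first CAD scheme.
model = make_pipeline(StandardScaler(), PCA(n_components=10),
                      SVC(kernel="linear"))
aucs = cross_val_score(model, X, y, cv=10, scoring="roc_auc")
print(f"mean AUC over 10 folds: {aucs.mean():.2f}")
```

The transfer-learning counterpart would replace the handcrafted feature steps with features (or end-to-end fine-tuning) from a pre-trained network such as ResNet50.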
6. Jones MA, Faiz R, Qiu Y, Zheng B. Improving mammography lesion classification by optimal fusion of handcrafted and deep transfer learning features. Phys Med Biol 2022;67. PMID: 35130517. PMCID: PMC8935657. DOI: 10.1088/1361-6560/ac5297.
Abstract
Objective. Handcrafted radiomics features or deep learning model-generated automated features are commonly used to develop computer-aided diagnosis (CAD) schemes for medical images. The objective of this study is to test the hypothesis that handcrafted and automated features contain complementary classification information and that fusion of these two types of features can improve CAD performance. Approach. We retrospectively assembled a dataset involving 1535 lesions (740 malignant and 795 benign). Regions of interest (ROI) surrounding suspicious lesions are extracted, and two types of features are computed from each ROI. The first includes 40 radiomic features and the second includes automated features computed from a VGG16 network using a transfer learning method. A single-channel ROI image is converted to a three-channel pseudo-ROI image by stacking the original image, a bilateral-filtered image, and a histogram-equalized image. Two VGG16 models, one using pseudo-ROIs and one using three stacked original ROIs without pre-processing, are used to extract automated features. Five linear support vector machines (SVM) are built using the optimally selected feature vectors from the handcrafted features, the two sets of VGG16 model-generated automated features, and the fusion of the handcrafted features with each set of automated features, respectively. Main Results. Using 10-fold cross-validation, the fusion SVM using pseudo-ROIs yields the highest lesion classification performance, with an area under the ROC curve (AUC = 0.756 ± 0.042) significantly higher than those yielded by the other SVMs trained using handcrafted or automated features only (p < 0.05). Significance. This study demonstrates that both handcrafted and automated features contain useful information to classify breast lesions, and that fusion of these two types of features can further increase CAD performance.
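The fusion step described here is, at its core, column-wise concatenation of the two feature vectors before classifier training. A hedged sketch with synthetic stand-ins for the handcrafted and VGG16-derived features (dimensions and signal strengths are invented for illustration):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n = 400
y = rng.integers(0, 2, size=n)

# Synthetic stand-ins: 40 "handcrafted" radiomic features and 128
# "automated" deep features, each carrying partial class signal.
handcrafted = rng.normal(size=(n, 40)) + 0.4 * y[:, None]
automated = rng.normal(size=(n, 128)) + 0.2 * y[:, None]

def mean_auc(X):
    clf = make_pipeline(StandardScaler(), LinearSVC())
    return cross_val_score(clf, X, y, cv=10, scoring="roc_auc").mean()

# Fusion: concatenate the two feature blocks, then train one linear SVM.
fused = np.hstack([handcrafted, automated])
auc_h, auc_a, auc_f = mean_auc(handcrafted), mean_auc(automated), mean_auc(fused)
print(f"handcrafted {auc_h:.2f}  automated {auc_a:.2f}  fused {auc_f:.2f}")
```

When the two blocks carry complementary information, the fused classifier tends to match or exceed either block alone, which is the hypothesis the paper tests.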
Affiliation(s)
- Meredith A. Jones
- School of Biomedical Engineering, University of Oklahoma, Norman, OK 73019, USA
- Rowzat Faiz
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
- Yuchen Qiu
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
- Bin Zheng
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
7. Danala G, Desai M, Ray B, Heidari M, Maryada SKR, Prodan CI, Zheng B. Applying Quantitative Radiographic Image Markers to Predict Clinical Complications After Aneurysmal Subarachnoid Hemorrhage: A Pilot Study. Ann Biomed Eng 2022;50:413-425. PMID: 35112157. PMCID: PMC8918043. DOI: 10.1007/s10439-022-02926-z.
Abstract
Accurately predicting the clinical outcome of aneurysmal subarachnoid hemorrhage (aSAH) patients is difficult. The purpose of this study was to develop and test a new fully automated computer-aided detection (CAD) scheme of brain computed tomography (CT) images to predict the prognosis of aSAH patients. A retrospective dataset of 59 aSAH patients was assembled. Each patient had two sets of CT images, acquired at admission and prior to discharge. The CAD scheme was applied to segment intracranial brain regions into four subregions: cerebrospinal fluid (CSF), white matter (WM), gray matter (GM), and leaked extraparenchymal blood (EPB). The scheme then detects sulci and computes nine image features: five volumes (of the segmented sulci, EPB, CSF, WM, and GM) and four volumetric ratios relative to the sulci. Subsequently, using a leave-one-case-out cross-validation method embedded with a principal component analysis (PCA) algorithm to generate optimal feature vectors, 16 support vector machine (SVM) models were built using CT images acquired either at admission or prior to discharge to predict each of eight clinically relevant parameters commonly used to assess patient prognosis. Finally, a receiver operating characteristic (ROC) method was used to evaluate SVM model performance. The areas under the ROC curves of the 16 SVM models range from 0.62 ± 0.07 to 0.86 ± 0.07. In general, SVM models trained using CT images acquired at admission yielded higher accuracy in predicting short-term clinical outcomes, while SVM models trained using CT images acquired prior to discharge demonstrated higher accuracy in predicting long-term clinical outcomes. This study demonstrates the feasibility of predicting the prognosis of aSAH patients using new quantitative image markers generated by SVM models.
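With only 59 cases, the study uses leave-one-case-out cross-validation: each case is held out once while the PCA + SVM model is refit on the rest, so every prediction is made on an unseen case. A scikit-learn sketch on synthetic data (the 59 × 9 feature matrix below is a stand-in, not the study's):

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Small synthetic cohort echoing the 59-patient, 9-feature setting.
X, y = make_classification(n_samples=59, n_features=9, n_informative=5,
                           random_state=0)

# For each fold, 58 cases train the scaler + PCA + SVM and the remaining
# case receives an out-of-sample decision score.
model = make_pipeline(StandardScaler(), PCA(n_components=4), SVC())
scores = cross_val_predict(model, X, y, cv=LeaveOneOut(),
                           method="decision_function")
print(scores.shape)  # one held-out score per case
```

The 59 held-out scores can then be fed to an ROC analysis exactly as in the paper.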
Affiliation(s)
- Gopichandh Danala
- School of Electrical and Computer Engineering, University of Oklahoma, 101 David L Boren Blvd, Norman, OK, 73019, USA
- Masoom Desai
- Department of Neurology, University of Oklahoma Medical Center, Oklahoma City, OK, USA
- Bappaditya Ray
- Division of Neurocritical Care, Department of Neurology and Neurological Surgery, UT Southwestern Medical Center, Dallas, TX, USA
- Morteza Heidari
- School of Electrical and Computer Engineering, University of Oklahoma, 101 David L Boren Blvd, Norman, OK, 73019, USA
- Calin I Prodan
- Department of Neurology, University of Oklahoma Medical Center, Oklahoma City, OK, USA
- Bin Zheng
- School of Electrical and Computer Engineering, University of Oklahoma, 101 David L Boren Blvd, Norman, OK, 73019, USA
8. Rehman KU, Li J, Pei Y, Yasin A, Ali S, Saeed Y. Architectural Distortion-Based Digital Mammograms Classification Using Depth Wise Convolutional Neural Network. Biology (Basel) 2021;11:15. PMID: 35053013. PMCID: PMC8773233. DOI: 10.3390/biology11010015.
Abstract
Architectural distortion (AD) is the third most common suspicious appearance on a mammogram representing abnormal regions. AD detection from mammograms is challenging due to its subtle and varying asymmetry on breast masses and its small size. Automatic detection of abnormal AD regions in mammograms using computer algorithms at initial stages could help radiologists and doctors. Detection of star-shaped AD ROIs, noise removal, and object localization all affect classification performance and can reduce accuracy; computer vision-based techniques automatically remove noise and detect the location of objects across varying patterns. The current study investigated this gap by detecting architectural distortion ROIs (regions of interest) in mammograms using computer vision techniques. We propose an automated computer-aided diagnostic system based on architectural distortion, using computer vision and deep learning to predict breast cancer from digital mammograms. The proposed mammogram classification framework comprises four steps: image preprocessing, augmentation, and pixel-wise segmentation; detection of AD ROIs; training of deep learning and machine learning networks; and classification of AD ROIs into malignant and benign classes. The proposed method has been evaluated on three databases (PINUM, CBIS-DDSM, and DDSM mammogram images) using computer vision and a depth-wise 2D V-net 64 convolutional neural network, achieving accuracies of 0.95, 0.97, and 0.98, respectively. Experimental results reveal that our proposed method outperforms ShuffleNet, MobileNet, SVM, K-NN, RF, and previous studies.
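The "depth-wise" convolution named in the title differs from a standard convolution in that each input channel is filtered by its own kernel, with no summation across channels. A minimal numpy sketch of the operation itself (toy sizes; not the paper's V-net architecture):

```python
import numpy as np

def depthwise_conv2d(x, kernels):
    """Depthwise convolution: each input channel is convolved with its
    own kernel and channels are never mixed, unlike a standard
    convolution, which sums over all input channels per output channel."""
    c, h, w = x.shape
    kh, kw = kernels.shape[1:]
    out = np.zeros((c, h - kh + 1, w - kw + 1))
    for ch in range(c):               # one kernel per channel
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[ch, i, j] = np.sum(x[ch, i:i+kh, j:j+kw] * kernels[ch])
    return out

rng = np.random.default_rng(0)
x = rng.normal(size=(3, 8, 8))        # 3-channel toy "image"
kernels = rng.normal(size=(3, 3, 3))  # one 3x3 kernel per channel
print(depthwise_conv2d(x, kernels).shape)  # → (3, 6, 6)
```

Because it needs only one kernel per channel instead of one per (input, output) channel pair, the depthwise variant cuts parameters and compute, which is why it appears in lightweight networks such as MobileNet.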
Affiliation(s)
- Khalil ur Rehman
- The School of Software Engineering, Beijing University of Technology, Beijing 100024, China
- Jianqiang Li
- The School of Software Engineering, Beijing University of Technology, Beijing 100024, China
- Beijing Engineering Research Center for IoT Software and Systems, Beijing 100124, China
- Yan Pei
- Computer Science Division, University of Aizu, Aizuwakamatsu 965-8580, Fukushima, Japan
- Anaa Yasin
- The School of Software Engineering, Beijing University of Technology, Beijing 100024, China
- Saqib Ali
- The School of Software Engineering, Beijing University of Technology, Beijing 100024, China
- Yousaf Saeed
- The School of Software Engineering, Beijing University of Technology, Beijing 100024, China
9. Zargari Khuzani A, Heidari M, Shariati SA. COVID-Classifier: an automated machine learning model to assist in the diagnosis of COVID-19 infection in chest X-ray images. Sci Rep 2021;11:9887. PMID: 33972584. PMCID: PMC8110795. DOI: 10.1038/s41598-021-88807-2.
Abstract
Chest X-ray (CXR) radiography can be used as a first-line triage process for non-COVID-19 patients with pneumonia. However, the similarity between features of CXR images of COVID-19 and of pneumonia caused by other infections makes differential diagnosis by radiologists challenging. We hypothesized that machine learning-based classifiers can reliably distinguish the CXR images of COVID-19 patients from other forms of pneumonia. We used a dimensionality reduction method to generate a set of optimal features of CXR images to build an efficient machine learning classifier that can distinguish COVID-19 cases from non-COVID-19 cases with high accuracy and sensitivity. By using global features of the whole CXR images, we successfully implemented our classifier using a relatively small dataset of CXR images. We propose that our COVID-Classifier can be used in conjunction with other tests for optimal allocation of hospital resources by rapid triage of non-COVID-19 cases.
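The phrase "global features of the whole CXR image" refers to fixed-length descriptors computed over the entire image rather than localized regions. A hedged numpy sketch of what such a vector might contain; the specific statistics below are illustrative stand-ins, not the paper's actual feature set:

```python
import numpy as np

def global_features(img):
    """A small global feature vector from the whole image: intensity
    statistics, a coarse intensity histogram, and gradient/frequency
    energies -- illustrative stand-ins for texture-style descriptors."""
    img = img.astype(float)
    gy, gx = np.gradient(img)
    hist, _ = np.histogram(img, bins=8, density=True)
    fft_energy = np.abs(np.fft.fft2(img)).mean()
    stats = [img.mean(), img.std(), img.min(), img.max(),
             np.hypot(gx, gy).mean(), fft_energy]
    return np.concatenate([stats, hist])

rng = np.random.default_rng(0)
img = rng.random((64, 64))  # toy stand-in for a chest X-ray
print(global_features(img).shape)  # → (14,)
```

Stacking one such vector per image yields the feature matrix that the paper's dimensionality-reduction and classification steps operate on.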
Affiliation(s)
- Abolfazl Zargari Khuzani
- Department of Electrical and Computer Engineering, University of California, Santa Cruz, Santa Cruz, CA, USA
- Morteza Heidari
- School of Electrical and Computer Engineering, The University of Oklahoma, Norman, OK, USA
- S Ali Shariati
- Department of Biomolecular Engineering, University of California, Santa Cruz, Santa Cruz, CA, USA
10. Caballo M, Hernandez AM, Lyu SH, Teuwen J, Mann RM, van Ginneken B, Boone JM, Sechopoulos I. Computer-aided diagnosis of masses in breast computed tomography imaging: deep learning model with combined handcrafted and convolutional radiomic features. J Med Imaging (Bellingham) 2021;8:024501. PMID: 33796604. DOI: 10.1117/1.jmi.8.2.024501.
Abstract
Purpose: A computer-aided diagnosis (CADx) system for breast masses is proposed, which incorporates both handcrafted and convolutional radiomic features embedded into a single deep learning model. Approach: The model combines handcrafted and convolutional radiomic signatures in a multi-view architecture, which retrieves three-dimensional (3D) image information by simultaneously processing multiple two-dimensional mass patches extracted along different planes through the 3D mass volume. Each patch is processed by a stream composed of two concatenated parallel branches: a multi-layer perceptron fed with automatically extracted handcrafted radiomic features, and a convolutional neural network, for which discriminant features are learned from the input patches. All streams are then concatenated into a final architecture, where all network weights are shared and learning occurs simultaneously for each stream and branch. The CADx system was developed and tested for diagnosis of breast masses (N = 284) using image datasets acquired with independent dedicated breast computed tomography systems from two different institutions. The diagnostic classification performance of the CADx system was compared against other machine and deep learning architectures adopting handcrafted and convolutional approaches, and against three board-certified breast radiologists. Results: On a test set of 82 masses (45 benign, 37 malignant), the proposed CADx system performed better than all other model architectures evaluated, with an increase in the area under the receiver operating characteristic curve (AUC) of 0.05 ± 0.02, achieving a final AUC of 0.947 and outperforming the three radiologists (AUC = 0.814-0.902). Conclusions: The system demonstrated its potential usefulness in breast cancer diagnosis by improving mass malignancy assessment.
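The multi-view idea of recovering 3D information from 2D patches can be illustrated by sampling patches through the volume center along the three orthogonal planes; the paper's actual sampling scheme may differ, so treat this numpy sketch as a simplified stand-in:

```python
import numpy as np

def center_patches(volume, size=16):
    """Extract 2D patches through the volume center along the three
    orthogonal planes (axial, coronal, sagittal) -- a simplified
    stand-in for multi-plane patch sampling through a 3D mass volume."""
    z, y, x = (d // 2 for d in volume.shape)
    h = size // 2
    return [volume[z, y-h:y+h, x-h:x+h],      # axial plane
            volume[z-h:z+h, y, x-h:x+h],      # coronal plane
            volume[z-h:z+h, y-h:y+h, x]]      # sagittal plane

vol = np.zeros((32, 32, 32))  # toy stand-in for a segmented mass volume
patches = center_patches(vol)
print([p.shape for p in patches])  # → [(16, 16), (16, 16), (16, 16)]
```

Each such patch would then feed one stream of the shared-weight architecture, with its handcrafted-feature branch and CNN branch processed in parallel.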
Affiliation(s)
- Marco Caballo
- Radboud University Medical Center, Department of Medical Imaging, Nijmegen, The Netherlands
- Andrew M Hernandez
- University of California Davis, Department of Radiology, Sacramento, California, United States
- Su Hyun Lyu
- University of California Davis, Department of Biomedical Engineering, Sacramento, California, United States
- Jonas Teuwen
- Radboud University Medical Center, Department of Medical Imaging, Nijmegen, The Netherlands; The Netherlands Cancer Institute, Department of Radiation Oncology, Amsterdam, The Netherlands
- Ritse M Mann
- Radboud University Medical Center, Department of Medical Imaging, Nijmegen, The Netherlands; The Netherlands Cancer Institute, Department of Radiology, Amsterdam, The Netherlands
- Bram van Ginneken
- Radboud University Medical Center, Department of Medical Imaging, Nijmegen, The Netherlands
- John M Boone
- University of California Davis, Department of Radiology, Sacramento, California, United States; University of California Davis, Department of Biomedical Engineering, Sacramento, California, United States
- Ioannis Sechopoulos
- Radboud University Medical Center, Department of Medical Imaging, Nijmegen, The Netherlands; Dutch Expert Center for Screening, Nijmegen, The Netherlands
11. Heidari M, Lakshmivarahan S, Mirniaharikandehei S, Danala G, Maryada SKR, Liu H, Zheng B. Applying a Random Projection Algorithm to Optimize Machine Learning Model for Breast Lesion Classification. IEEE Trans Biomed Eng 2021;68:2764-2775. PMID: 33493108. DOI: 10.1109/tbme.2021.3054248.
Abstract
OBJECTIVE Computer-aided diagnosis (CAD) schemes of medical images usually compute a large number of image features, which creates the challenge of identifying a small and optimal feature vector with which to build robust machine learning models. The objective of this study is to investigate the feasibility of applying a random projection algorithm (RPA) to build an optimal feature vector from the initial CAD-generated large feature pool and improve machine learning model performance. METHODS We assemble a retrospective dataset involving 1,487 cases of mammograms, in which 644 cases have confirmed malignant mass lesions and 843 have benign lesions. A CAD scheme is first applied to segment mass regions and initially compute 181 features. Then, support vector machine (SVM) models embedded with several feature dimensionality reduction methods are built to predict the likelihood of lesions being malignant. All SVM models are trained and tested using a leave-one-case-out cross-validation method. The SVM generates a likelihood score for each segmented mass region depicted on a one-view mammogram. By fusing the two scores of the same mass depicted on two-view mammograms, a case-based likelihood score is also evaluated. RESULTS Compared with principal component analysis, nonnegative matrix factorization, and Chi-squared methods, the SVM embedded with RPA yielded a significantly higher case-based lesion classification performance, with an area under the ROC curve of 0.84 ± 0.01 (p<0.02). CONCLUSION The study demonstrates that RPA is a promising method to generate optimal feature vectors and improve SVM performance. SIGNIFICANCE This study presents a new way to develop CAD schemes with significantly higher and more robust performance.
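A random projection compresses a feature vector by multiplying it with a random Gaussian matrix, which (by the Johnson-Lindenstrauss lemma) approximately preserves pairwise distances. A minimal numpy sketch on a synthetic 181-feature pool; the target dimension 20 is an illustrative choice, not the paper's:

```python
import numpy as np

def random_projection(X, k, seed=0):
    """Project an (n, d) feature matrix down to k dimensions with a
    Gaussian random matrix; scaling by 1/sqrt(k) keeps expected
    pairwise distances approximately unchanged."""
    rng = np.random.default_rng(seed)
    R = rng.normal(size=(X.shape[1], k)) / np.sqrt(k)
    return X @ R

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 181))   # synthetic stand-in for the feature pool
Z = random_projection(X, k=20)
print(Z.shape)  # → (100, 20)
```

Unlike PCA, the projection is data-independent and cheap to compute, which is part of its appeal for large feature pools; the projected vectors would then feed the SVM exactly as in the study's other reduction methods.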
Collapse
|
12
|
Tan M, Al-Shabi M, Chan WY, Thomas L, Rahmat K, Ng KH. Comparison of two-dimensional synthesized mammograms versus original digital mammograms: a quantitative assessment. Med Biol Eng Comput 2021; 59:355-367. [PMID: 33447988 DOI: 10.1007/s11517-021-02313-1] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2019] [Accepted: 01/07/2021] [Indexed: 12/13/2022]
Abstract
This study objectively evaluates the similarity between standard full-field digital mammograms and two-dimensional synthesized digital mammograms (2DSM) in a cohort of women undergoing mammography. Under an institutional review board-approved data collection protocol, we retrospectively analyzed 407 women with digital breast tomosynthesis (DBT) and full-field digital mammography (FFDM) examinations performed from September 1, 2014, through February 29, 2016. Both FFDM and 2DSM images were used for the analysis, and a total of 3216 available craniocaudal (CC) and mediolateral oblique (MLO) view mammograms were included in the dataset. We analyzed the mammograms using a fully automated algorithm that computes 152 structural similarity, texture, and mammographic density-based features. We trained and developed two different global mammographic image feature analysis-based breast cancer detection schemes for 2DSM and FFDM images, respectively. The highest structural similarity values were obtained on the coarse Weber Local Descriptor differential excitation texture feature component, computed on the CC view images (0.8770) and the MLO view images (0.8889). Although the coarse structures are similar, the global mammographic image feature-based cancer detection scheme trained on 2DSM images outperformed the corresponding scheme trained on FFDM images, with areas under the receiver operating characteristic curve (AUC) of 0.878 ± 0.034 and 0.756 ± 0.052, respectively. Consequently, further investigation is required to examine whether DBT can replace FFDM as a standalone technique, especially for the development of automated objective-based methods.
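The structural similarity comparison at the core of this study can be illustrated with the classic global SSIM formula. This is a rough stand-in for the paper's 152-feature analysis, not its actual algorithm, and the arrays below are synthetic substitutes for real FFDM/2DSM pairs.

```python
import numpy as np

def ssim_global(a, b, data_range=1.0):
    # Global (single-window) structural similarity index between two images,
    # using the standard SSIM constants c1 and c2.
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a**2 + mu_b**2 + c1) * (var_a + var_b + c2)
    )

rng = np.random.default_rng(1)
ffdm = rng.random((256, 256))                      # stand-in FFDM image
# Simulate a 2DSM as a slightly perturbed version of the FFDM.
dsm = np.clip(ffdm + rng.normal(scale=0.05, size=ffdm.shape), 0.0, 1.0)

score = ssim_global(ffdm, dsm)
print(f"global SSIM: {score:.3f}")
```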
Collapse
Affiliation(s)
- Maxine Tan
- Electrical and Computer Systems Engineering Discipline, School of Engineering, Monash University Malaysia, Jalan Lagoon Selatan, Bandar Sunway, 47500, Subang Jaya, Selangor, Malaysia; School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK, 73019, USA.
| | - Mundher Al-Shabi
- Electrical and Computer Systems Engineering Discipline, School of Engineering, Monash University Malaysia, Jalan Lagoon Selatan, Bandar Sunway, 47500, Subang Jaya, Selangor, Malaysia
| | - Wai Yee Chan
- Department of Biomedical Imaging and University of Malaya Research Imaging Centre, Faculty of Medicine, University of Malaya, 50603, Kuala Lumpur, Malaysia
| | - Leya Thomas
- Department of Biomedical Imaging and University of Malaya Research Imaging Centre, Faculty of Medicine, University of Malaya, 50603, Kuala Lumpur, Malaysia
| | - Kartini Rahmat
- Department of Biomedical Imaging and University of Malaya Research Imaging Centre, Faculty of Medicine, University of Malaya, 50603, Kuala Lumpur, Malaysia
| | - Kwan Hoong Ng
- Department of Biomedical Imaging and University of Malaya Research Imaging Centre, Faculty of Medicine, University of Malaya, 50603, Kuala Lumpur, Malaysia
| |
Collapse
|
13
|
Heidari M, Mirniaharikandehei S, Khuzani AZ, Danala G, Qiu Y, Zheng B. Improving the performance of CNN to predict the likelihood of COVID-19 using chest X-ray images with preprocessing algorithms. Int J Med Inform 2020; 144:104284. [PMID: 32992136 PMCID: PMC7510591 DOI: 10.1016/j.ijmedinf.2020.104284] [Citation(s) in RCA: 146] [Impact Index Per Article: 36.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2020] [Revised: 09/17/2020] [Accepted: 09/21/2020] [Indexed: 01/06/2023]
Abstract
OBJECTIVE This study aims to develop and test a new computer-aided diagnosis (CAD) scheme of chest X-ray images to detect coronavirus (COVID-19) infected pneumonia. METHOD The CAD scheme first applies two image preprocessing steps: removing the majority of diaphragm regions, then processing the original image with a histogram equalization algorithm and a bilateral low-pass filter. The original image and the two filtered images are then combined to form a pseudo color image. This image is fed into the three input channels of a transfer learning-based convolutional neural network (CNN) model to classify chest X-ray images into three classes: COVID-19 infected pneumonia, other community-acquired non-COVID-19 infected pneumonia, and normal (non-pneumonia) cases. To build and test the CNN model, a publicly available dataset of 8474 chest X-ray images is used, which includes 415, 5179, and 2880 cases in the three classes, respectively. The dataset is randomly divided into three subsets, namely training, validation, and testing, preserving the same frequency of cases in each class, to train and test the CNN model. RESULTS The CNN-based CAD scheme yields an overall accuracy of 94.5% (2404/2544) with a 95% confidence interval of [0.93, 0.96] in classifying the three classes. The CAD scheme also yields 98.4% sensitivity (124/126) and 98.0% specificity (2371/2418) in classifying cases with and without COVID-19 infection. However, without the two preprocessing steps, the CAD scheme yields a lower classification accuracy of 88.0% (2239/2544). CONCLUSION This study demonstrates that adding two image preprocessing steps and generating a pseudo color image plays an important role in developing a deep learning CAD scheme of chest X-ray images to improve accuracy in detecting COVID-19 infected pneumonia.
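The pseudo-color construction this abstract describes — original image plus two filtered versions stacked into three CNN input channels — can be sketched as follows. This is an illustration, not the authors' code: the simple box filter below is a stand-in for the paper's bilateral low-pass filter (a true bilateral filter also weights by intensity difference), and diaphragm removal is omitted.

```python
import numpy as np

def hist_equalize(img):
    # Classic CDF-based histogram equalization for a uint8 grayscale image.
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())
    lut = (cdf * 255).astype(np.uint8)
    return lut[img]

def box_lowpass(img, k=5):
    # Separable k x k mean filter: a crude stand-in for the bilateral filter.
    kernel = np.ones(k) / k
    out = img.astype(float)
    for axis in (0, 1):
        out = np.apply_along_axis(np.convolve, axis, out, kernel, "same")
    return out

rng = np.random.default_rng(0)
cxr = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # synthetic CXR

# Stack original + two filtered images into the CNN's three input channels.
pseudo = np.stack(
    [cxr / 255.0, hist_equalize(cxr) / 255.0, box_lowpass(cxr) / 255.0],
    axis=-1,
)
print(pseudo.shape)  # (H, W, 3), ready for a pretrained 3-channel CNN
```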
Collapse
Affiliation(s)
- Morteza Heidari
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA.
| | | | - Abolfazl Zargari Khuzani
- Department of Electrical and Computer Engineering, University of California Santa Cruz, Santa Cruz, CA 95064, USA
| | - Gopichandh Danala
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
| | - Yuchen Qiu
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
| | - Bin Zheng
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
| |
Collapse
|
14
|
Khuzani AZ, Heidari M, Shariati SA. COVID-Classifier: An automated machine learning model to assist in the diagnosis of COVID-19 infection in chest x-ray images. medRxiv [Preprint] 2020:2020.05.09.20096560. [PMID: 32511510 PMCID: PMC7273278 DOI: 10.1101/2020.05.09.20096560] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
Chest X-ray (CXR) radiography can be used as a first-line triage process for non-COVID-19 patients with pneumonia. However, the similarity between features of CXR images of COVID-19 and of pneumonia caused by other infections makes the differential diagnosis by radiologists challenging. We hypothesized that machine learning-based classifiers can reliably distinguish the CXR images of COVID-19 patients from other forms of pneumonia. We used a dimensionality reduction method to generate a set of optimal features of CXR images to build an efficient machine learning classifier that can distinguish COVID-19 cases from non-COVID-19 cases with high accuracy and sensitivity. By using global features of the whole CXR images, we were able to successfully implement our classifier using a relatively small dataset of CXR images. We propose that our COVID-Classifier can be used in conjunction with other tests for optimal allocation of hospital resources by rapid triage of non-COVID-19 cases.
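The workflow described here — global image features, dimensionality reduction, then a classifier — can be sketched with an off-the-shelf pipeline. This is a hedged illustration only: PCA and logistic regression are stand-ins for whichever reduction method and classifier the authors used, and the feature matrix and labels are synthetic.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 252))   # e.g. global texture features per CXR (synthetic)
y = rng.integers(0, 2, size=200)  # 0 = non-COVID pneumonia, 1 = COVID-19 (random)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Reduce the global feature vector to 32 components, then classify.
clf = make_pipeline(PCA(n_components=32), LogisticRegression(max_iter=1000))
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"test accuracy: {acc:.2f}")
```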
Collapse
Affiliation(s)
- Abolfazl Zargari Khuzani
- Department of Electrical and Computer Engineering, University of California, Santa Cruz, Santa Cruz, CA
| | - Morteza Heidari
- School of Electrical and Computer Engineering, The University of Oklahoma, Norman, OK
| | - S. Ali Shariati
- Department of Biomolecular Engineering, University of California, Santa Cruz, Santa Cruz, CA
| |
Collapse
|
15
|
AlKubeyyer A, Ben Ismail MM, Bchir O, Alkubeyyer M. Automatic detection of the meningioma tumor firmness in MRI images. J Xray Sci Technol 2020; 28:659-682. [PMID: 32538892 DOI: 10.3233/xst-200644] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Meningioma is among the most common primary tumors of the brain. The firmness of a meningioma is a critical factor that influences operative strategy and patient counseling. Conventional methods to predict tumor firmness rely on the correlation between the consistency of meningiomas and their preoperative MRI findings, such as the signal intensity ratio between the tumor and the normal grey matter of the brain. Machine learning techniques have not yet been investigated to address the meningioma firmness detection problem. The main purpose of this research is to couple supervised learning algorithms with typical descriptors to develop a computer-aided detection (CAD) system for meningioma tumor firmness in MRI images. Specifically, Local Binary Patterns (LBP), Gray Level Co-occurrence Matrix (GLCM), and Discrete Wavelet Transform (DWT) features are extracted from real labeled MRI T2-weighted images and fed into classifiers, namely the support vector machine (SVM) and the k-nearest neighbor (KNN) algorithm, to learn the association between the visual properties of the region of interest and the pre-defined firm and soft classes. The learned model is then used to classify unlabeled MRI T2-weighted images. This paper presents a baseline comparison of different features used in a CAD system that aims to accurately recognize meningioma tumor firmness. The proposed system was implemented and assessed using a clinical dataset. The LBP feature coupled with the KNN classifier yielded the best performance, with an F-score of 95%, a balanced accuracy of 87%, and an area under the ROC curve (AUC) of 0.87.
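The best-performing combination reported here, LBP features with a KNN classifier, can be sketched in a few lines. This is an illustrative toy version only: the LBP variant, neighborhood, and data below are assumptions, and the images and firm/soft labels are synthetic.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def lbp_histogram(img):
    # Basic 8-neighbour local binary pattern code per pixel,
    # summarized as a normalized 256-bin histogram.
    h, w = img.shape
    center = img[1:-1, 1:-1]
    code = np.zeros_like(center, dtype=np.int64)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (nb >= center).astype(np.int64) << bit
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(4)
# Toy stand-ins for regions of interest: high-frequency vs blocky texture.
firm = rng.integers(0, 256, size=(5, 32, 32))
soft = np.repeat(rng.integers(0, 256, size=(5, 8, 8)), 4, axis=1).repeat(4, axis=2)

X = np.array([lbp_histogram(im) for im in np.concatenate([firm, soft])])
y = np.array([1] * 5 + [0] * 5)  # 1 = firm, 0 = soft (toy labels)

knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
pred = knn.predict(X[:1])
```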
Collapse
Affiliation(s)
- Atheer AlKubeyyer
- Computer Science Department, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia
| | - Mohamed Maher Ben Ismail
- Computer Science Department, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia
| | - Ouiem Bchir
- Computer Science Department, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia
| | - Metab Alkubeyyer
- Department of Radiology and Medical Imaging, King Khalid University Hospital, King Saud University, Riyadh, Saudi Arabia
| |
Collapse
|