1
Attallah O. Skin-CAD: Explainable deep learning classification of skin cancer from dermoscopic images by feature selection of dual high-level CNNs features and transfer learning. Comput Biol Med 2024; 178:108798. [PMID: 38925085] [DOI: 10.1016/j.compbiomed.2024.108798]
Abstract
Skin cancer (SC) significantly impacts the health of many individuals worldwide. Hence, it is imperative to promptly identify and diagnose such conditions at their earliest stages using dermoscopic imaging. Computer-aided diagnosis (CAD) methods relying on deep learning techniques, especially convolutional neural networks (CNNs), can effectively address this issue with outstanding outcomes. Nevertheless, such black-box methodologies lead to a deficiency in confidence, as dermatologists are incapable of comprehending and verifying the predictions made by these models. This article presents an advanced explainable artificial intelligence (XAI)-based CAD system named "Skin-CAD" for the classification of dermoscopic photographs of SC. The system accurately categorises the photographs into two categories, benign or malignant, and further classifies them into seven subclasses of SC. Skin-CAD employs four CNNs of different topologies and deep layers. It gathers features from a pair of deep layers of every CNN, particularly the final pooling and fully connected layers, rather than merely depending on attributes from a single deep layer. Skin-CAD applies principal component analysis (PCA) to reduce the dimension of the pooling-layer features, which also reduces the complexity of the training procedure compared with using deep features from a CNN of substantial size. Furthermore, it combines the reduced pooling features with the fully connected features of each CNN. Additionally, Skin-CAD integrates the dual-layer features of the four CNNs instead of depending entirely on the features of a single CNN architecture. Finally, it applies a feature selection step to determine the most important deep attributes, which decreases the overall size of the feature set and streamlines the classification process.
Predictions are analysed in more depth using the local interpretable model-agnostic explanations (LIME) approach. This method is used to create visual interpretations that align with an already existing viewpoint and adhere to recommended standards for general clarifications. Two benchmark datasets are employed to validate the efficiency of Skin-CAD: the Skin Cancer: Malignant vs. Benign and HAM10000 datasets. The maximum accuracy achieved using Skin-CAD is 97.2% and 96.5% for the Skin Cancer: Malignant vs. Benign and HAM10000 datasets, respectively. The findings of Skin-CAD demonstrate its potential to assist professional dermatologists in detecting and classifying SC precisely and quickly.
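As a rough sketch of the fusion idea described above (not the authors' code; the feature dimensions below are illustrative, not those of the paper), reducing pooling-layer features with PCA and concatenating them with fully connected features can be written in plain NumPy:

```python
import numpy as np

def pca_reduce(features, n_components):
    # Center the feature matrix and project onto the top principal
    # components, obtained here via SVD of the centered data
    centered = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

rng = np.random.default_rng(0)
pool_feats = rng.normal(size=(32, 512))  # hypothetical pooling-layer features
fc_feats = rng.normal(size=(32, 128))    # hypothetical fully connected features

reduced = pca_reduce(pool_feats, 20)     # shrink pooling features to 20 dims
fused = np.concatenate([reduced, fc_feats], axis=1)
print(fused.shape)  # (32, 148)
```

The same concatenation would then be repeated across the four CNNs before the final feature-selection step.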
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria, 21937, Egypt; Wearables, Biosensing, and Biosignal Processing Laboratory, Arab Academy for Science, Technology and Maritime Transport, Alexandria, 21937, Egypt
2
Khan H, Abu-Raisi M, Feasson M, Shaikh F, Saposnik G, Mamdani M, Qadura M. Current Prognostic Biomarkers for Abdominal Aortic Aneurysm: A Comprehensive Scoping Review of the Literature. Biomolecules 2024; 14:661. [PMID: 38927064] [PMCID: PMC11201473] [DOI: 10.3390/biom14060661]
Abstract
Abdominal aortic aneurysm (AAA) is a progressive dilatation of the aorta that can lead to aortic rupture. The pathophysiology of the disease is not well characterized but is known to be caused by the general breakdown of the extracellular matrix within the aortic wall. In this comprehensive literature review, all current research on proteins that have been investigated for their potential prognostic capabilities in patients with AAA was included. A total of 45 proteins were found to be potential prognostic biomarkers for AAA, predicting incidence of AAA, AAA rupture, AAA growth, endoleak, and post-surgical mortality. The 45 proteins fell into seven general categories based on their primary function: (1) cardiovascular health, (2) hemostasis, (3) transport proteins, (4) inflammation and immunity, (5) kidney function, (6) cellular structure, and (7) hormones and growth factors. This is the most up-to-date literature review on current prognostic markers for AAA and their functions. This review outlines the wide pathophysiological processes that are implicated in AAA disease progression.
Affiliation(s)
- Hamzah Khan
- Division of Vascular Surgery, St. Michael’s Hospital, Toronto, ON M5B 1W8, Canada
- Li Ka Shing Knowledge Institute, St. Michael’s Hospital, Unity Health Toronto, Toronto, ON M5B 1W8, Canada
- Mohamed Abu-Raisi
- Division of Vascular Surgery, St. Michael’s Hospital, Toronto, ON M5B 1W8, Canada
- Li Ka Shing Knowledge Institute, St. Michael’s Hospital, Unity Health Toronto, Toronto, ON M5B 1W8, Canada
- Manon Feasson
- Li Ka Shing Knowledge Institute, St. Michael’s Hospital, Unity Health Toronto, Toronto, ON M5B 1W8, Canada
- Farah Shaikh
- Division of Vascular Surgery, St. Michael’s Hospital, Toronto, ON M5B 1W8, Canada
- Li Ka Shing Knowledge Institute, St. Michael’s Hospital, Unity Health Toronto, Toronto, ON M5B 1W8, Canada
- Gustavo Saposnik
- Li Ka Shing Knowledge Institute, St. Michael’s Hospital, Unity Health Toronto, Toronto, ON M5B 1W8, Canada
- Division of Neurology, Department of Medicine, University of Toronto, Toronto, ON M5S 1A1, Canada
- Muhammad Mamdani
- Li Ka Shing Knowledge Institute, St. Michael’s Hospital, Unity Health Toronto, Toronto, ON M5B 1W8, Canada
- Mohammad Qadura
- Division of Vascular Surgery, St. Michael’s Hospital, Toronto, ON M5B 1W8, Canada
- Li Ka Shing Knowledge Institute, St. Michael’s Hospital, Unity Health Toronto, Toronto, ON M5B 1W8, Canada
- Department of Surgery, University of Toronto, Toronto, ON M5T 1P5, Canada
3
Sharkas M, Attallah O. Color-CADx: a deep learning approach for colorectal cancer classification through triple convolutional neural networks and discrete cosine transform. Sci Rep 2024; 14:6914. [PMID: 38519513] [PMCID: PMC10959971] [DOI: 10.1038/s41598-024-56820-w]
Abstract
Colorectal cancer (CRC) exhibits a significant death rate that consistently impacts human lives worldwide. Histopathological examination is the standard method for CRC diagnosis. However, it is complicated, time-consuming, and subjective. Computer-aided diagnostic (CAD) systems using digital pathology can help pathologists diagnose CRC faster and more accurately than manual histopathology examinations. Deep learning algorithms, especially convolutional neural networks (CNNs), are advocated for the diagnosis of CRC. Nevertheless, most previous CAD systems obtained features from a single CNN, and these features are of huge dimension; they also relied on spatial information only to achieve classification. In this paper, a CAD system called "Color-CADx" is proposed for CRC recognition. Different CNNs, namely ResNet50, DenseNet201, and AlexNet, are used for end-to-end classification at different training-testing ratios. Moreover, features are extracted from these CNNs and reduced using the discrete cosine transform (DCT). DCT is also utilized to acquire a spectral representation, which is then used to further select a reduced set of deep features. Furthermore, the DCT coefficients obtained in the previous step are concatenated, and the analysis of variance (ANOVA) feature selection approach is applied to choose significant features. Finally, machine learning classifiers are employed for CRC classification. Two publicly available datasets were investigated: the NCT-CRC-HE-100K dataset and the Kather_texture_2016_image_tiles dataset. The highest achieved accuracy reached 99.3% for the NCT-CRC-HE-100K dataset and 96.8% for the Kather_texture_2016_image_tiles dataset. DCT and ANOVA have successfully lowered feature dimensionality, thus reducing complexity. Color-CADx has demonstrated efficacy in terms of accuracy, as its performance surpasses that of the most recent advancements.
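The ANOVA feature-selection step mentioned above can be sketched as a per-feature one-way F-statistic ranking. This is a generic illustration on synthetic data, not the paper's pipeline; the variable names are hypothetical:

```python
import numpy as np

def anova_f_scores(X, y):
    # One-way ANOVA F-statistic per feature:
    # between-class variance divided by within-class variance
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    ss_between = np.zeros(X.shape[1])
    ss_within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        ss_between += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        ss_within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    df_between = len(classes) - 1
    df_within = X.shape[0] - len(classes)
    return (ss_between / df_between) / (ss_within / df_within)

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 10))        # 40 samples, 10 candidate features
y = np.repeat([0, 1], 20)
X[:20, 3] += 3.0                     # make feature 3 strongly class-dependent

scores = anova_f_scores(X, y)
print(int(np.argmax(scores)))        # feature 3 should rank highest
```

Features with the largest F-scores would then be retained for the downstream classifiers.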
Affiliation(s)
- Maha Sharkas
- Electronics and Communications Engineering Department, College of Engineering and Technology, Arab Academy for Science, Technology, and Maritime Transport, Alexandria, Egypt
- Omneya Attallah
- Electronics and Communications Engineering Department, College of Engineering and Technology, Arab Academy for Science, Technology, and Maritime Transport, Alexandria, Egypt
- Wearables, Biosensing, and Biosignal Processing Laboratory, Arab Academy for Science, Technology and Maritime Transport, Alexandria, 21937, Egypt
4
Long B, Cremat DL, Serpa E, Qian S, Blebea J. Applying Artificial Intelligence to Predict Complications After Endovascular Aneurysm Repair. Vasc Endovascular Surg 2024; 58:65-75. [PMID: 37429299] [DOI: 10.1177/15385744231189024]
Abstract
Objective: Complications after Endovascular Aneurysm Repair (EVAR) can be fatal. Patient follow-up for surveillance imaging is becoming more challenging as fewer patients are seen, particularly after the first year. The aim of this study was to develop an artificial intelligence model to predict the complication probability of individual patients to better identify those needing more intensive post-operative surveillance. Methods: Pre-operative CTA 3D reconstruction images of AAA from 273 patients who underwent EVAR from 2011-2020 were collected. Of these, 48 patients had post-operative complications including endoleak, AAA rupture, graft limb occlusion, renal artery occlusion, and neck dilation. A deep convolutional neural network model (VascAI©) was developed which utilized pre-operative 3D CT images to predict risk of complications after EVAR. The model was built with TensorFlow software and run on the Google Colab Platform. An initial training subset of 40 randomly selected patients with complications and 189 without were used to train the AI model, while the remaining 8 positive and 36 negative cases tested its performance and prediction accuracy. Data down-sampling was used to alleviate data imbalance, and data augmentation methodology was used to further boost model performance. Results: Successful training was completed on the 229 cases in the training set and then applied to predict the complication probability of each individual in the held-out performance testing cases. The model provided a complication sensitivity of 100% and identified all the patients who later developed complications after EVAR. Of 36 patients without complications, 16 (44%) were falsely predicted to develop complications. The results therefore demonstrated excellent sensitivity for identifying patients who would benefit from more stringent surveillance, and support decreasing the frequency of surveillance in the 56% of patients unlikely to develop complications.
Conclusion: AI models can be developed to predict the risk of post-operative complications with high accuracy. Compared to existing methods, the model developed in this study did not require any expert-annotated data but only the AAA CTA images as inputs. This model can play an assistive role in identifying patients at high risk for post-EVAR complications and the need for greater compliance in surveillance.
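The held-out test figures reported above (all 8 positives caught, 16 of 36 negatives falsely flagged) follow directly from the standard sensitivity/specificity definitions; a minimal illustration, with the labels reconstructed from those counts:

```python
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    # Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn), tn / (tn + fp)

# Test split as reported: 8 positives all predicted positive,
# 16 of 36 negatives falsely predicted positive
y_true = np.array([1] * 8 + [0] * 36)
y_pred = np.array([1] * 8 + [1] * 16 + [0] * 20)

sens, spec = sensitivity_specificity(y_true, y_pred)
print(sens, round(spec, 2))  # 1.0 0.56
```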
Affiliation(s)
- Becky Long
- Department of Surgery, College of Medicine, Central Michigan University, Saginaw, MI, USA
- Danielle L Cremat
- Department of Surgery, College of Medicine, Central Michigan University, Saginaw, MI, USA
- Eduardo Serpa
- Department of Surgery, College of Medicine, Central Michigan University, Saginaw, MI, USA
- Sinong Qian
- Department of Surgery, College of Medicine, Central Michigan University, Saginaw, MI, USA
- John Blebea
- Department of Surgery, College of Medicine, Central Michigan University, Saginaw, MI, USA
5
Li B, Aljabri B, Verma R, Beaton D, Eisenberg N, Lee DS, Wijeysundera DN, Forbes TL, Rotstein OD, de Mestral C, Mamdani M, Roche-Nagle G, Al-Omran M. Machine learning to predict outcomes following endovascular abdominal aortic aneurysm repair. Br J Surg 2023; 110:1840-1849. [PMID: 37710397] [DOI: 10.1093/bjs/znad287]
Abstract
BACKGROUND Endovascular aneurysm repair (EVAR) for abdominal aortic aneurysm (AAA) carries important perioperative risks; however, there are no widely used outcome prediction tools. The aim of this study was to apply machine learning (ML) to develop automated algorithms that predict 1-year mortality following EVAR. METHODS The Vascular Quality Initiative database was used to identify patients who underwent elective EVAR for infrarenal AAA between 2003 and 2023. Input features included 47 preoperative demographic/clinical variables. The primary outcome was 1-year all-cause mortality. Data were split into training (70 per cent) and test (30 per cent) sets. Using 10-fold cross-validation, 6 ML models were trained using preoperative features with logistic regression as the baseline comparator. The primary model evaluation metric was area under the receiver operating characteristic curve (AUROC). Model robustness was evaluated with calibration plot and Brier score. RESULTS Some 63 655 patients were included. One-year mortality occurred in 3122 (4.9 per cent) patients. The best performing prediction model for 1-year mortality was XGBoost, achieving an AUROC (95 per cent c.i.) of 0.96 (0.95-0.97). Comparatively, logistic regression had an AUROC (95 per cent c.i.) of 0.69 (0.68-0.71). The calibration plot showed good agreement between predicted and observed event probabilities with a Brier score of 0.04. The top 3 predictive features in the algorithm were 1) unfit for open AAA repair, 2) functional status, and 3) preoperative dialysis. CONCLUSIONS In this data set, machine learning was able to predict 1-year mortality following EVAR using preoperative data and outperformed standard logistic regression models.
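The study's primary metric (AUROC) and its calibration check (Brier score) are both simple to compute from predicted probabilities; a small NumPy sketch (generic definitions, not the authors' evaluation code; the rank-based AUROC below omits tie correction, which is fine for distinct scores):

```python
import numpy as np

def auroc(y_true, y_score):
    # Probability a random positive is ranked above a random negative
    # (Mann-Whitney U formulation; no tie correction)
    order = np.argsort(y_score)
    ranks = np.empty(len(y_score))
    ranks[order] = np.arange(1, len(y_score) + 1)
    pos = y_true == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def brier(y_true, y_prob):
    # Mean squared error between predicted probability and outcome
    return float(np.mean((y_prob - y_true) ** 2))

y_true = np.array([0, 0, 1, 1, 0, 1])
y_prob = np.array([0.1, 0.3, 0.8, 0.7, 0.2, 0.9])
print(auroc(y_true, y_prob))  # 1.0: every positive outranks every negative
print(round(brier(y_true, y_prob), 3))
```

On this toy input the ranking is perfect (AUROC 1.0) while the Brier score still penalizes probabilities that are not exactly 0 or 1.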
Affiliation(s)
- Ben Li
- Department of Surgery, University of Toronto, Toronto, Ontario, Canada
- Division of Vascular Surgery, St. Michael's Hospital, Unity Health Toronto, Toronto, Ontario, Canada
- Institute of Medical Science, University of Toronto, Toronto, Ontario, Canada
- Temerty Centre for Artificial Intelligence Research and Education in Medicine (T-CAIREM), University of Toronto, Toronto, Ontario, Canada
- Badr Aljabri
- Department of Surgery, King Saud University, Riyadh, Kingdom of Saudi Arabia
- Raj Verma
- School of Medicine, Royal College of Surgeons in Ireland, University of Medicine and Health Sciences, Dublin, Ireland
- Derek Beaton
- Data Science & Advanced Analytics, Unity Health Toronto, University of Toronto, Toronto, Ontario, Canada
- Naomi Eisenberg
- Division of Vascular Surgery, Peter Munk Cardiac Centre, University Health Network, Toronto, Ontario, Canada
- Douglas S Lee
- Division of Cardiology, Peter Munk Cardiac Centre, University Health Network, Toronto, Ontario, Canada
- Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, Ontario, Canada
- ICES, University of Toronto, Toronto, Ontario, Canada
- Duminda N Wijeysundera
- Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, Ontario, Canada
- ICES, University of Toronto, Toronto, Ontario, Canada
- Department of Anesthesia, St. Michael's Hospital, Unity Health Toronto, Toronto, Ontario, Canada
- Li Ka Shing Knowledge Institute, St. Michael's Hospital, Unity Health Toronto, Toronto, Ontario, Canada
- Thomas L Forbes
- Department of Surgery, University of Toronto, Toronto, Ontario, Canada
- Institute of Medical Science, University of Toronto, Toronto, Ontario, Canada
- Division of Vascular Surgery, Peter Munk Cardiac Centre, University Health Network, Toronto, Ontario, Canada
- Ori D Rotstein
- Department of Surgery, University of Toronto, Toronto, Ontario, Canada
- Institute of Medical Science, University of Toronto, Toronto, Ontario, Canada
- Li Ka Shing Knowledge Institute, St. Michael's Hospital, Unity Health Toronto, Toronto, Ontario, Canada
- Division of General Surgery, St. Michael's Hospital, Unity Health Toronto, Toronto, Ontario, Canada
- Charles de Mestral
- Department of Surgery, University of Toronto, Toronto, Ontario, Canada
- Division of Vascular Surgery, St. Michael's Hospital, Unity Health Toronto, Toronto, Ontario, Canada
- Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, Ontario, Canada
- ICES, University of Toronto, Toronto, Ontario, Canada
- Li Ka Shing Knowledge Institute, St. Michael's Hospital, Unity Health Toronto, Toronto, Ontario, Canada
- Muhammad Mamdani
- Institute of Medical Science, University of Toronto, Toronto, Ontario, Canada
- Temerty Centre for Artificial Intelligence Research and Education in Medicine (T-CAIREM), University of Toronto, Toronto, Ontario, Canada
- Data Science & Advanced Analytics, Unity Health Toronto, University of Toronto, Toronto, Ontario, Canada
- Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, Ontario, Canada
- ICES, University of Toronto, Toronto, Ontario, Canada
- Li Ka Shing Knowledge Institute, St. Michael's Hospital, Unity Health Toronto, Toronto, Ontario, Canada
- Leslie Dan Faculty of Pharmacy, University of Toronto, Toronto, Ontario, Canada
- Graham Roche-Nagle
- Department of Surgery, University of Toronto, Toronto, Ontario, Canada
- Division of Vascular Surgery, Peter Munk Cardiac Centre, University Health Network, Toronto, Ontario, Canada
- Mohammed Al-Omran
- Department of Surgery, University of Toronto, Toronto, Ontario, Canada
- Division of Vascular Surgery, St. Michael's Hospital, Unity Health Toronto, Toronto, Ontario, Canada
- Institute of Medical Science, University of Toronto, Toronto, Ontario, Canada
- Temerty Centre for Artificial Intelligence Research and Education in Medicine (T-CAIREM), University of Toronto, Toronto, Ontario, Canada
- Li Ka Shing Knowledge Institute, St. Michael's Hospital, Unity Health Toronto, Toronto, Ontario, Canada
- Department of Surgery, King Faisal Specialist Hospital and Research Center, Riyadh, Kingdom of Saudi Arabia
6
Asaadi S, Martins KN, Lee MM, Pantoja JL. Artificial intelligence for the vascular surgeon. Semin Vasc Surg 2023; 36:394-400. [PMID: 37863611] [DOI: 10.1053/j.semvascsurg.2023.05.001]
Abstract
In recent years, artificial intelligence (AI) has permeated different aspects of vascular surgery to solve challenges in clinical practice. Although AI in vascular surgery is still in its early stages, there have been promising developments in its applications to vascular diagnosis, risk stratification, and outcome prediction. By establishing a baseline knowledge of AI, vascular surgeons are better equipped to use and interpret the data from these types of projects. This review aims to provide an overview of the fundamentals of AI and highlight its role in helping vascular surgeons overcome the challenges of clinical practice. In addition, we discuss the limitations of AI and how they affect AI applications.
Affiliation(s)
- Sina Asaadi
- Veterans Administration Loma Linda Healthcare System, 11201 Benton Street, Mail Code 112, Loma Linda, CA 92357
- Mary M Lee
- Veterans Administration Loma Linda Healthcare System, 11201 Benton Street, Mail Code 112, Loma Linda, CA 92357
- Joe Luis Pantoja
- Veterans Administration Loma Linda Healthcare System, 11201 Benton Street, Mail Code 112, Loma Linda, CA 92357
7
Auto-MyIn: Automatic diagnosis of myocardial infarction via multiple GLCMs, CNNs, and SVMs. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104273]
8
Attallah O, Aslan MF, Sabanci K. A Framework for Lung and Colon Cancer Diagnosis via Lightweight Deep Learning Models and Transformation Methods. Diagnostics (Basel) 2022; 12:2926. [PMID: 36552933] [PMCID: PMC9776637] [DOI: 10.3390/diagnostics12122926]
Abstract
Among the leading causes of mortality and morbidity in people are lung and colon cancers. They may develop concurrently in organs and negatively impact human life. If cancer is not diagnosed in its early stages, there is a great likelihood that it will spread to the two organs. The histopathological detection of such malignancies is one of the most crucial components of effective treatment. Although the process is lengthy and complex, deep learning (DL) techniques have made it feasible to complete it more quickly and accurately, enabling researchers to study many more patients in a shorter time and at far lower cost. Earlier studies relied on DL models that require great computational ability and resources, and most of them depended on individual DL models to extract features of high dimension or to perform diagnoses. In this study, by contrast, a framework based on multiple lightweight DL models is proposed for the early detection of lung and colon cancers. The framework utilizes several transformation methods that perform feature reduction and provide a better representation of the data. In this context, histopathology scans are fed into the ShuffleNet, MobileNet, and SqueezeNet models. The number of deep features acquired from these models is subsequently reduced using principal component analysis (PCA) and fast Walsh-Hadamard transform (FWHT) techniques. Following that, the discrete wavelet transform (DWT) is used to fuse the FWHT-reduced features obtained from the three DL models. Additionally, the three DL models' PCA features are concatenated. Finally, the diminished features resulting from the PCA and FWHT-DWT reduction and fusion processes are fed to four distinct machine learning algorithms, reaching the highest accuracy of 99.6%.
The results obtained using the proposed framework based on lightweight DL models show that it can distinguish lung and colon cancer variants with a lower number of features and less computational complexity compared to existing methods. They also prove that utilizing transformation methods to reduce features can offer a superior interpretation of the data, thus improving the diagnosis procedure.
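The fast Walsh-Hadamard transform used for feature reduction above can be sketched with the standard butterfly recursion; a generic illustration on a toy 4-element vector (input length must be a power of two), not the paper's implementation:

```python
import numpy as np

def fwht(x):
    # Fast Walsh-Hadamard transform via in-place butterflies
    # (input length must be a power of two)
    x = x.astype(float).copy()
    h = 1
    while h < len(x):
        for i in range(0, len(x), h * 2):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b
        h *= 2
    return x

v = np.array([1.0, 0.0, 1.0, 0.0])
t = fwht(v)
print(t)  # [2. 2. 0. 0.]

# Feature reduction keeps only the leading transform coefficients
reduced = t[:2]
```

Truncating the transformed vector keeps most of its energy in far fewer coefficients, which is the reduction effect the framework relies on.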
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria 1029, Egypt
- Muhammet Fatih Aslan
- Department of Electrical and Electronics Engineering, Karamanoglu Mehmetbey University, 70100 Karaman, Turkey
- Kadir Sabanci
- Department of Electrical and Electronics Engineering, Karamanoglu Mehmetbey University, 70100 Karaman, Turkey
9
Attallah O, Samir A. A wavelet-based deep learning pipeline for efficient COVID-19 diagnosis via CT slices. Appl Soft Comput 2022; 128:109401. [PMID: 35919069] [PMCID: PMC9335861] [DOI: 10.1016/j.asoc.2022.109401]
Abstract
The quick diagnosis of the novel coronavirus (COVID-19) disease is vital to prevent its propagation and improve therapeutic outcomes. Computed tomography (CT) is believed to be an effective tool for diagnosing COVID-19; however, a CT scan contains hundreds of slices that are complex to analyze and could cause delays in diagnosis. Artificial intelligence (AI), especially deep learning (DL), could facilitate and speed up COVID-19 diagnosis from such scans. Several studies employed DL approaches based on 2D CT images from a single view; nevertheless, 3D multiview CT slices demonstrated an excellent ability to enhance the efficiency of COVID-19 diagnosis. The majority of DL-based studies utilized the spatial information of the original CT images to train their models, though using spectral–temporal information could improve the detection of COVID-19. This article proposes a DL-based pipeline called CoviWavNet for the automatic diagnosis of COVID-19. CoviWavNet uses a 3D multiview dataset called OMNIAHCOV. Initially, it analyzes the CT slices using multilevel discrete wavelet decomposition (DWT) and then uses the heatmaps of the approximation levels to train three ResNet CNN models. These ResNets use the spectral–temporal information of such images to perform classification. Subsequently, it investigates whether the combination of spatial information with spectral–temporal information could improve the diagnostic accuracy of COVID-19. For this purpose, it extracts deep spectral–temporal features from such ResNets using transfer learning and integrates them with deep spatial features extracted from the same ResNets trained with the original CT slices. Then, it utilizes a feature selection step to reduce the dimension of such integrated features and uses them as inputs to three support vector machine (SVM) classifiers. To further validate the performance of CoviWavNet, a publicly available benchmark dataset called SARS-COV-2-CT-Scan is employed.
The results of CoviWavNet have demonstrated that using the spectral–temporal information of the DWT heatmap images to train the ResNets is superior to utilizing the spatial information of the original CT images. Furthermore, integrating deep spectral–temporal features with deep spatial features has enhanced the classification accuracy of the three SVM classifiers, reaching a final accuracy of 99.33% and 99.7% for the OMNIAHCOV and SARS-COV-2-CT-Scan datasets, respectively. These accuracies verify the outstanding performance of CoviWavNet compared to other related studies. Thus, CoviWavNet can help radiologists in the rapid and accurate diagnosis of COVID-19.
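One level of the discrete wavelet decomposition at the heart of this pipeline can be sketched with Haar filters (a generic 1D illustration of the DWT idea, not the authors' multilevel 2D implementation):

```python
import numpy as np

def haar_dwt_level(x):
    # One level of the Haar discrete wavelet transform:
    # approximation (low-pass) and detail (high-pass) coefficients
    # from sums and differences of adjacent sample pairs
    x = x.astype(float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

sig = np.array([4.0, 2.0, 6.0, 6.0])
a, d = haar_dwt_level(sig)
print(a)  # half-length approximation, reused for the next decomposition level
```

Repeating the step on the approximation coefficients yields the multilevel decomposition whose approximation heatmaps are used to train the ResNets.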
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria 1029, Egypt
- Ahmed Samir
- Department of Radiodiagnosis, Faculty of Medicine, University of Alexandria, Egypt
10
Caradu C, Pouncey AL, Lakhlifi E, Brunet C, Bérard X, Ducasse E. Fully automatic volume segmentation using deep learning approaches to assess aneurysmal sac evolution after infra-renal endovascular aortic repair. J Vasc Surg 2022; 76:620-630.e3. [PMID: 35618195] [DOI: 10.1016/j.jvs.2022.03.891]
Abstract
OBJECTIVE Endovascular aortic repair (EVAR) surveillance relies on serial measurements of maximal diameter despite significant inter- and intra-observer variability. Volumetric measurements are more sensitive but general use is hampered by the time required for their implementation. An innovative fully automated software (PRAEVAorta® from Nurea), using artificial intelligence (AI), previously demonstrated fast and robust detection of infra-renal abdominal aortic aneurysm's (AAA) characteristics on pre-operative imaging. This study aimed to assess the robustness of these data on post-EVAR computed tomography (CT) scans. METHODS Comparison was made between fully automatic and semi-automatic segmentation manually corrected by a senior surgeon on a dataset of 48 patients (48 early post-EVAR CT scans with 6466 slices, and a total of 101 follow-up CT scans with 13708 slices). RESULTS The analyses confirmed an excellent correlation of post-EVAR volumes and surfaces, as well as, proximal neck and maximum aneurysm diameters measured with the fully automatic and manually corrected segmentation methods (Pearson's coefficient correlation >.99, p<.0001). Comparison between the fully automatic and manually corrected segmentation method revealed a mean Dice Similarity Coefficient of 0.950±0.015, Jaccard index of 0.906±0.028, Sensitivity of 0.929±0.028, Specificity of 0.965±0.016, Volumetric Similarity (VS) of 0.973±0.018 and mean Hausdorff Distance/slice of 8.7±10.8mm. The mean VS reached 0.873±0.100 for the lumen and 0.903±0.091 for the thrombus. The segmentation time was 9 times faster with the fully automatic method (2.5 vs 22 min/patient with the manually corrected method; p<.0001). Preliminary analysis also demonstrated that a diameter increase of 2mm can actually represent >5% volume increase. CONCLUSION PRAEVAorta® enables a fast, reproducible, and fully automated analysis of post-EVAR AAA sac and neck characteristics, with comparison between different time points. 
It could become a crucial adjunct for EVAR follow-up through early detection of sac evolution, which may reduce the risk of secondary rupture.
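The Dice similarity coefficient reported above for comparing automatic and manually corrected segmentations has a simple definition; a minimal sketch on toy binary masks (not the study's 3D pipeline):

```python
import numpy as np

def dice(a, b):
    # Dice similarity coefficient between two binary masks:
    # 2 * |A ∩ B| / (|A| + |B|)
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Two hypothetical 8x8 masks: same 4x4 square, shifted by one row
auto = np.zeros((8, 8), dtype=bool)
auto[2:6, 2:6] = True       # 16 pixels
manual = np.zeros((8, 8), dtype=bool)
manual[3:7, 2:6] = True     # 16 pixels, one row lower

print(dice(auto, manual))   # 0.75: overlap is 12 px, so 2*12 / (16+16)
```

The study's mean Dice of 0.950 between the two methods indicates near-complete overlap by this measure.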
Affiliation(s)
- Caroline Caradu
- Bordeaux University Hospital, Department of Vascular Surgery, 33000 Bordeaux, France
- Emilie Lakhlifi
- Bordeaux University Hospital, Department of Vascular Surgery, 33000 Bordeaux, France
- Céline Brunet
- Bordeaux University Hospital, Department of Vascular Surgery, 33000 Bordeaux, France
- Xavier Bérard
- Bordeaux University Hospital, Department of Vascular Surgery, 33000 Bordeaux, France
- Eric Ducasse
- Bordeaux University Hospital, Department of Vascular Surgery, 33000 Bordeaux, France
11
Attallah O. An Intelligent ECG-Based Tool for Diagnosing COVID-19 via Ensemble Deep Learning Techniques. Biosensors (Basel) 2022; 12:299. [PMID: 35624600] [PMCID: PMC9138764] [DOI: 10.3390/bios12050299]
Abstract
Diagnosing COVID-19 accurately and rapidly is vital to control its quick spread, lessen lockdown restrictions, and decrease the workload on healthcare structures. The present tools to detect COVID-19 have numerous shortcomings; therefore, novel diagnostic tools need to be examined to enhance diagnostic accuracy and avoid these limitations. Earlier studies indicated multiple patterns of cardiovascular alterations in COVID-19 cases, which motivated the use of ECG data as a tool for diagnosing the novel coronavirus. This study introduced a novel automated diagnostic tool based on ECG data to diagnose COVID-19. The introduced tool utilizes ten deep learning (DL) models of various architectures. It obtains significant features from the last fully connected layer of each DL model and then combines them. Afterward, the tool applies a hybrid feature selection based on the chi-square test and sequential search to select significant features. Finally, it employs several machine learning classifiers to perform two classification levels: a binary level to differentiate between normal and COVID-19 cases, and a multiclass level to discriminate COVID-19 cases from normal cases and other cardiac complications. The proposed tool reached an accuracy of 98.2% and 91.6% for the binary and multiclass levels, respectively. This performance indicates that the ECG could be used as an alternative means of diagnosis of COVID-19.
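The hybrid selection step described above pairs per-feature relevance scoring with a sequential search. A minimal sketch of greedy sequential forward selection, with a toy relevance table standing in for the paper's chi-square scores (the feature names, scores, and summing scorer are invented for illustration; a real scorer would refit a classifier on each candidate subset):

```python
def forward_select(features, score_fn, k):
    """Greedy sequential forward selection: grow the subset one feature
    at a time, keeping the addition that maximises score_fn(subset)."""
    selected = []
    remaining = list(features)
    while remaining and len(selected) < k:
        best = max(remaining, key=lambda f: score_fn(selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected

# Hypothetical per-feature relevance, e.g. chi-square statistics.
relevance = {"hr_mean": 0.9, "qrs_width": 0.7, "st_slope": 0.4, "noise": 0.05}
chosen = forward_select(relevance, lambda s: sum(relevance[f] for f in s), k=2)
print(chosen)  # ['hr_mean', 'qrs_width']
```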
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria 1029, Egypt
12
Attallah O. ECG-BiCoNet: An ECG-based pipeline for COVID-19 diagnosis using Bi-Layers of deep features integration. Comput Biol Med 2022; 142:105210. [PMID: 35026574 PMCID: PMC8730786 DOI: 10.1016/j.compbiomed.2022.105210] [Citation(s) in RCA: 17] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/04/2021] [Revised: 01/01/2022] [Accepted: 01/01/2022] [Indexed: 12/29/2022]
Abstract
The accurate and speedy detection of COVID-19 is essential to avert the fast propagation of the virus, alleviate lockdown constraints and diminish the burden on health organizations. Currently, the methods used to diagnose COVID-19 have several limitations, thus new techniques need to be investigated to improve the diagnosis and overcome these limitations. Taking into consideration the great benefits of electrocardiogram (ECG) applications, this paper proposes a new pipeline called ECG-BiCoNet to investigate the potential of using ECG data for diagnosing COVID-19. ECG-BiCoNet employs five deep learning models of distinct structural design. ECG-BiCoNet extracts two levels of features from two different layers of each deep learning technique. Features mined from higher layers are fused using discrete wavelet transform and then integrated with lower-layers features. Afterward, a feature selection approach is utilized. Finally, an ensemble classification system is built to merge predictions of three machine learning classifiers. ECG-BiCoNet accomplishes two classification categories, binary and multiclass. The results of ECG-BiCoNet present a promising COVID-19 performance with an accuracy of 98.8% and 91.73% for binary and multiclass classification categories. These results verify that ECG data may be used to diagnose COVID-19 which can help clinicians in the automatic diagnosis and overcome limitations of manual diagnosis.
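The bi-layer fusion idea above, wavelet-transforming higher-layer features and then integrating them with lower-layer features, can be sketched with a one-level Haar DWT. This is an illustrative simplification, not the exact ECG-BiCoNet pipeline; keeping only the approximation half also halves the higher-layer dimension, matching the abstract's point about dimension reduction:

```python
import math

def haar_dwt(x):
    """One-level Haar DWT: pairwise averages (approximation) and
    differences (detail), each scaled by 1/sqrt(2)."""
    a = [(x[i] + x[i + 1]) / math.sqrt(2) for i in range(0, len(x) - 1, 2)]
    d = [(x[i] - x[i + 1]) / math.sqrt(2) for i in range(0, len(x) - 1, 2)]
    return a, d

def fuse_bilayer(high_a, high_b, low):
    """Fuse two higher-layer feature vectors via the DWT (keeping the
    approximation halves) and concatenate the lower-layer features."""
    a1, _ = haar_dwt(high_a)
    a2, _ = haar_dwt(high_b)
    return a1 + a2 + low

fused = fuse_bilayer([1.0, 1.0, 2.0, 2.0], [4.0, 0.0], [0.5, 0.7])
print(len(fused))  # 5 (= 2 + 1 + 2 features)
```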
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria, 1029, Egypt.
13
Attallah O. A deep learning-based diagnostic tool for identifying various diseases via facial images. Digit Health 2022; 8:20552076221124432. [PMID: 36105626 PMCID: PMC9465585 DOI: 10.1177/20552076221124432] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2022] [Accepted: 08/18/2022] [Indexed: 11/16/2022] Open
Abstract
With the current health crisis caused by the COVID-19 pandemic, patients have become more anxious about infection, so they prefer not to have direct contact with doctors or clinicians. Lately, medical scientists have confirmed that several diseases exhibit corresponding specific features on the face. Recent studies have indicated that computer-aided facial diagnosis can be a promising tool for the automatic diagnosis and screening of diseases from facial images. However, few of these studies used deep learning (DL) techniques. Most of them focused on detecting a single disease, using handcrafted feature extraction methods and conventional machine learning techniques based on individual classifiers trained on small and private datasets using images taken from a controlled environment. This study proposes a novel computer-aided facial diagnosis system called FaceDisNet that uses a new public dataset based on images taken from an unconstrained environment and could be employed for forthcoming comparisons. It detects single and multiple diseases. FaceDisNet is constructed by integrating several spatial deep features from convolutional neural networks of various architectures. It does not depend only on spatial features but also extracts spatial-spectral features. FaceDisNet searches for the fused spatial-spectral feature set that has the greatest impact on the classification. It employs two feature selection techniques to reduce the large dimension of features resulting from feature fusion. Finally, it builds an ensemble classifier based on stacking to perform classification. The performance of FaceDisNet verifies its ability to diagnose single and multiple diseases. FaceDisNet achieved a maximum accuracy of 98.57% and 98% after the ensemble classification and feature selection steps for the binary and multiclass classification categories, respectively. These results prove that FaceDisNet is a reliable tool and could be employed to avoid the difficulties and complications of manual diagnosis. Also, it can help physicians achieve accurate diagnoses without the need for physical contact with the patients.
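The stacking step above combines base classifiers' outputs through a trained meta-learner. A minimal sketch with a fixed logistic meta-model; the weights, bias, and base probabilities are assumptions for illustration, not values from FaceDisNet:

```python
import math

def stack_predict(base_probs, meta_weights, bias=0.0):
    """Stacking: a logistic meta-learner combines base classifiers'
    probabilities into one meta-level probability."""
    z = bias + sum(w * p for w, p in zip(meta_weights, base_probs))
    return 1 / (1 + math.exp(-z))

# Three hypothetical base-classifier probabilities for one face image.
p = stack_predict([0.9, 0.8, 0.6], meta_weights=[2.0, 1.5, 1.0], bias=-2.0)
print(round(p, 3))  # 0.832
```

In a full system the meta-weights would themselves be fit on held-out base-classifier predictions rather than chosen by hand.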
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria, Egypt
14
Attallah O. DIAROP: Automated Deep Learning-Based Diagnostic Tool for Retinopathy of Prematurity. Diagnostics (Basel) 2021; 11:2034. [PMID: 34829380 PMCID: PMC8620568 DOI: 10.3390/diagnostics11112034] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2021] [Revised: 09/24/2021] [Accepted: 11/01/2021] [Indexed: 12/12/2022] Open
Abstract
Retinopathy of Prematurity (ROP) affects preterm neonates and can cause blindness. Deep Learning (DL) can assist ophthalmologists in the diagnosis of ROP. This paper proposes an automated and reliable diagnostic tool based on DL techniques called DIAROP to support the ophthalmologic diagnosis of ROP. It extracts significant features by first obtaining spatial features from four Convolutional Neural Network (CNN) DL techniques using transfer learning and then applying the Fast Walsh-Hadamard Transform (FWHT) to integrate these features. Moreover, DIAROP explores the best integrated features extracted from the CNNs that influence its diagnostic capability. The results indicate that DIAROP achieved an accuracy of 93.2% and an area under the receiver operating characteristic curve (AUC) of 0.98. Furthermore, DIAROP's performance is compared with recent ROP diagnostic tools. Its promising performance shows that DIAROP may assist the ophthalmologic diagnosis of ROP.
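The Fast Walsh-Hadamard Transform used above to integrate CNN features has a compact textbook form. A minimal sketch (not the DIAROP code), valid for inputs whose length is a power of two:

```python
def fwht(x):
    """Fast Walsh-Hadamard Transform via in-place butterfly passes;
    input length must be a power of two. Runs in O(n log n)."""
    x = list(x)
    h = 1
    while h < len(x):
        for i in range(0, len(x), h * 2):
            for j in range(i, i + h):
                # Butterfly: sum and difference of paired entries.
                x[j], x[j + h] = x[j] + x[j + h], x[j] - x[j + h]
        h *= 2
    return x

print(fwht([1, 0, 1, 0]))  # [2, 2, 0, 0]
```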
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria 1029, Egypt
15
Intelligent Dermatologist Tool for Classifying Multiple Skin Cancer Subtypes by Incorporating Manifold Radiomics Features Categories. CONTRAST MEDIA & MOLECULAR IMAGING 2021; 2021:7192016. [PMID: 34621146 PMCID: PMC8457955 DOI: 10.1155/2021/7192016] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/09/2021] [Revised: 08/20/2021] [Accepted: 09/01/2021] [Indexed: 02/06/2023]
Abstract
The rates of skin cancer (SC) are rising every year and becoming a critical health issue worldwide. SC's early and accurate diagnosis is the key procedure to reduce these rates and improve survivability. However, the manual diagnosis is exhausting, complicated, expensive, prone to diagnostic error, and highly dependent on the dermatologist's experience and abilities. Thus, there is a vital need to create automated dermatologist tools that are capable of accurately classifying SC subclasses. Recently, artificial intelligence (AI) techniques including machine learning (ML) and deep learning (DL) have verified the success of computer-assisted dermatologist tools in the automatic diagnosis and detection of SC diseases. Previous AI-based dermatologist tools are based on features which are either high-level features based on DL methods or low-level features based on handcrafted operations. Most of them were constructed for binary classification of SC. This study proposes an intelligent dermatologist tool to accurately diagnose multiple skin lesions automatically. This tool incorporates manifold radiomics features categories involving high-level features such as ResNet-50, DenseNet-201, and DarkNet-53 and low-level features including discrete wavelet transform (DWT) and local binary pattern (LBP). The results of the proposed intelligent tool prove that merging manifold features of different categories has a high influence on the classification accuracy. Moreover, these results are superior to those obtained by other related AI-based dermatologist tools. Therefore, the proposed intelligent tool can be used by dermatologists to help them in the accurate diagnosis of the SC subcategory. It can also overcome manual diagnosis limitations, reduce the rates of infection, and enhance survival rates.
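Among the low-level radiomics features named above, the local binary pattern is the simplest to show concretely. A minimal 8-neighbour LBP sketch in plain Python (bit ordering and the toy patch are illustrative choices, not the paper's exact configuration):

```python
def lbp_code(img, r, c):
    """8-neighbour local binary pattern code for pixel (r, c): each
    neighbour >= centre sets one bit, clockwise from the top-left."""
    centre = img[r][c]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= centre:
            code |= 1 << bit
    return code

patch = [[5, 4, 3],
         [6, 4, 2],
         [7, 8, 9]]
print(lbp_code(patch, 1, 1))  # 243
```

A histogram of these codes over an image region is what typically serves as the texture feature vector.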
16
Attallah O. CoMB-Deep: Composite Deep Learning-Based Pipeline for Classifying Childhood Medulloblastoma and Its Classes. Front Neuroinform 2021; 15:663592. [PMID: 34122031 PMCID: PMC8193683 DOI: 10.3389/fninf.2021.663592] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2021] [Accepted: 04/26/2021] [Indexed: 12/28/2022] Open
Abstract
Childhood medulloblastoma (MB) is a threatening malignant tumor affecting children all over the globe. It is believed to be the most common pediatric brain tumor causing death. Early and accurate classification of childhood MB and its classes is of great importance to help doctors choose a suitable treatment and observation plan, avoid tumor progression, and lower death rates. The current gold standard for diagnosing MB is the histopathology of biopsy samples. However, manual analysis of such images is complicated, costly, time-consuming, and highly dependent on the expertise and skills of pathologists, which might cause inaccurate results. This study aims to introduce a reliable computer-assisted pipeline called CoMB-Deep to automatically classify MB and its classes with high accuracy from histopathological images. A key challenge of the study is the lack of childhood MB datasets, especially for its four categories (defined by the WHO), and the inadequacy of related studies. All relevant works were based on either deep learning (DL) or textural-analysis feature extraction. Also, such studies employed distinct features to accomplish the classification procedure. Besides, most of them only extracted spatial features. Nevertheless, CoMB-Deep blends the advantages of textural-analysis feature extraction techniques and DL approaches. CoMB-Deep consists of a composite of DL techniques. Initially, it extracts deep spatial features from 10 convolutional neural networks (CNNs). It then performs a feature fusion step using the discrete wavelet transform (DWT), a texture analysis method capable of reducing the dimension of the fused features. Next, CoMB-Deep explores the best combination of fused features, enhancing the performance of the classification process using two search strategies. Afterward, it employs two feature selection techniques on the fused feature sets selected in the previous step.
A bi-directional long short-term memory (Bi-LSTM) network, a DL-based approach, is utilized for the classification phase. CoMB-Deep maintains two classification categories: a binary category for distinguishing between abnormal and normal cases, and a multi-class category to identify the subclasses of MB. The results of CoMB-Deep for both classification categories prove that it is reliable. The results also indicate that the feature sets selected using both search strategies have enhanced the performance of the Bi-LSTM compared to individual spatial deep features. CoMB-Deep is compared to related studies to verify its competitiveness, and this comparison confirmed its robustness and superior performance. Hence, CoMB-Deep can help pathologists perform accurate diagnoses, reduce the misdiagnosis risks that could occur with manual diagnosis, accelerate the classification procedure, and decrease diagnosis costs.
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria, Egypt
17
Elfanagely O, Toyoda Y, Othman S, Mellia JA, Basta M, Liu T, Kording K, Ungar L, Fischer JP. Machine Learning and Surgical Outcomes Prediction: A Systematic Review. J Surg Res 2021; 264:346-361. [PMID: 33848833 DOI: 10.1016/j.jss.2021.02.045] [Citation(s) in RCA: 37] [Impact Index Per Article: 12.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2020] [Revised: 02/13/2021] [Accepted: 02/27/2021] [Indexed: 12/20/2022]
Abstract
BACKGROUND Machine learning (ML) has garnered increasing attention as a means to quantitatively analyze growing and complex medical data to improve individualized patient care. We herein aim to critically examine the current state of ML in predicting surgical outcomes, evaluate the quality of currently available research, and propose areas of improvement for future uses of ML in surgery. METHODS A systematic review was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) checklist. The PubMed, MEDLINE, and Embase databases were reviewed under the search syntax "machine learning" and "surgery" for papers published between 2015 and 2020. RESULTS Of the initial 2677 studies, 45 papers met the inclusion and exclusion criteria. Fourteen different subspecialties were represented, with neurosurgery being the most common. The most frequently used ML algorithms were random forest (n = 19), artificial neural network (n = 17), and logistic regression (n = 17). Common outcomes included postoperative mortality, complications, patient-reported quality of life, and pain improvement. All studies that compared ML algorithms against conventional approaches using the area under the curve (AUC) to measure accuracy found improved outcome prediction with the ML models. CONCLUSIONS While still in its early stages, ML offers surgeons an opportunity to capitalize on the myriad of clinical data available and improve individualized patient care. Limitations included heterogeneous outcome reporting and the imperfect quality of some of the papers. We therefore urge future research to agree upon methods of outcome reporting and require basic quality standards.
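The AUC accuracy measure used by the compared studies has a simple rank-based (Mann-Whitney) definition that is worth seeing once. A minimal sketch with an invented toy scoring example, not data from the review:

```python
def auc(scores, labels):
    """Area under the ROC curve via the rank formulation: the probability
    that a random positive scores higher than a random negative,
    counting ties as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0]))  # 1.0 (perfect ranking)
```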
Affiliation(s)
- Omar Elfanagely
- Division of Plastic Surgery, Department of Surgery, University of Pennsylvania, Philadelphia, Pennsylvania.
- Yoshiko Toyoda
- Division of Plastic Surgery, Department of Surgery, University of Pennsylvania, Philadelphia, Pennsylvania
- Sammy Othman
- Division of Plastic Surgery, Department of Surgery, University of Pennsylvania, Philadelphia, Pennsylvania
- Joseph A Mellia
- Division of Plastic Surgery, Department of Surgery, University of Pennsylvania, Philadelphia, Pennsylvania
- Marten Basta
- Department of Plastic and Reconstructive Surgery, Brown University, Providence, Rhode Island
- Tony Liu
- Department of Computer and Information Science, University of Pennsylvania, Philadelphia, Pennsylvania
- Konrad Kording
- Department of Neuroscience, University of Pennsylvania, Philadelphia, Pennsylvania; Department of Bioengineering, University of Pennsylvania, Philadelphia, Pennsylvania
- Lyle Ungar
- Department of Computer and Information Science, University of Pennsylvania, Philadelphia, Pennsylvania
- John P Fischer
- Division of Plastic Surgery, Department of Surgery, University of Pennsylvania, Philadelphia, Pennsylvania
18
Attallah O, Sharkas M. GASTRO-CADx: a three stages framework for diagnosing gastrointestinal diseases. PeerJ Comput Sci 2021; 7:e423. [PMID: 33817058 PMCID: PMC7959662 DOI: 10.7717/peerj-cs.423] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2020] [Accepted: 02/11/2021] [Indexed: 05/04/2023]
Abstract
Gastrointestinal (GI) diseases are common illnesses that affect the GI tract. Diagnosing these GI diseases is quite expensive, complicated, and challenging. A computer-aided diagnosis (CADx) system based on deep learning (DL) techniques could considerably lower examination costs and increase the speed and quality of diagnosis. Therefore, this article proposes a CADx system called Gastro-CADx to classify several GI diseases using DL techniques. Gastro-CADx involves three progressive stages. Initially, four different CNNs are used as feature extractors to extract spatial features. Most of the related work based on DL approaches extracted spatial features only. However, in the following phase of Gastro-CADx, the features extracted in the first stage are applied to the discrete wavelet transform (DWT) and the discrete cosine transform (DCT). The DCT and DWT are used to extract temporal-frequency and spatial-frequency features. Additionally, a feature reduction procedure is performed in this stage. Finally, in the third stage of Gastro-CADx, several combinations of features are fused in a concatenated manner to inspect the effect of feature combination on the output results of the CADx and to select the best fused feature set. Two datasets, referred to as Dataset I and Dataset II, are utilized to evaluate the performance of Gastro-CADx. Results indicated that Gastro-CADx achieved an accuracy of 97.3% and 99.7% for Dataset I and Dataset II, respectively. The results were compared with recent related works; the comparison showed that the proposed approach is capable of classifying GI diseases with higher accuracy than other work. Thus, it can be used to reduce medical complications and death rates, in addition to the cost of treatment. It can also help gastroenterologists produce more accurate diagnoses while lowering inspection time.
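The second-stage DCT mentioned above maps a feature vector into frequency coefficients. A minimal unnormalised Type-II DCT sketch (an O(n²) textbook form for illustration, not the Gastro-CADx implementation, which would use a fast transform):

```python
import math

def dct2(x):
    """Type-II DCT (unnormalised), the transform commonly used to obtain
    spatial-frequency features from a feature vector."""
    n = len(x)
    return [sum(x[i] * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i in range(n))
            for k in range(n)]

coeffs = dct2([1.0, 1.0, 1.0, 1.0])
print(round(coeffs[0], 6))  # 4.0 (DC term; the rest vanish for constant input)
```

Keeping only the leading coefficients is one common way to implement the feature-reduction step the abstract describes.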
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communication Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria, Egypt
- Maha Sharkas
- Department of Electronics and Communication Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria, Egypt
19
Attallah O, Anwar F, Ghanem NM, Ismail MA. Histo-CADx: duo cascaded fusion stages for breast cancer diagnosis from histopathological images. PeerJ Comput Sci 2021; 7:e493. [PMID: 33987459 PMCID: PMC8093954 DOI: 10.7717/peerj-cs.493] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2020] [Accepted: 03/26/2021] [Indexed: 05/06/2023]
Abstract
Breast cancer (BC) is one of the most common types of cancer that affects females worldwide. It may lead to irreversible complications and even death due to late diagnosis and treatment. The pathological analysis is considered the gold standard for BC detection, but it is a challenging task. Automatic diagnosis of BC could reduce death rates, by creating a computer aided diagnosis (CADx) system capable of accurately identifying BC at an early stage and decreasing the time consumed by pathologists during examinations. This paper proposes a novel CADx system named Histo-CADx for the automatic diagnosis of BC. Most related studies were based on individual deep learning methods. Also, studies did not examine the influence of fusing features from multiple CNNs and handcrafted features. In addition, related studies did not investigate the best combination of fused features that influence the performance of the CADx. Therefore, Histo-CADx is based on two stages of fusion. The first fusion stage involves the investigation of the impact of fusing several deep learning (DL) techniques with handcrafted feature extraction methods using the auto-encoder DL method. This stage also examines and searches for a suitable set of fused features that could improve the performance of Histo-CADx. The second fusion stage constructs a multiple classifier system (MCS) for fusing outputs from three classifiers, to further improve the accuracy of the proposed Histo-CADx. The performance of Histo-CADx is evaluated using two public datasets; specifically, the BreakHis and the ICIAR 2018 datasets. The results from the analysis of both datasets verified that the two fusion stages of Histo-CADx successfully improved the accuracy of the CADx compared to CADx constructed with individual features. Furthermore, using the auto-encoder for the fusion process has reduced the computation cost of the system. 
Moreover, the results after the two fusion stages confirmed that Histo-CADx is reliable and has the capacity of classifying BC more accurately compared to other latest studies. Consequently, it can be used by pathologists to help them in the accurate diagnosis of BC. In addition, it can decrease the time and effort needed by medical experts during the examination.
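The second fusion stage's multiple classifier system merges the outputs of three classifiers. The simplest output-fusion rule, majority voting, can be sketched as follows (an illustration only; the classifiers and labels here are hypothetical, and Histo-CADx's actual MCS may weight or stack its members differently):

```python
from collections import Counter

def majority_vote(predictions):
    """Fuse several classifiers' labels for one sample by majority vote;
    on a tie, the earliest classifier among the tied labels wins."""
    counts = Counter(predictions)
    top = counts.most_common(1)[0][1]
    winners = {lab for lab, n in counts.items() if n == top}
    return next(p for p in predictions if p in winners)

# Three hypothetical classifiers voting on two samples.
votes = [["benign", "malignant", "benign"],
         ["malignant", "malignant", "benign"]]
print([majority_vote(v) for v in votes])  # ['benign', 'malignant']
```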
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology, and Maritime Transport, Alexandria, Alexandria, Egypt
- Fatma Anwar
- Computer and Systems Engineering Department, Alexandria University, Alexandria, Egypt
- Nagia M. Ghanem
- Computer and Systems Engineering Department, Alexandria University, Alexandria, Egypt
- Mohamed A. Ismail
- Computer and Systems Engineering Department, Alexandria University, Alexandria, Egypt
20
A BCI System Based on Motor Imagery for Assisting People with Motor Deficiencies in the Limbs. Brain Sci 2020; 10:brainsci10110864. [PMID: 33212777 PMCID: PMC7697603 DOI: 10.3390/brainsci10110864] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2020] [Revised: 10/27/2020] [Accepted: 11/06/2020] [Indexed: 12/26/2022] Open
Abstract
Motor deficiencies constitute a significant problem affecting millions of people worldwide. Such people suffer from a debility in daily functioning, which may lead to diminished and incoherent daily routines and deteriorate their quality of life (QoL). Thus, there is an essential need for assistive systems to help those people achieve their daily actions and enhance their overall QoL. This study proposes a novel brain–computer interface (BCI) system for assisting people with limb motor disabilities in performing their daily life activities by using their brain signals to control assistive devices. The extraction of useful features is vital for an efficient BCI system. Therefore, the proposed system consists of a hybrid feature set that feeds into three machine-learning (ML) classifiers to classify motor imagery (MI) tasks. This hybrid feature selection (FS) system is practical and real-time, and it yields an efficient BCI with low computation cost. We investigate different combinations of channels to select the combination that has the highest impact on performance. The results indicate that the highest accuracies, achieved using a support vector machine (SVM) classifier, are 93.46% and 86.0% for the BCI competition III–IVa dataset and the autocalibration and recurrent adaptation dataset, respectively. These datasets are used to test the performance of the proposed BCI. We also verify the effectiveness of the proposed BCI by comparing its performance with recent studies, showing that the proposed system is accurate and efficient. Future work can apply the proposed system to individuals with limb motor disabilities to assist them and test its capability to improve their QoL. Moreover, forthcoming work can examine the system's performance in controlling assistive devices such as wheelchairs or artificial limbs.
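The channel-combination investigation above can be sketched as an exhaustive search over fixed-size channel subsets. This is a toy stand-in, not the paper's procedure: the channel names, per-channel accuracies, and the averaging scorer are all invented for illustration (a real scorer would retrain and cross-validate the classifier per subset):

```python
from itertools import combinations

def best_channels(channels, score_fn, size):
    """Score every channel combination of the given size and return
    the highest-scoring one."""
    return max(combinations(channels, size), key=score_fn)

# Hypothetical per-channel accuracies; the toy scorer averages them.
acc = {"C3": 0.80, "C4": 0.78, "Cz": 0.60, "Fz": 0.55}
best = best_channels(acc, lambda combo: sum(acc[c] for c in combo) / len(combo), 2)
print(sorted(best))  # ['C3', 'C4']
```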
21
Ragab DA, Attallah O. FUSI-CAD: Coronavirus (COVID-19) diagnosis based on the fusion of CNNs and handcrafted features. PeerJ Comput Sci 2020; 6:e306. [PMID: 33816957 PMCID: PMC7924442 DOI: 10.7717/peerj-cs.306] [Citation(s) in RCA: 35] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2020] [Accepted: 09/30/2020] [Indexed: 05/19/2023]
Abstract
The precise and rapid diagnosis of coronavirus (COVID-19) at the very primary stage helps doctors manage patients in high-workload conditions. In addition, it prevents the spread of this pandemic virus. Computer-aided diagnosis (CAD) based on artificial intelligence (AI) techniques can be used to distinguish between COVID-19 and non-COVID-19 cases from computed tomography (CT) imaging. Furthermore, CAD systems are capable of delivering an accurate, faster COVID-19 diagnosis, which consequently saves time for disease control and provides an efficient diagnosis compared to laboratory tests. In this study, a novel CAD system called FUSI-CAD based on AI techniques is proposed. Almost all the methods in the literature are based on individual convolutional neural networks (CNNs). Consequently, the FUSI-CAD system is based on the fusion of multiple different CNN architectures with three handcrafted feature categories, including statistical features and textural analysis features such as the discrete wavelet transform (DWT) and the grey level co-occurrence matrix (GLCM), which were not previously utilized in coronavirus diagnosis. The SARS-CoV-2 CT-scan dataset is used to test the performance of the proposed FUSI-CAD. The results show that the proposed system could accurately differentiate between COVID-19 and non-COVID-19 images, as the accuracy achieved is 99%. Additionally, the system proved to be reliable, as the sensitivity, specificity, and precision all attained 99%, and the diagnostic odds ratio (DOR) is ≥100. Furthermore, the results are compared with recent related studies based on the same dataset. The comparison verifies the competence of the proposed FUSI-CAD over the other related CAD systems. Thus, the novel FUSI-CAD system can be employed in real diagnostic scenarios for achieving accurate testing for COVID-19 and avoiding the human misdiagnosis that might occur due to human fatigue. It can also reduce the time and effort spent by radiologists during the examination process.
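Of the handcrafted features named above, the grey level co-occurrence matrix is easy to show concretely: it counts how often pairs of grey levels co-occur at a fixed displacement. A minimal sketch (the quantised toy patch and the horizontal displacement are illustrative choices, not FUSI-CAD's settings):

```python
def glcm(img, levels, dr, dc):
    """Grey-level co-occurrence matrix for displacement (dr, dc):
    m[i][j] counts pixels of level i whose displaced neighbour is level j."""
    rows, cols = len(img), len(img[0])
    m = [[0] * levels for _ in range(levels)]
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                m[img[r][c]][img[r2][c2]] += 1
    return m

patch = [[0, 0, 1],
         [1, 2, 2],
         [2, 2, 0]]
m = glcm(patch, levels=3, dr=0, dc=1)  # horizontal neighbour pairs
print(m[2][2])  # 2
```

Texture statistics such as contrast, energy, and homogeneity are then computed from the (usually normalised) matrix.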
Affiliation(s)
- Dina A. Ragab
- Electronics and Communications Engineering Department, Arab Academy for Science, Technology, and Maritime Transport (AASTMT), Alexandria, Egypt
- Omneya Attallah
- Electronics and Communications Engineering Department, Arab Academy for Science, Technology, and Maritime Transport (AASTMT), Alexandria, Egypt
22
Raffort J, Adam C, Carrier M, Ballaith A, Coscas R, Jean-Baptiste E, Hassen-Khodja R, Chakfé N, Lareyre F. Artificial intelligence in abdominal aortic aneurysm. J Vasc Surg 2020; 72:321-333.e1. [PMID: 32093909 DOI: 10.1016/j.jvs.2019.12.026] [Citation(s) in RCA: 79] [Impact Index Per Article: 19.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2019] [Accepted: 12/07/2019] [Indexed: 12/11/2022]
Abstract
OBJECTIVE Abdominal aortic aneurysm (AAA) is a life-threatening disease, and the only curative treatment relies on open or endovascular repair. The decision to treat relies on the evaluation of the risk of AAA growth and rupture, which can be difficult to assess in practice. Artificial intelligence (AI) has revealed new insights into the management of cardiovascular diseases, but its application in AAA has so far been poorly described. The aim of this review was to summarize the current knowledge on the potential applications of AI in patients with AAA. METHODS A comprehensive literature review was performed. The MEDLINE database was searched according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. The search strategy used a combination of keywords and included studies using AI in patients with AAA published between January 2000 and May 2019. Two authors independently screened titles and abstracts and performed data extraction. The search of published literature identified 34 studies with distinct methodologies, aims, and study designs. RESULTS AI was used in patients with AAA to improve image segmentation and for quantitative analysis and characterization of AAA morphology, geometry, and fluid dynamics. AI allowed computation of large data sets to identify patterns that may be predictive of AAA growth and rupture. Several predictive and prognostic programs were also developed to assess patients' postoperative outcomes, including mortality and complications after endovascular aneurysm repair. CONCLUSIONS AI represents a useful tool in the interpretation and analysis of AAA imaging by enabling automatic quantitative measurements and morphologic characterization. It could be used to help surgeons in preoperative planning. AI-driven data management may lead to the development of computational programs for the prediction of AAA evolution and risk of rupture as well as postoperative outcomes.
AI could also be used to better evaluate the indications and types of surgical treatment and to plan the postoperative follow-up. AI represents an attractive tool for decision-making and may facilitate development of personalized therapeutic approaches for patients with AAA.
Affiliation(s)
- Juliette Raffort
- Clinical Chemistry Laboratory, University Hospital of Nice, Nice, France; Université Côte d'Azur, CHU, Inserm U1065, C3M, Nice, France
- Cédric Adam
- Laboratory of Applied Mathematics and Computer Science (MICS), CentraleSupélec, Université Paris-Saclay, Paris, France
- Marion Carrier
- Laboratory of Applied Mathematics and Computer Science (MICS), CentraleSupélec, Université Paris-Saclay, Paris, France
- Ali Ballaith
- Department of Vascular Surgery, University Hospital of Nice, Nice, France
- Raphael Coscas
- Department of Vascular Surgery, Ambroise Paré University Hospital, Assistance Publique-Hôpitaux de Paris (AP-HP), Boulogne, France; Inserm U1018 Team 5, Versailles-Saint-Quentin et Paris-Saclay Universities, Versailles, France
- Elixène Jean-Baptiste
- Université Côte d'Azur, CHU, Inserm U1065, C3M, Nice, France; Department of Vascular Surgery, University Hospital of Nice, Nice, France
- Réda Hassen-Khodja
- Université Côte d'Azur, CHU, Inserm U1065, C3M, Nice, France; Department of Vascular Surgery, University Hospital of Nice, Nice, France
- Nabil Chakfé
- Department of Vascular Surgery and Kidney Transplantation, University Hospital of Strasbourg, and GEPROVAS, Strasbourg, France
- Fabien Lareyre
- Université Côte d'Azur, CHU, Inserm U1065, C3M, Nice, France; Department of Vascular Surgery, University Hospital of Nice, Nice, France.
|
23
|
Breast Cancer Diagnosis Using an Efficient CAD System Based on Multiple Classifiers. Diagnostics (Basel) 2019; 9:diagnostics9040165. [PMID: 31717809 PMCID: PMC6963468 DOI: 10.3390/diagnostics9040165] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2019] [Revised: 10/21/2019] [Accepted: 10/24/2019] [Indexed: 11/17/2022] Open
Abstract
Breast cancer is one of the major health issues across the world. In this study, a new computer-aided detection (CAD) system is introduced. First, the mammogram images were enhanced to increase their contrast. Second, the pectoral muscle was eliminated and the breast region was isolated from the mammogram. Afterward, statistical features were extracted. Next, k-nearest neighbor (k-NN) and decision tree classifiers were used to classify lesions as normal or abnormal. Moreover, a multiple classifier system (MCS) was constructed, as this usually improves classification results; the MCS was evaluated in two structures, cascaded and parallel. Finally, two wrapper feature selection (FS) approaches were applied to identify the features that influence classification accuracy. Two data sets, (1) the Mammographic Image Analysis Society digital mammogram database (MIAS) and (2) the Digital Mammography DREAM Challenge, were combined to test the proposed CAD system. The highest accuracy achieved with the proposed CAD system before FS was 99.7%, using AdaBoost ensembles of J48 decision tree classifiers. The highest accuracy after FS was 100%, achieved with the k-NN classifier. Moreover, the area under the receiver operating characteristic (ROC) curve (AUC) was equal to 1.0. The results showed that the proposed CAD system was able to accurately classify normal and abnormal lesions in mammogram samples.
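The pipeline summarized in the abstract (contrast enhancement, statistical feature extraction, wrapper feature selection, then k-NN classification) can be sketched as follows. This is a minimal illustration on synthetic image patches, not the authors' implementation: the enhancement step, the choice of first-order statistics, and the synthetic "abnormal" blob are all assumptions made for the sake of a runnable example.

```python
# Hedged sketch of a mammogram CAD pipeline: contrast enhancement ->
# statistical features -> wrapper feature selection -> k-NN classifier.
# Synthetic patches stand in for real mammogram data (an assumption).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def enhance(img):
    """Simple min-max contrast stretch (stand-in for the enhancement step)."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + 1e-9)

def statistical_features(img):
    """First-order statistics, one common choice of 'statistical features'."""
    c = img - img.mean()
    return np.array([img.mean(), img.std(), np.median(img),
                     img.min(), img.max(),
                     (c ** 3).mean(),   # unnormalised skewness
                     (c ** 4).mean()])  # unnormalised kurtosis

def make_patch(abnormal):
    """Synthetic 32x32 patch; 'abnormal' patches carry a bright blob."""
    img = rng.normal(0.4, 0.05, (32, 32))
    if abnormal:
        img[12:20, 12:20] += 0.5
    return enhance(img)

labels = rng.integers(0, 2, 200)
X = np.array([statistical_features(make_patch(y)) for y in labels])

X_tr, X_te, y_tr, y_te = train_test_split(
    X, labels, test_size=0.3, random_state=0, stratify=labels)

# Wrapper FS: the selector repeatedly refits the wrapped k-NN estimator
# and keeps the feature subset that maximises cross-validated accuracy.
knn = KNeighborsClassifier(n_neighbors=3)
sfs = SequentialFeatureSelector(knn, n_features_to_select=3, cv=3).fit(X_tr, y_tr)
knn.fit(sfs.transform(X_tr), y_tr)
acc = accuracy_score(y_te, knn.predict(sfs.transform(X_te)))
print(f"k-NN accuracy after wrapper FS: {acc:.2f}")
```

The wrapper approach differs from filter methods in that the classifier itself scores candidate feature subsets, which is why the abstract reports FS results per classifier rather than a single ranked feature list.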
|