1. Alyami J. Computer-aided analysis of radiological images for cancer diagnosis: performance analysis on benchmark datasets, challenges, and directions. EJNMMI Reports 2024; 8:7. PMID: 38748374; PMCID: PMC10982256; DOI: 10.1186/s41824-024-00195-8. Received: 10/25/2023; Accepted: 02/05/2024.
Abstract
Radiological image analysis using machine learning has been widely applied to improve the accuracy of biopsy diagnosis and to support radiologists in reaching precise diagnoses. With advances in medical technology, computer-aided diagnosis (CAD) systems have become essential for detecting early signs of cancer that cannot be observed physically, without introducing additional error. CAD is a detection approach that combines artificial intelligence techniques with image-processing applications through computer vision. Several manual diagnostic procedures, such as CT scans, radiography, and MRI, are reported in the state of the art, but they are costly, time-consuming, and often detect cancer only at late stages. In this research, numerous state-of-the-art approaches to multi-organ detection in clinical practice are evaluated, covering cancer, neurological, psychiatric, cardiovascular, and abdominal imaging. Additionally, sound approaches are clustered together and their results assessed and compared on benchmark datasets. Standard metrics such as accuracy, sensitivity, specificity, and false-positive rate are employed to check the validity of the models reported in the literature. Finally, existing issues are highlighted and possible directions for future work are suggested.
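The standard metrics named above follow directly from the binary confusion matrix; a minimal sketch with hypothetical counts (not taken from any study reviewed here):

```python
def binary_metrics(tp, fp, tn, fn):
    """Standard CAD evaluation metrics from binary confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # true-positive rate (recall)
    specificity = tn / (tn + fp)   # true-negative rate
    fpr = fp / (fp + tn)           # false-positive rate = 1 - specificity
    return accuracy, sensitivity, specificity, fpr

# Hypothetical counts for illustration only
acc, sens, spec, fpr = binary_metrics(tp=90, fp=5, tn=95, fn=10)
print(acc, sens, spec, fpr)  # → 0.925 0.9 0.95 0.05
```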
Affiliation(s)
- Jaber Alyami
- Department of Radiological Sciences, Faculty of Applied Medical Sciences, King Abdulaziz University, 21589, Jeddah, Saudi Arabia.
- King Fahd Medical Research Center, King Abdulaziz University, 21589, Jeddah, Saudi Arabia.
- Smart Medical Imaging Research Group, King Abdulaziz University, 21589, Jeddah, Saudi Arabia.
- Medical Imaging and Artificial Intelligence Research Unit, Center of Modern Mathematical Sciences and its Applications, King Abdulaziz University, 21589, Jeddah, Saudi Arabia.
2. Mehringer W, Stoeve M, Krauss D, Ring M, Steussloff F, Güttes M, Zott J, Hohberger B, Michelson G, Eskofier B. Virtual reality for assessing stereopsis performance and eye characteristics in Post-COVID. Sci Rep 2023; 13:13167. PMID: 37574496; PMCID: PMC10423723; DOI: 10.1038/s41598-023-40263-w. Received: 05/23/2023; Accepted: 08/08/2023.
Abstract
In 2019, we faced a pandemic due to the coronavirus disease (COVID-19), with millions of confirmed cases and reported deaths. Even in recovered patients, symptoms can persist for weeks, a condition termed Post-COVID. In addition to common symptoms such as fatigue, muscle weakness, and cognitive impairment, visual impairments have been reported. Automatic classification of COVID and Post-COVID has been investigated using blood samples and radiation-based procedures, among others; however, a symptom-oriented assessment of visual impairments is still missing. Thus, we propose a Virtual Reality environment in which stereoscopic stimuli are displayed to test the patient's stereopsis performance. While the patient performs the visual tasks, gaze and pupil diameter are recorded. We collected data from 15 controls and 20 Post-COVID patients in a study. From these recordings, we extracted features in three main data groups, stereopsis performance, pupil diameter, and gaze behavior, and trained various classifiers. The Random Forest classifier achieved the best result, with 71% accuracy. The recorded data support the classification result, showing worse stereopsis performance and eye-movement alterations in Post-COVID. Limitations of the study design include the small sample size and the use of an eye-tracking system.
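As a sketch of the classification step described above, the snippet below trains a Random Forest on synthetic tabular features standing in for the study's stereopsis, pupil-diameter, and gaze features; the data, feature count, and scoring scheme are illustrative assumptions, not the study's actual dataset:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in: 35 participants x 6 features, binary label
# (control vs. Post-COVID), loosely correlated with the first two features.
X = rng.normal(size=(35, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=35) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validated accuracy
print(scores.mean())
```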
Affiliation(s)
- Wolfgang Mehringer
- Machine Learning and Data Analytics Lab (MaD Lab), Department Artificial Intelligence in Biomedical Engineering (AIBE), Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 91052, Erlangen, Bavaria, Germany.
- Maike Stoeve
- Machine Learning and Data Analytics Lab (MaD Lab), Department Artificial Intelligence in Biomedical Engineering (AIBE), Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 91052, Erlangen, Bavaria, Germany
- Daniel Krauss
- Machine Learning and Data Analytics Lab (MaD Lab), Department Artificial Intelligence in Biomedical Engineering (AIBE), Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 91052, Erlangen, Bavaria, Germany
- Matthias Ring
- Machine Learning and Data Analytics Lab (MaD Lab), Department Artificial Intelligence in Biomedical Engineering (AIBE), Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 91052, Erlangen, Bavaria, Germany
- Fritz Steussloff
- Department of Ophthalmology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91054, Erlangen, Germany
- Moritz Güttes
- Department of Ophthalmology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91054, Erlangen, Germany
- Julia Zott
- Department of Ophthalmology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91054, Erlangen, Germany
- Bettina Hohberger
- Department of Ophthalmology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91054, Erlangen, Germany
- Georg Michelson
- Department of Ophthalmology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91054, Erlangen, Germany
- Talkingeyes & More GmbH, 91052, Erlangen, Bavaria, Germany
- Bjoern Eskofier
- Machine Learning and Data Analytics Lab (MaD Lab), Department Artificial Intelligence in Biomedical Engineering (AIBE), Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 91052, Erlangen, Bavaria, Germany
3. Jithendra T, Sharief Basha S. A Hybridized Machine Learning Approach for Predicting COVID-19 Using Adaptive Neuro-Fuzzy Inference System and Reptile Search Algorithm. Diagnostics (Basel) 2023; 13:1641. PMID: 37175032; PMCID: PMC10178244; DOI: 10.3390/diagnostics13091641. Received: 02/06/2023; Revised: 03/02/2023; Accepted: 03/08/2023.
Abstract
This research aims to improve the functioning of the Adaptive Neuro-Fuzzy Inference System (ANFIS) in order to increase the accuracy of existing time-series modeling. The COVID-19 pandemic has been a global threat for the past three years, so accurate forecasting of confirmed infection cases is essential to mitigate the crisis it has caused. An adaptive neuro-fuzzy inference system-reptile search algorithm (ANFIS-RSA) is developed to effectively anticipate COVID-19 cases. The proposed model integrates a machine-learning model (ANFIS) with the nature-inspired Reptile Search Algorithm (RSA), which tunes the model's parameters to improve the ANFIS modeling. Since the performance of the ANFIS model depends on parameter optimization, statistics on infected cases in China and India were taken from WHO reports. To ensure the accuracy of the estimates, error indicators such as RMSE, RMSRE, MAE, and MAPE were evaluated alongside the coefficient of determination (R²). On the China dataset, the recommended approach was compared with other upgraded ANFIS methods and achieved the best error metrics, with an R² value of 0.9775, while ANFIS-CEBAS and the Flower Pollination Algorithm and Salp Swarm Algorithm variant (FPASSA-ANFIS) attained 0.9645 and 0.9763, respectively. Furthermore, the ANFIS-RSA technique was applied to the India dataset to examine its efficiency and achieved the best R² value (0.98). Consequently, the suggested technique proved the most beneficial for high-precision forecasting of COVID-19 on time-series data.
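The error indicators compared above can be written in a few lines; a minimal NumPy sketch with made-up values (RMSRE is analogous to RMSE, with each error divided by the true value before squaring):

```python
import numpy as np

def forecast_errors(y_true, y_pred):
    """Common time-series error indicators used to compare forecasting models."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    e = y_true - y_pred
    rmse = np.sqrt(np.mean(e ** 2))
    mae = np.mean(np.abs(e))
    mape = np.mean(np.abs(e / y_true)) * 100           # assumes y_true != 0
    ss_res = np.sum(e ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1 - ss_res / ss_tot                           # coefficient of determination
    return rmse, mae, mape, r2

# Made-up case counts for illustration only
rmse, mae, mape, r2 = forecast_errors([100, 120, 150], [110, 115, 145])
print(rmse, mae, mape, r2)
```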
Affiliation(s)
- Thandra Jithendra
- Department of Mathematics, School of Advanced Sciences, Vellore Institute of Technology, Vellore 632014, India
- Shaik Sharief Basha
- Department of Mathematics, School of Advanced Sciences, Vellore Institute of Technology, Vellore 632014, India
4. Tumor Localization and Classification from MRI of Brain using Deep Convolution Neural Network and Salp Swarm Algorithm. Cognit Comput 2023. DOI: 10.1007/s12559-022-10096-2.
5. Introducing the Effective Features Using the Particle Swarm Optimization Algorithm to Increase Accuracy in Determining the Volume Percentages of Three-Phase Flows. Processes (Basel) 2023. DOI: 10.3390/pr11010236.
Abstract
This research presents an intelligent system for detecting the volume percentages of three-phase fluids passing through oil pipes. The detection system consists of an X-ray tube, a Pyrex glass pipe, and two sodium iodide detectors. A three-phase fluid of water, gas, and oil was simulated inside the pipe in two flow regimes, annular and stratified, with volume percentages from 10% to 80% considered for each phase. X-rays emitted from the source pass through the pipe containing the three-phase fluid, and the photon intensity is recorded by the two detectors. The simulation was implemented with a Monte Carlo N-Particle (MCNP) code. After all flow regimes had been run at the different volume percentages, the signals recorded by the detectors were labeled. Three frequency-domain characteristics and five wavelet-transform characteristics were extracted from the received signals of each detector, giving a total of 16 features per test. A feature-selection system based on the particle swarm optimization (PSO) algorithm was applied to determine the best combination of extracted features, and seven features were introduced as the best set for determining volume percentages. These characteristics were used as the input of a Multilayer Perceptron (MLP) neural network with seven input neurons (the selected characteristics) and two output neurons (the volume percentages of gas and water). The highest error obtained in determining volume percentages was an MSE of 0.13, low compared with previous works. By using the PSO algorithm to select the most informative features, the accuracy of the current research in determining volume percentages has increased significantly.
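As a sketch of PSO-based feature selection, the snippet below runs a minimal binary PSO over synthetic data standing in for the 16 extracted features; the data, the least-squares fitness function, and the PSO constants are illustrative assumptions, not the paper's MCNP setup or MLP model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in: 200 tests x 16 extracted features, one regression
# target (a volume percentage). Only columns 0, 3, and 7 carry signal.
X = rng.normal(size=(200, 16))
y = X[:, 0] - 2 * X[:, 3] + 0.5 * X[:, 7] + 0.1 * rng.normal(size=200)

def fitness(mask):
    """Lower is better: least-squares fit error using only selected features."""
    mask = mask.astype(bool)
    if not mask.any():
        return np.inf
    coef, *_ = np.linalg.lstsq(X[:, mask], y, rcond=None)
    return float(np.mean((X[:, mask] @ coef - y) ** 2))

# Minimal binary PSO with a sigmoid transfer function.
n_particles, n_iter, n_feat = 20, 30, 16
pos = (rng.random((n_particles, n_feat)) < 0.5).astype(float)  # 0/1 selections
vel = rng.normal(size=(n_particles, n_feat))
pbest = pos.copy()
pbest_fit = np.array([fitness(p) for p in pos])
gbest = pbest[np.argmin(pbest_fit)].copy()
gbest_fit = pbest_fit.min()
for _ in range(n_iter):
    r1, r2 = rng.random(vel.shape), rng.random(vel.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = (rng.random(vel.shape) < 1 / (1 + np.exp(-vel))).astype(float)
    fits = np.array([fitness(p) for p in pos])
    better = fits < pbest_fit
    pbest[better], pbest_fit[better] = pos[better], fits[better]
    if fits.min() < gbest_fit:
        gbest_fit, gbest = fits.min(), pos[np.argmin(fits)].copy()

selected = np.flatnonzero(gbest)   # indices of the chosen feature subset
print(selected, gbest_fit)
```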
6. Naz Z, Khan MUG, Saba T, Rehman A, Nobanee H, Bahaj SA. An Explainable AI-Enabled Framework for Interpreting Pulmonary Diseases from Chest Radiographs. Cancers (Basel) 2023; 15:314. PMID: 36612309; PMCID: PMC9818469; DOI: 10.3390/cancers15010314. Received: 11/29/2022; Revised: 12/22/2022; Accepted: 12/23/2022.
Abstract
Explainable Artificial Intelligence is a key component of artificially intelligent systems that aim to explain their classification results, and such explanations are essential for automatic disease diagnosis in healthcare. The human respiratory system is badly affected by various pulmonary diseases, and automatic classification with explanation can be used to detect them. In this paper, we introduce a CNN-based transfer-learning approach for automatically explaining pulmonary diseases, i.e., edema, tuberculosis, nodules, and pneumonia, from chest radiographs. Among these pulmonary diseases, pneumonia, which can be caused by COVID-19, is deadly; therefore, radiographs of COVID-19 are used for the explanation task. We used the ResNet50 neural network, trained extensively on the COVID-CT and COVIDNet datasets. The interpretable model LIME is used to explain the classification results: LIME highlights the features of the input image that are important for generating the classification result. We evaluated the explanations against images highlighted by radiologists and found that our model highlights and explains the same regions. With our fine-tuned model we achieved improved classification accuracies of 93% and 97% on the two datasets, respectively. The analysis indicates that this research not only improves classification results but also provides explanations of pulmonary diseases with advanced deep-learning methods. It would assist radiologists with automatic disease detection and explanation, supporting clinical decisions and helping to diagnose and treat pulmonary diseases at an early stage.
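LIME fits a local surrogate model over many randomly masked versions of the input; the miniature below illustrates the simpler occlusion variant of the same perturb-and-measure idea, with a toy scoring function standing in for the trained ResNet50 (in practice one would run the `lime` package against the real network):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "classifier": scores an 8x8 image by the mean intensity of its
# top-left quadrant. A real pipeline would call the trained ResNet50 here.
def model_score(img):
    return img[:4, :4].mean()

img = rng.random((8, 8))

# Perturb-and-measure in miniature: mask each 4x4 block ("superpixel"),
# record the score drop, and rank regions by importance.
importance = np.zeros((2, 2))
for bi in range(2):
    for bj in range(2):
        masked = img.copy()
        masked[4*bi:4*bi+4, 4*bj:4*bj+4] = 0.0   # "turn off" one region
        importance[bi, bj] = model_score(img) - model_score(masked)

print(importance)   # only the top-left block changes the score
```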
Affiliation(s)
- Zubaira Naz
- Department of Computer Science, University of Engineering and Technology Lahore, Lahore 54890, Pakistan
- Muhammad Usman Ghani Khan
- Department of Computer Science, University of Engineering and Technology Lahore, Lahore 54890, Pakistan
- Tanzila Saba
- Artificial Intelligence & Data Analytics Lab, CCIS, Prince Sultan University, Riyadh 11586, Saudi Arabia
- Amjad Rehman
- Artificial Intelligence & Data Analytics Lab, CCIS, Prince Sultan University, Riyadh 11586, Saudi Arabia
- Correspondence: (A.R.); (H.N.)
- Haitham Nobanee
- College of Business, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Oxford Center for Islamic Studies, University of Oxford, Oxford OX3 0EE, UK
- Faculty of Humanities & Social Sciences, University of Liverpool, Liverpool L69 7WZ, UK
- Correspondence: (A.R.); (H.N.)
- Saeed Ali Bahaj
- MIS Department, College of Business Administration, Prince Sattam bin Abdulaziz University, Alkharj 11942, Saudi Arabia
7. Identification of Anomalies in Mammograms through Internet of Medical Things (IoMT) Diagnosis System. Computational Intelligence and Neuroscience 2022; 2022:1100775. PMID: 36188701; PMCID: PMC9522488; DOI: 10.1155/2022/1100775. Received: 05/17/2022; Revised: 08/16/2022; Accepted: 08/26/2022.
Abstract
Breast cancer is a primary health issue that women may face at some point in their lifetime, and in severe cases it may lead to death. A mammography procedure is used for finding suspicious masses in the breast. Teleradiology is employed for online treatment and diagnosis because of the shortage of trained radiologists in underdeveloped and remote areas, but the availability of online radiologists is uncertain where network coverage is inadequate. In such circumstances, a Computer-Aided Diagnosis (CAD) framework is useful for identifying breast abnormalities without expert radiologists. This research presents a decision-making system based on the Internet of Medical Things (IoMT) to identify breast anomalies. The proposed technique uses a region-growing algorithm to segment the tumor and extract the suspicious region. Texture- and shape-based features are then employed to characterize breast lesions: the extracted features include first- and second-order statistics, the center-symmetric local binary pattern (CS-LBP), the histogram of oriented gradients (HOG), and shape descriptors obtained from the mammograms. Finally, a fusion of machine-learning algorithms, including K-Nearest Neighbor (KNN), Support Vector Machine (SVM), and Linear Discriminant Analysis (LDA), is employed to classify breast cancer using the composite feature vectors. The experimental results show the proposed framework's efficacy in separating cancerous lesions from benign ones under 10-fold cross-validation, attaining accuracy, sensitivity, and specificity of 96.3%, 94.1%, and 98.2%, respectively, with shape-based features on the MIAS database. This research thus contributes a model capable of earlier and more accurate breast tumor detection.
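The fusion-and-validation step can be sketched with scikit-learn's majority-vote ensemble; here synthetic features stand in for the texture/shape vectors, and the sizes and parameters are illustrative assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the composite texture/shape feature vectors.
X, y = make_classification(n_samples=300, n_features=20, n_informative=8,
                           random_state=0)

# Fusion of KNN, SVM, and LDA by majority vote, scored with 10-fold CV.
fusion = VotingClassifier([
    ("knn", KNeighborsClassifier(n_neighbors=5)),
    ("svm", make_pipeline(StandardScaler(), SVC())),
    ("lda", LinearDiscriminantAnalysis()),
])
scores = cross_val_score(fusion, X, y, cv=10)
print(scores.mean())
```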
8. Cloud Computing-Based Framework for Breast Tumor Image Classification Using Fusion of AlexNet and GLCM Texture Features with Ensemble Multi-Kernel Support Vector Machine (MK-SVM). Computational Intelligence and Neuroscience 2022; 2022:7403302. PMID: 36093488; PMCID: PMC9452941; DOI: 10.1155/2022/7403302. Received: 06/04/2022; Revised: 07/02/2022; Accepted: 07/29/2022.
Abstract
Breast cancer is common among women all over the world, and early identification lowers death rates. However, it is difficult to determine whether lesions are cancerous or noncancerous because of inconsistencies in their image appearance. Machine-learning techniques are widely employed in imaging analysis as a diagnostic method for breast cancer classification, but patients in remote areas cannot take advantage of such systems when they are not available on the cloud. Breast cancer detection for remote patients is therefore indispensable, and it is only possible through cloud computing. The user feeds images into the cloud system, which are then investigated through a computer-aided diagnosis (CAD) system. Such systems could also be used to monitor patients, especially older adults and those with disabilities, particularly in remote areas of developing countries that lack medical facilities and paramedic staff. In the proposed CAD system, a fusion of the AlexNet architecture and GLCM (gray-level co-occurrence matrix) features is used to extract distinguishable texture features from breast tissue. Finally, to attain higher precision, an ensemble of MK-SVMs is used. For testing, the proposed model is applied to the MIAS dataset, a commonly used breast-image database, and achieves 96.26% accuracy.
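A GLCM texture descriptor of the kind fused here can be computed in a few lines of NumPy; this sketch uses a single horizontal offset and two classic features (real pipelines often use `skimage.feature.graycomatrix`, and the AlexNet branch is omitted entirely):

```python
import numpy as np

def glcm_features(img, levels=8):
    """Gray-level co-occurrence matrix for the horizontal (0, 1) offset,
    plus two classic Haralick-style texture features."""
    q = (img * levels / (img.max() + 1e-9)).astype(int).clip(0, levels - 1)
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1                       # count co-occurring level pairs
    p = glcm / glcm.sum()                     # normalise to joint probabilities
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)
    homogeneity = np.sum(p / (1 + (i - j) ** 2))
    return contrast, homogeneity

rng = np.random.default_rng(0)
smooth = np.tile(np.linspace(0, 1, 32), (32, 1))  # gradient: neighbours similar
noisy = rng.random((32, 32))                      # noise: neighbours unrelated
c_s, h_s = glcm_features(smooth)
c_n, h_n = glcm_features(noisy)
print(c_s, c_n)   # smooth texture has much lower contrast than noise
```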
9. Rasheed J, Shubair RM. Screening Lung Diseases Using Cascaded Feature Generation and Selection Strategies. Healthcare (Basel) 2022; 10:1313. PMID: 35885839; PMCID: PMC9317294; DOI: 10.3390/healthcare10071313. Received: 05/04/2022; Revised: 07/13/2022; Accepted: 07/13/2022.
Abstract
The global COVID-19 pandemic is still causing a health emergency in several parts of the world. Apart from standard testing techniques for identifying positive cases, auxiliary tools based on artificial intelligence can help with the identification and containment of the disease, and the need for alternative smart diagnostic tools has become more urgent. In this study, a smart auxiliary framework based on machine learning (ML) is proposed; it can help medical practitioners identify COVID-19-affected patients among pneumonia patients and healthy individuals, and can help monitor the status of COVID-19 cases using X-ray images. We investigated the application of transfer-learning (TL) networks and various feature-selection techniques for improving the classification accuracy of ML classifiers. Three TL networks were tested to generate relevant features from images: AlexNet, ResNet101, and SqueezeNet. The generated features were further refined by applying feature-selection methods, including iterative neighborhood component analysis (iNCA), iterative chi-square (iChi2), and iterative maximum relevance-minimum redundancy (iMRMR). Finally, classification was performed using convolutional neural network (CNN), linear discriminant analysis (LDA), and support vector machine (SVM) classifiers. Moreover, the study used the stationary wavelet (SW) transform to handle overfitting by decomposing each image in the training set up to three levels, and enhanced the dataset with data-augmentation operations including random rotation, translation, and shear. The analysis revealed that the combination of AlexNet, ResNet101, SqueezeNet, iChi2, and SVM was very effective, producing a classification accuracy of 99.2% on X-ray images. Similarly, AlexNet, ResNet101, and SqueezeNet, along with iChi2 and the proposed CNN network, yielded 99.0% accuracy. The results show that the cascaded feature generation and selection strategies significantly affect the performance of the classifier.
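The cascaded generate-then-select idea can be sketched with scikit-learn; here a single-pass chi-square selection (the paper's iChi2 iterates over the number of kept features) feeds an SVM, with the digits dataset standing in for deep features extracted from X-rays:

```python
from sklearn.datasets import load_digits
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Stand-in data: digits images in place of TL-network features from X-rays.
X, y = load_digits(return_X_y=True)   # chi2 requires non-negative features

# Cascade: keep the 32 highest-scoring features by chi-square, then classify.
pipe = make_pipeline(SelectKBest(chi2, k=32), SVC())
scores = cross_val_score(pipe, X, y, cv=5)
print(scores.mean())
```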
Affiliation(s)
- Jawad Rasheed
- Department of Software Engineering, Nisantasi University, Istanbul 34398, Turkey
- Correspondence:
- Raed M. Shubair
- Department of Electrical and Computer Engineering, New York University (NYU), Abu Dhabi 129188, United Arab Emirates
10. Mapping Roofing with Asbestos-Containing Material by Using Remote Sensing Imagery and Machine Learning-Based Image Classification: A State-of-the-Art Review. Sustainability 2022. DOI: 10.3390/su14138068.
Abstract
Building roofing produced with asbestos-containing materials (ACM) is a significant concern because of its detrimental health implications. Efficiently locating asbestos roofing is essential to proactively mitigate and manage the potential health risks of this legacy building material. Several studies have utilised remote sensing imagery and machine learning-based image classification methods for mapping roofs with asbestos-containing materials; however, there has not yet been a critical review of these classification methods that provides coherent guidance on the use of different remote sensing images and classification processes. This paper critically reviews the latest work on mapping asbestos roofs to identify the challenges and discuss possible solutions for improving the mapping process. A peer review of studies on asbestos roof mapping published from 2012 to 2022 was conducted to synthesise and evaluate the input imagery types and classification methods. The significant challenges in the mapping process were then identified, and possible solutions were suggested. The results showed that classifying hyperspectral imagery with traditional pixel-based classifiers caused large omission errors. Classifying very-high-resolution multispectral imagery with object-based methods improved the accuracy of ACM roof identification; however, non-optimal segmentation parameters, inadequate training data in supervised methods, and analyst subjectivity in rule-based classifications were reported as significant challenges. While only one study investigated convolutional neural networks for asbestos roof mapping, other remote sensing applications have demonstrated promising results with deep-learning-based models. This paper suggests further studies utilising Mask R-CNN segmentation and 3D-CNN classification within the conventional approaches, and developing end-to-end deep semantic classification models to map roofs with asbestos-containing materials.
11. Facial Emotion Recognition Using Conventional Machine Learning and Deep Learning Methods: Current Achievements, Analysis and Remaining Challenges. Information 2022. DOI: 10.3390/info13060268.
Abstract
Facial emotion recognition (FER) is an emerging and significant research area in the pattern-recognition domain. Non-verbal communication plays a significant role in daily life, accounting for roughly 55% to 93% of overall communication. Facial emotion analysis is used in surveillance video, expression analysis, gesture recognition, smart homes, computer games, depression treatment, patient monitoring, anxiety assessment, lie detection, psychoanalysis, paralinguistic communication, operator-fatigue detection, and robotics. In this paper, we present a detailed review of FER, drawing on reputable research published during the current decade. The review covers conventional machine learning (ML) and various deep learning (DL) approaches. Further, publicly available FER datasets and evaluation metrics are discussed and compared with benchmark results. This paper provides a holistic review of FER using traditional ML and DL methods to highlight the future gaps in this domain for new researchers. It serves as a guidebook for young researchers in the FER area, providing a general understanding and basic knowledge of current state-of-the-art methods, and offers experienced researchers productive directions for future work.