1
Pancholi S, Wachs JP, Duerstock BS. Use of Artificial Intelligence Techniques to Assist Individuals with Physical Disabilities. Annu Rev Biomed Eng 2024; 26:1-24. PMID: 37832939. DOI: 10.1146/annurev-bioeng-082222-012531.
Abstract
Assistive technologies (AT) enable people with disabilities to perform activities of daily living more independently, have greater access to community and healthcare services, and be more productive performing educational and/or employment tasks. Integrating artificial intelligence (AI) with various agents, including electronics, robotics, and software, has revolutionized AT, resulting in groundbreaking technologies such as mind-controlled exoskeletons, bionic limbs, intelligent wheelchairs, and smart home assistants. This article provides a review of various AI techniques that have helped those with physical disabilities, including brain-computer interfaces, computer vision, natural language processing, and human-computer interaction. The current challenges and future directions for AI-powered advanced technologies are also addressed.
Affiliation(s)
- Sidharth Pancholi
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, Indiana, USA
- Juan P Wachs
- School of Industrial Engineering, Purdue University, West Lafayette, Indiana, USA
- Bradley S Duerstock
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, Indiana, USA
- School of Industrial Engineering, Purdue University, West Lafayette, Indiana, USA
2
Selvaraj V, Alagarsamy M, Datchanamoorthy K, Manickam G. Band power feature part-based convolutional neural network with African vulture optimization fostered channel selection for EEG classification. Comput Methods Biomech Biomed Engin 2024:1-14. PMID: 38907638. DOI: 10.1080/10255842.2024.2356633.
Abstract
The electroencephalogram-based motor imagery (MI-EEG) classification task is significant for brain-computer interfaces (BCI). EEG acquisition requires many channels, which makes it difficult to use in real-world applications, and choosing an optimal channel subset without severely degrading classification performance is an open problem in the BCI field. To overcome this problem, a band power feature part-based convolutional neural network with African vulture optimization fostered channel selection for EEG classification (PCNNC-AVOACS-EEG) is proposed in this article. Initially, the input EEG signals are taken from BCI competition IV, dataset 1, and pre-processed with contrast-limited adaptive histogram equalization filtering. Features are then extracted from the pre-processed signals using the hexadecimal local adaptive binary pattern (HLABP) method, which captures alpha- and beta-band features from the EEG segments. Each EEG channel's band power data are used as features for a PCNNC to classify the EEG into three classes: two MI states and an idle state. The AVOA is applied within the band power feature PCNNC for channel selection, which improves classification accuracy on the test set, a vital indicator for real-time BCI applications. The proposed method is implemented in Python. In experiments, the proposed technique attains 17.91%, 20.46%, and 18.146% higher accuracy; 14.105%, 15.295%, and 5.291% higher area under the curve; and 70%, 60%, and 65.714% lower computation time compared with existing approaches.
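The abstract does not give the HLABP implementation, but the per-channel band power features it describes can be illustrated with a minimal sketch: alpha- and beta-band power computed from a periodogram. The function names, the 250 Hz sampling rate, and the band edges below are assumptions, not the paper's code.

```python
import numpy as np

def band_power(signal, fs, band):
    """Average power of `signal` within the frequency `band` (lo, hi) in Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[mask].mean()

def band_power_features(eeg, fs=250):
    """Per-channel alpha (8-13 Hz) and beta (13-30 Hz) band powers.

    eeg: array of shape (channels, samples). Returns shape (channels, 2).
    """
    bands = [(8.0, 13.0), (13.0, 30.0)]
    return np.array([[band_power(ch, fs, b) for b in bands] for ch in eeg])

# A 10 Hz sinusoid should carry far more alpha than beta power, and vice versa.
t = np.arange(0, 2.0, 1.0 / 250)
eeg = np.vstack([np.sin(2 * np.pi * 10 * t), np.sin(2 * np.pi * 20 * t)])
feats = band_power_features(eeg, fs=250)
```

In a real pipeline each row of `feats` would feed one channel branch of the part-based CNN, and the optimizer would search over which rows to keep.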
Affiliation(s)
- Vairaprakash Selvaraj
- Department of Electronics and Communication Engineering, Ramco Institute of Technology, Rajapalayam, Tamil Nadu, India
- Manjunathan Alagarsamy
- Department of Electronics and Communication Engineering, K. Ramakrishnan College of Technology, Trichy, Tamil Nadu, India
- Kavitha Datchanamoorthy
- Department of Computer Science and Engineering, Easwari Engineering College, Chennai, Tamil Nadu, India
- Geethalakshmi Manickam
- Department of Electronics and Communication Engineering, Kongunadu College of Engineering and Technology, Trichy, Tamil Nadu, India
3
Tang Z, Wang H, Cui Z, Jin X, Zhang L, Peng Y, Xing B. An Upper-Limb Rehabilitation Exoskeleton System Controlled by MI Recognition Model With Deep Emphasized Informative Features in a VR Scene. IEEE Trans Neural Syst Rehabil Eng 2023; 31:4390-4401. PMID: 37910412. DOI: 10.1109/tnsre.2023.3329059.
Abstract
The prevalence of stroke continues to increase with global aging. Based on the motor imagery (MI) brain-computer interface (BCI) paradigm and virtual reality (VR) technology, we designed and developed an upper-limb rehabilitation exoskeleton system (VR-ULE) in VR scenes for stroke patients. The VR-ULE system uses an MI electroencephalogram (EEG) recognition model with a convolutional neural network and squeeze-and-excitation (SE) blocks to obtain the patient's motion intentions and control the exoskeleton during rehabilitation training movements. Because of individual differences in EEG, the frequency bands with optimal MI EEG features differ from patient to patient. Therefore, the weights of the feature channels are learned with SE blocks to emphasize informative frequency-band features. The MI cues in the VR-based virtual scenes can improve interhemispheric balance and the neuroplasticity of patients, and make up for disadvantages of current MI-BCIs such as single usage scenarios, poor individual adaptability, and many interfering factors. We designed an offline training experiment to evaluate the feasibility of the EEG recognition strategy, and an online control experiment to verify the effectiveness of the VR-ULE system. The results showed that the MI classification method with MI cues in the VR scenes improved the accuracy of MI classification (86.49% ± 3.02%); all subjects performed two types of rehabilitation training tasks with their own models trained in the offline experiment, with highest average completion rates of 86.82% ± 4.66% and 88.48% ± 5.84%. The VR-ULE system can efficiently help stroke patients with hemiplegia complete upper-limb rehabilitation training tasks, and provides new methods and strategies for BCI-based rehabilitation devices.
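The paper's network is not reproduced here, but the channel-recalibration idea behind an SE block can be sketched in a few lines of numpy: global average pooling (squeeze), a small bottleneck with ReLU (excitation), and a sigmoid gate that rescales each feature channel. All weights, shapes, and the reduction ratio below are hypothetical stand-ins for learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(x, w1, w2):
    """Squeeze-and-excitation channel recalibration.

    x: feature maps of shape (channels, time).
    w1: (channels, channels // r) squeeze projection.
    w2: (channels // r, channels) excitation projection.
    Returns x scaled per channel by a gate in (0, 1).
    """
    squeeze = x.mean(axis=1)                # global average pool -> (channels,)
    hidden = np.maximum(squeeze @ w1, 0.0)  # ReLU bottleneck
    gate = sigmoid(hidden @ w2)             # per-channel weights in (0, 1)
    return x * gate[:, None]

channels, time, r = 8, 125, 4
x = rng.standard_normal((channels, time))
w1 = rng.standard_normal((channels, channels // r))
w2 = rng.standard_normal((channels // r, channels))
y = se_block(x, w1, w2)
```

In the VR-ULE model the gate plays the role described in the abstract: channels carrying a patient's informative frequency-band features get weights near 1, uninformative ones are suppressed.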
4
Dong Y, Tang X, Li Q, Wang Y, Jiang N, Tian L, Zheng Y, Li X, Zhao S, Li G, Fang P. An Approach for EEG Denoising Based on Wasserstein Generative Adversarial Network. IEEE Trans Neural Syst Rehabil Eng 2023; 31:3524-3534. PMID: 37643110. DOI: 10.1109/tnsre.2023.3309815.
Abstract
Electroencephalogram (EEG) recordings often contain artifacts that lower signal quality. Many efforts have been made to eliminate or at least minimize these artifacts, but most rely on visual inspection and manual operations, which are time- and labor-consuming, subjective, and unsuitable for filtering massive EEG data in real time. In this paper, we propose a deep learning framework named Artifact Removal Wasserstein Generative Adversarial Network (AR-WGAN), in which the trained model decomposes input EEG, detects and deletes artifacts, and then reconstructs the denoised signal within a short time. The proposed approach was systematically compared with commonly used denoising methods, including the denoising autoencoder, Wiener filter, and empirical mode decomposition, on both public and self-collected datasets. The experimental results demonstrated promising performance of AR-WGAN for automatic artifact removal on massive data across subjects, with a correlation coefficient up to 0.726 ± 0.033, and temporal and spatial relative root-mean-square errors as low as 0.176 ± 0.046 and 0.761 ± 0.046, respectively. This work demonstrates AR-WGAN as a high-performance end-to-end method for EEG denoising, with potential online applications in clinical EEG monitoring and brain-computer interfaces.
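AR-WGAN's architecture is not specified in the abstract; what can be sketched safely is the Wasserstein objective such a model trains against. Below, a toy hand-written critic stands in for the learned network, scoring low-variance ("clean-looking") segments higher; everything except the loss formulas is an illustrative assumption.

```python
import numpy as np

def critic_loss(critic, real, fake):
    """Wasserstein critic objective: maximize E[D(real)] - E[D(fake)].

    Returned negated, as a loss to be minimized by the critic.
    """
    return -(np.mean([critic(x) for x in real]) - np.mean([critic(x) for x in fake]))

def generator_loss(critic, fake):
    """Generator minimizes -E[D(fake)], pushing its outputs toward real scores."""
    return -np.mean([critic(x) for x in fake])

# Toy critic: scores segments by (negative) variance, so clean sinusoids rank
# above the same sinusoids buried in heavy additive noise.
critic = lambda seg: -np.var(seg)

rng = np.random.default_rng(1)
clean = [np.sin(np.linspace(0, 8 * np.pi, 256)) for _ in range(16)]
noisy = [c + 2.0 * rng.standard_normal(256) for c in clean]

c_loss = critic_loss(critic, clean, noisy)
```

When real and fake distributions are well separated by the critic, `critic_loss` is strongly negative; training the generator drives `generator_loss` down by making its denoised outputs score like real clean EEG.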
5
Tang X, Zhang W, Wang H, Wang T, Tan C, Zou M, Xu Z. Dynamic pruning group equivariant network for motor imagery EEG recognition. Front Bioeng Biotechnol 2023; 11:917328. PMID: 37324415. PMCID: PMC10267707. DOI: 10.3389/fbioe.2023.917328.
Abstract
Introduction: Decoding the motor imagery electroencephalogram (MI-EEG) is the most critical part of a brain-computer interface (BCI) system. However, the inherent complexity of EEG signals makes them challenging to analyze and model. Methods: To effectively extract and classify the features of EEG signals, a classification algorithm for motor imagery EEG signals based on a dynamic pruning equivariant group convolutional network is proposed. Group convolutional networks can learn powerful representations based on symmetric patterns, but they lack clear methods for learning meaningful relationships between them. The dynamic pruning equivariant group convolution proposed in this paper enhances meaningful symmetric combinations and suppresses unreasonable and misleading ones. At the same time, a new dynamic pruning method is proposed to dynamically evaluate the importance of parameters, allowing pruned connections to be restored. Results and Discussion: The experimental results show that the pruning group equivariant convolutional network outperforms traditional benchmark methods on a benchmark motor imagery EEG dataset. This research can also be transferred to other research areas.
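The abstract's key mechanism, a pruning step that can restore previously pruned connections, can be sketched independently of the group convolution. A minimal magnitude-based version, assuming (as dynamic pruning schemes typically do) that the dense weights are retained so a masked connection can regrow:

```python
import numpy as np

def dynamic_prune(weights, sparsity):
    """Magnitude-based dynamic pruning mask.

    Zeroes out the `sparsity` fraction of weights with the smallest magnitude.
    The dense weights are kept elsewhere, so a pruned connection whose
    magnitude later grows back is restored the next time the mask is computed.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return np.ones(weights.shape, dtype=bool)
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.abs(weights) > threshold

rng = np.random.default_rng(2)
w = rng.standard_normal((4, 4))
mask = dynamic_prune(w, sparsity=0.5)

# A later "gradient update" can revive pruned weights, because the dense
# weights are retained rather than permanently deleted.
w_updated = w + 0.0
w_updated[~mask] += 5.0
mask2 = dynamic_prune(w_updated, sparsity=0.5)
revived = (~mask) & mask2
```

Recomputing the mask every step is what makes the pruning "dynamic": importance is re-evaluated continuously instead of being fixed after a one-shot prune.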
Affiliation(s)
- Xianlun Tang
- Department of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, China
- Wei Zhang
- Department of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, China
- Huiming Wang
- Department of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, China
- Tianzhu Wang
- Department of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, China
- Cong Tan
- Department of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, China
- Mi Zou
- Department of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, China
- Zihui Xu
- Xinqiao Hospital, Army Medical University, Chongqing, China
6
Svantesson M, Olausson H, Eklund A, Thordstein M. Get a New Perspective on EEG: Convolutional Neural Network Encoders for Parametric t-SNE. Brain Sci 2023; 13:453. PMID: 36979263. PMCID: PMC10046040. DOI: 10.3390/brainsci13030453.
Abstract
t-distributed stochastic neighbor embedding (t-SNE) is a method for reducing high-dimensional data to a low-dimensional representation, and is mostly used for visualizing data. In parametric t-SNE, a neural network learns to reproduce this mapping. When used for EEG analysis, the data are usually first transformed into a set of features, but it is not known which features are optimal. The principle of t-SNE was used to train convolutional neural network (CNN) encoders to learn to produce both a high- and a low-dimensional representation, eliminating the need for feature engineering. To evaluate the method, the Temple University EEG Corpus was used to create three datasets with distinct EEG characteristics: (1) wakefulness and sleep; (2) interictal epileptiform discharges; and (3) seizure activity. The CNN encoders produced low-dimensional representations of the datasets with a structure that conformed well to the EEG characteristics and generalized to new data. Compared to parametric t-SNE for either a short-time Fourier transform or wavelet representation of the datasets, the developed CNN encoders performed equally well in separating categories, as assessed by support vector machines. The CNN encoders generally produced a higher degree of clustering, both visually and in the number of clusters detected by k-means clustering. The developed principle is promising and could be further developed to create general tools for exploring relations in EEG data.
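The loss a parametric t-SNE encoder minimizes can be sketched directly from the t-SNE definition: Gaussian affinities in the input space, Student-t affinities in the embedding, and the KL divergence between them. The toy 2-D points below stand in for EEG feature vectors; the fixed bandwidth (rather than per-point perplexity calibration) is a simplifying assumption.

```python
import numpy as np

def pairwise_sq_dists(x):
    diff = x[:, None, :] - x[None, :, :]
    return (diff ** 2).sum(-1)

def p_matrix(x, sigma=1.0):
    """High-dimensional affinities: symmetric Gaussian kernel, normalized."""
    p = np.exp(-pairwise_sq_dists(x) / (2 * sigma ** 2))
    np.fill_diagonal(p, 0.0)
    return p / p.sum()

def q_matrix(y):
    """Low-dimensional affinities: Student-t kernel (1 degree of freedom)."""
    q = 1.0 / (1.0 + pairwise_sq_dists(y))
    np.fill_diagonal(q, 0.0)
    return q / q.sum()

def tsne_loss(p, q, eps=1e-12):
    """KL(P || Q): the objective a parametric t-SNE encoder is trained on."""
    return float((p * np.log((p + eps) / (q + eps))).sum())

rng = np.random.default_rng(3)
x = rng.standard_normal((20, 2))        # stand-in for EEG feature vectors
y_good = x.copy()                        # embedding that preserves structure
y_bad = x[rng.permutation(20)]           # embedding with scrambled structure

loss_good = tsne_loss(p_matrix(x), q_matrix(y_good))
loss_bad = tsne_loss(p_matrix(x), q_matrix(y_bad))
```

In the paper's setting, a CNN encoder replaces the fixed mapping from `x` to `y_good`, and gradient descent on this KL term shapes the low-dimensional output.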
Affiliation(s)
- Mats Svantesson
- Department of Clinical Neurophysiology, University Hospital of Linköping, 58185 Linköping, Sweden
- Center for Social and Affective Neuroscience, Linköping University, 58183 Linköping, Sweden
- Center for Medical Image Science and Visualization, Linköping University, 58183 Linköping, Sweden
- Department of Biomedical and Clinical Sciences, Linköping University, 58183 Linköping, Sweden
- Håkan Olausson
- Department of Clinical Neurophysiology, University Hospital of Linköping, 58185 Linköping, Sweden
- Center for Social and Affective Neuroscience, Linköping University, 58183 Linköping, Sweden
- Department of Biomedical and Clinical Sciences, Linköping University, 58183 Linköping, Sweden
- Anders Eklund
- Center for Medical Image Science and Visualization, Linköping University, 58183 Linköping, Sweden
- Department of Biomedical Engineering, Linköping University, 58183 Linköping, Sweden
- Division of Statistics & Machine Learning, Department of Computer and Information Science, Linköping University, 58183 Linköping, Sweden
- Magnus Thordstein
- Department of Clinical Neurophysiology, University Hospital of Linköping, 58185 Linköping, Sweden
- Center for Medical Image Science and Visualization, Linköping University, 58183 Linköping, Sweden
- Department of Biomedical and Clinical Sciences, Linköping University, 58183 Linköping, Sweden
7
Domínguez-Ruiz A, López-Caudana EO, Lugo-González E, Espinosa-García FJ, Ambrocio-Delgado R, García UD, López-Gutiérrez R, Alfaro-Ponce M, Ponce P. Low limb prostheses and complex human prosthetic interaction: A systematic literature review. Front Robot AI 2023; 10:1032748. PMID: 36860557. PMCID: PMC9968924. DOI: 10.3389/frobt.2023.1032748.
Abstract
A few years ago, powered prostheses triggered new technological advances in diverse areas such as mobility, comfort, and design, which have been essential to improving the quality of life of individuals with lower-limb disability. The human body is a complex system involving mental and physical health, implying a dependent relationship between its organs and lifestyle. The elements used in the design of these prostheses are critical and related to the lower-limb amputation level, user morphology, and human-prosthetic interaction. Hence, several technologies have been employed to meet end users' needs, for example, advanced materials, control systems, electronics, energy management, signal processing, and artificial intelligence. This paper presents a systematic literature review of such technologies to identify the latest advances, challenges, and opportunities in developing lower-limb prostheses, with an analysis of the most significant papers. Powered prostheses for walking on different terrains were illustrated and examined, considering the kind of movement the device should perform as well as the electronics, automatic control, and energy efficiency. Results show the lack of a specific, generalized structure for new developments to follow, gaps in energy management, and the need for smoother patient interaction. Additionally, Human Prosthetic Interaction (HPI) is a term introduced in this paper, since no other research has integrated this interaction into communication between the artificial limb and the end user. The main goal of this paper is to provide, with the evidence found, a set of steps and components to be followed by new researchers and experts looking to improve knowledge in this field.
Affiliation(s)
- Adan Domínguez-Ruiz
- Institute for the Future of Education, Tecnologico de Monterrey, Mexico City, México
- Esther Lugo-González
- Instituto de Electrónica y Mecatrónica, Universidad Tecnológica de la Mixteca, Huajuapan de León, Oaxaca, México
- Rocío Ambrocio-Delgado
- División de Estudios de Posgrado, Universidad Tecnológica de la Mixteca, Huajuapan de León, Oaxaca, México
- Ulises D. García
- CONACYT-CINVESTAV, Av. Instituto Politécnico Nacional 2508, col. San Pedro Zacatenco, Ciudad de México, México
- Ricardo López-Gutiérrez
- CONACYT-CINVESTAV, Av. Instituto Politécnico Nacional 2508, col. San Pedro Zacatenco, Ciudad de México, México
- Mariel Alfaro-Ponce
- Institute of Advanced Materials for Sustainable Manufacturing, Tecnologico de Monterrey, Mexico City, México
- Pedro Ponce
- Institute of Advanced Materials for Sustainable Manufacturing, Tecnologico de Monterrey, Mexico City, México
8
Yoo J, Yoo I, Youn I, Kim SM, Yu R, Kim K, Kim K, Lee SB. Residual one-dimensional convolutional neural network for neuromuscular disorder classification from needle electromyography signals with explainability. Comput Methods Programs Biomed 2022; 226:107079. PMID: 36191354. DOI: 10.1016/j.cmpb.2022.107079.
Abstract
BACKGROUND AND OBJECTIVE: Neuromuscular disorders are diseases that damage the ability to control body movements. Needle electromyography (nEMG), an invasive electrophysiological test measuring the electrical signals generated by a muscle, is often used to diagnose neuromuscular disorders. Characteristics of nEMG signals are manually analyzed by an electromyographer to diagnose the type of neuromuscular disorder, and this process depends heavily on the electromyographer's subjective experience. Contemporary computer-aided methods have used deep learning image classification models, which are not optimized for classifying signals. Additionally, model explainability, which is crucial in medical applications, was not addressed. This study aims to improve prediction accuracy and inference time, and to explain model predictions, in nEMG neuromuscular disorder classification. METHODS: This study introduces nEMGNet, a one-dimensional convolutional neural network with residual connections designed to extract features from raw signals with higher accuracy and faster speed than the image classification models of previous works. Next, the divide-and-vote (DiVote) algorithm was designed to integrate each subject's heterogeneous nEMG signal data structures and to utilize muscle subtype information for higher accuracy. Finally, feature visualization was used to identify the causality of nEMGNet's diagnosis predictions, to ensure that nEMGNet made predictions on valid features, not artifacts. RESULTS: The proposed method was tested using 376 nEMG signals measured from 57 subjects between June 2015 and July 2020 at Seoul National University Hospital. In the three-class classification task, nEMGNet's prediction accuracy on nEMG signal segments was 62.35%, and the subject-level diagnosis accuracy of nEMGNet with the DiVote algorithm was 83.69%, over 5-fold cross-validation. nEMGNet outperformed all models from previous works on nEMG diagnosis classification, and heuristic analysis of the feature visualization results indicates that nEMGNet learned relevant nEMG signal characteristics. CONCLUSIONS: This study introduced nEMGNet and the DiVote algorithm, which demonstrated fast and accurate performance in predicting neuromuscular disorders from nEMG signals. The proposed method may be applied in medicine to support real-time electrophysiological diagnosis.
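The exact DiVote algorithm is not given in the abstract; a plain majority vote over segment-level predictions illustrates the segment-to-subject aggregation it describes. The class labels and the tie-breaking rule below are illustrative assumptions, not the paper's specification.

```python
from collections import Counter

def divide_and_vote(segment_predictions):
    """Aggregate per-segment class predictions into one subject-level diagnosis.

    segment_predictions: list of class labels, one per nEMG signal segment
    (segments may come from different muscles of the same subject).
    Returns the majority class; ties break toward the earlier-seen label.
    """
    counts = Counter(segment_predictions)
    return counts.most_common(1)[0][0]

# Noisy segment-level calls still yield a stable subject-level diagnosis.
subject_a = ["myopathy", "normal", "myopathy", "myopathy", "neuropathy"]
subject_b = ["normal", "normal", "neuropathy", "normal"]
diag_a = divide_and_vote(subject_a)
diag_b = divide_and_vote(subject_b)
```

This mirrors the accuracy gap reported above: individual segments are classified at 62.35%, but pooling a subject's segments lifts the subject-level figure to 83.69%.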
Affiliation(s)
- Jaesung Yoo
- School of Electrical Engineering, Korea University, Seoul, Republic of Korea
- Ilhan Yoo
- Department of Neurology, Nowon Eulji Medical Center, Eulji University School of Medicine, Seoul, Republic of Korea
- Ina Youn
- Department of Computer Science, New York University, NY, USA
- Sung-Min Kim
- Department of Neurology, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Republic of Korea
- Ri Yu
- Department of Software and Computer Engineering, Department of Artificial Intelligence, Ajou University
- Kwangsoo Kim
- Transdisciplinary Department of Medicine and Advanced Technology, Seoul National University Hospital, Seoul, Republic of Korea
- Keewon Kim
- Department of Rehabilitation Medicine, Seoul National University Hospital, Seoul, Republic of Korea
- Seung-Bo Lee
- Department of Medical Informatics, Keimyung University School of Medicine, Daegu, Republic of Korea
9
Li J, Liang T, Zeng Z, Xu P, Chen Y, Guo Z, Liang Z, Xie L. Motion intention prediction of upper limb in stroke survivors using sEMG signal and attention mechanism. Biomed Signal Process Control 2022. DOI: 10.1016/j.bspc.2022.103981.
10
Das A, Mock J, Irani F, Huang Y, Najafirad P, Golob E. Multimodal explainable AI predicts upcoming speech behavior in adults who stutter. Front Neurosci 2022; 16:912798. PMID: 35979337. PMCID: PMC9376608. DOI: 10.3389/fnins.2022.912798.
Abstract
A key goal of cognitive neuroscience is to better understand how dynamic brain activity relates to behavior. Such dynamics, in terms of spatial and temporal patterns of brain activity, are directly measured with neurophysiological methods such as EEG, but can also be indirectly expressed by the body. Autonomic nervous system activity is the best-known example, but muscles in the eyes and face can also index brain activity. Mostly parallel lines of artificial intelligence research show that EEG and facial muscles both encode information about emotion, pain, attention, and social interactions, among other topics. In this study, we examined adults who stutter (AWS) to understand the relations between dynamic brain and facial muscle activity and predictions about future behavior (fluent or stuttered speech). AWS can provide insight into brain-behavior dynamics because they naturally fluctuate between episodes of fluent and stuttered speech. We focused on the period when speech preparation occurs, and used EEG and facial muscle activity measured from video to predict whether the upcoming speech would be fluent or stuttered. An explainable self-supervised multimodal architecture learned the temporal dynamics of both EEG and facial muscle movements during speech preparation in AWS, and predicted fluent or stuttered speech at 80.8% accuracy (chance = 50%). Specific EEG and facial muscle signals distinguished fluent and stuttered trials, and systematically varied from early to late speech preparation periods. The self-supervised architecture successfully identified multimodal activity that predicted upcoming behavior on a trial-by-trial basis. This approach could be applied to understanding the neural mechanisms driving variable behavior and symptoms in a wide range of neurological and psychiatric disorders. The combination of direct measures of neural activity and simple video data may be applied to developing technologies that estimate brain state from subtle bodily signals.
Affiliation(s)
- Arun Das
- Secure AI and Autonomy Laboratory, University of Texas at San Antonio, San Antonio, TX, United States
- UPMC Hillman Cancer Center, University of Pittsburgh Medical Center, Pittsburgh, PA, United States
- Jeffrey Mock
- Cognitive Neuroscience Laboratory, University of Texas at San Antonio, San Antonio, TX, United States
- Farzan Irani
- Department of Communication Disorders, Texas State University, San Marcos, TX, United States
- Yufei Huang
- UPMC Hillman Cancer Center, University of Pittsburgh Medical Center, Pittsburgh, PA, United States
- Peyman Najafirad
- Secure AI and Autonomy Laboratory, University of Texas at San Antonio, San Antonio, TX, United States
- Edward Golob
- Cognitive Neuroscience Laboratory, University of Texas at San Antonio, San Antonio, TX, United States
11
Rostami M, Oussalah M. A novel explainable COVID-19 diagnosis method by integration of feature selection with random forest. Inform Med Unlocked 2022; 30:100941. PMID: 35399333. PMCID: PMC8985417. DOI: 10.1016/j.imu.2022.100941.
Abstract
Several artificial intelligence-based models have been developed for COVID-19 diagnosis. In spite of the promise of artificial intelligence, very few models bridge the gap between traditional human-centered diagnosis and the potential future of machine-centered disease diagnosis. Under the concept of human-computer interaction design, this study proposes a new explainable artificial intelligence method that exploits graph analysis for feature visualization and optimization for COVID-19 diagnosis from blood test samples. In the developed model, an explainable decision forest classifier is employed for COVID-19 classification based on routinely available patient blood test data. The approach enables the clinician to use the decision tree and feature visualization to guide the explainability and interpretability of the prediction model. By utilizing this novel feature selection phase, the proposed diagnosis model not only improves diagnosis accuracy but also decreases execution time.
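The paper's graph-based feature selection is not reproduced here; a greedy relevance-minus-redundancy sketch (in the spirit of mRMR) conveys the same idea of keeping informative, non-redundant blood-test features before training the forest. The scoring rule, data, and feature count below are illustrative assumptions only.

```python
import numpy as np

def select_features(X, y, k):
    """Greedy relevance-redundancy feature selection (mRMR-style sketch).

    Repeatedly picks the feature maximizing |corr(feature, target)| minus
    the mean |corr| with already-selected features, a rough stand-in for
    the paper's graph-based selection phase.
    """
    n_features = X.shape[1]
    relevance = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_features)])
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        scores = []
        for j in range(n_features):
            if j in selected:
                scores.append(-np.inf)
                continue
            redundancy = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1]) for s in selected])
            scores.append(relevance[j] - redundancy)
        selected.append(int(np.argmax(scores)))
    return selected

rng = np.random.default_rng(4)
n = 200
informative = rng.standard_normal(n)
y = (informative > 0).astype(float)      # toy binary diagnosis label
X = np.column_stack([
    informative + 0.1 * rng.standard_normal(n),  # feature 0: informative
    informative + 0.1 * rng.standard_normal(n),  # feature 1: redundant copy
    rng.standard_normal(n),                       # feature 2: uncorrelated noise
])
chosen = select_features(X, y, k=2)
```

With one of the two near-duplicate informative features selected first, the redundancy penalty steers the second pick away from its copy, exactly the pruning effect that shrinks the classifier's input and execution time.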
Affiliation(s)
- Mehrdad Rostami
- Centre for Machine Vision and Signal Processing, Faculty of Information Technology, University of Oulu, Oulu, Finland
- Mourad Oussalah
- Centre for Machine Vision and Signal Processing, Faculty of Information Technology, University of Oulu, Oulu, Finland
- Research Unit of Medical Imaging, Physics, and Technology, Faculty of Medicine, University of Oulu, Finland
12
Samuel OW, Asogbon MG, Ejay E, Geng Y, Lopez-Delis A, Jarrah YA, Idowu OP, Chen S, Fang P, Li G. A Low-rank Spatiotemporal based EEG Multi-Artifacts Cancellation Method for Enhanced ConvNet-DL's Motor Imagery Characterization. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:791-794. PMID: 34891409. DOI: 10.1109/embc46164.2021.9629547.
Abstract
Multi-channel electroencephalograph (EEG) signals are an important source of neural information for decoding motor imagery (MI) limb movement intent. The decoded MI movement intent often serves as a potential control input for brain-computer interface (BCI) based rehabilitation robots. However, the presence of multiple dynamic artifacts in EEG signals poses a serious processing challenge that affects BCI systems in practical settings. Hence, this study proposes a hybrid approach based on a low-rank spatiotemporal filtering technique for the concurrent elimination of multiple EEG artifacts. Afterwards, a convolutional neural network based deep learning model (ConvNet-DL) that extracts neural information from the cleaned EEG signal for MI task decoding was built. The proposed method was studied in comparison with existing artifact removal methods using EEG signals from transhumeral amputees who performed five different MI tasks. Remarkably, the proposed method led to significant improvements in MI task decoding accuracy for the ConvNet-DL model in the range of 8.00% to 13.98%, while up to a 14.38% increase was recorded in the Matthews correlation coefficient (MCC) at p < 0.05. Also, a signal-to-error ratio of more than 11 dB was recorded with the proposed method. Clinical relevance: This study showed that a combination of the proposed hybrid EEG artifact removal method and ConvNet-DL can significantly improve the decoding accuracy of MI upper-limb movement tasks. Our findings may provide a potential control input for BCI rehabilitation robotic systems.
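The paper's low-rank spatiotemporal filter is not specified in the abstract; a truncated SVD over a multichannel EEG window is the simplest low-rank stand-in, shown here on synthetic data where the clean signal is known. The channel count, noise level, and rank-1 source model are assumptions for illustration.

```python
import numpy as np

def lowrank_clean(eeg, rank):
    """Rank-`rank` truncated-SVD approximation of a multichannel EEG window.

    A rough stand-in for low-rank spatiotemporal filtering: activity shared
    across channels concentrates in the leading singular components, while
    weak unstructured noise is spread across the discarded ones.
    """
    u, s, vt = np.linalg.svd(eeg, full_matrices=False)
    return (u[:, :rank] * s[:rank]) @ vt[:rank]

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 256)
source = np.sin(2 * np.pi * 10 * t)            # one shared underlying source
mixing = rng.standard_normal((8, 1))            # projected onto 8 channels
clean = mixing @ source[None, :]
noisy = clean + 0.3 * rng.standard_normal((8, 256))

denoised = lowrank_clean(noisy, rank=1)
err_before = np.linalg.norm(noisy - clean)
err_after = np.linalg.norm(denoised - clean)
```

In practice the retained rank (and any temporal weighting) would be chosen per window; here rank 1 matches the known source structure, so truncation removes most of the additive noise.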