1
Pancholi S, Wachs JP, Duerstock BS. Use of Artificial Intelligence Techniques to Assist Individuals with Physical Disabilities. Annu Rev Biomed Eng 2024;26:1-24. PMID: 37832939; DOI: 10.1146/annurev-bioeng-082222-012531.
Abstract
Assistive technologies (AT) enable people with disabilities to perform activities of daily living more independently, have greater access to community and healthcare services, and be more productive performing educational and/or employment tasks. Integrating artificial intelligence (AI) with various agents, including electronics, robotics, and software, has revolutionized AT, resulting in groundbreaking technologies such as mind-controlled exoskeletons, bionic limbs, intelligent wheelchairs, and smart home assistants. This article provides a review of various AI techniques that have helped those with physical disabilities, including brain-computer interfaces, computer vision, natural language processing, and human-computer interaction. The current challenges and future directions for AI-powered advanced technologies are also addressed.
Affiliation(s)
- Sidharth Pancholi
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, Indiana, USA
- Juan P Wachs
- School of Industrial Engineering, Purdue University, West Lafayette, Indiana, USA
- Bradley S Duerstock
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, Indiana, USA
- School of Industrial Engineering, Purdue University, West Lafayette, Indiana, USA
2
Blanco-Díaz CF, Guerrero-Mendez CD, Delisle-Rodriguez D, Jaramillo-Isaza S, Ruiz-Olaya AF, Frizera-Neto A, Ferreira de Souza A, Bastos-Filho T. Evaluation of temporal, spatial and spectral filtering in CSP-based methods for decoding pedaling-based motor tasks using EEG signals. Biomed Phys Eng Express 2024;10:035003. PMID: 38417162; DOI: 10.1088/2057-1976/ad2e35.
Abstract
Stroke is a neurological syndrome that usually causes a loss of voluntary control of lower/upper body movements, making it difficult for affected individuals to perform Activities of Daily Living (ADLs). Brain-Computer Interfaces (BCIs) combined with robotic systems, such as Motorized Mini Exercise Bikes (MMEBs), have enabled the rehabilitation of people with disabilities by decoding their actions and executing a motor task. However, Electroencephalography (EEG)-based BCIs are affected by physiological and non-physiological artifacts, so movement discrimination from EEG becomes challenging, even for pedaling tasks, which have not been well explored in the literature. In this study, Common Spatial Patterns (CSP)-based methods were proposed to classify pedaling motor tasks. To address this, Filter Bank Common Spatial Patterns (FBCSP) and Filter Bank Common Spatial-Spectral Patterns (FBCSSP) were implemented with different spatial filtering configurations, varying the time segment and the filter bank combination for each method, to decode pedaling tasks. An in-house EEG dataset of pedaling tasks was recorded from 8 participants. The best configuration corresponds to a filter bank with two filters (8-19 Hz and 19-30 Hz), a time window between 1.5 and 2.5 s after the cue, and two spatial filters, providing an accuracy of approximately 0.81, False Positive Rates lower than 0.19, and a Kappa index of 0.61. This work shows that EEG oscillatory patterns during pedaling can be accurately classified using machine learning; the method could therefore be applied in rehabilitation contexts, such as MMEB-based BCIs, in the future.
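For readers unfamiliar with the CSP step underlying FBCSP/FBCSSP, a minimal two-class CSP with log-variance features can be sketched as follows (synthetic data; an illustrative sketch only, not the authors' implementation):

```python
import numpy as np

def csp_filters(trials_a, trials_b, n_filters=2):
    """Compute CSP spatial filters for two classes of EEG trials.

    trials_*: (n_trials, n_channels, n_samples) arrays.
    Returns (2*n_filters, n_channels): filters maximizing class-A
    variance first, then those maximizing class-B variance.
    """
    def mean_cov(trials):
        covs = [x @ x.T / np.trace(x @ x.T) for x in trials]
        return np.mean(covs, axis=0)

    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    # Eigendecomposition of (ca + cb)^-1 ca: eigenvalues in (0, 1)
    # measure the share of variance explained by class A.
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(ca + cb, ca))
    order = np.argsort(eigvals.real)[::-1]
    w = eigvecs[:, order].real.T          # rows are spatial filters
    return np.vstack([w[:n_filters], w[-n_filters:]])

def log_var_features(trials, filters):
    """Project trials through the filters and take normalized log-variance."""
    projected = np.einsum('fc,tcs->tfs', filters, trials)
    var = projected.var(axis=2)
    return np.log(var / var.sum(axis=1, keepdims=True))

# Synthetic data: class A has extra variance on channel 0, class B on channel 1
rng = np.random.default_rng(0)
a = rng.normal(size=(30, 4, 128)); a[:, 0] *= 3.0
b = rng.normal(size=(30, 4, 128)); b[:, 1] *= 3.0
W = csp_filters(a, b)
fa, fb = log_var_features(a, W), log_var_features(b, W)
print(W.shape)  # (4, 4)
```

In a filter-bank variant such as FBCSP, this computation is simply repeated once per band-pass filter (e.g., 8-19 Hz and 19-30 Hz, as above) and the resulting features are concatenated before classification.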
Affiliation(s)
- Cristian Felipe Blanco-Díaz
- Postgraduate Program in Electrical Engineering, Federal University of Espirito Santo (UFES), 29075-910 Vitória, Brazil
- Faculty of Mechanical, Electronics and Biomedical Engineering, Antonio Nariño University, Bogotá D.C, Colombia
- Cristian David Guerrero-Mendez
- Postgraduate Program in Electrical Engineering, Federal University of Espirito Santo (UFES), 29075-910 Vitória, Brazil
- Faculty of Mechanical, Electronics and Biomedical Engineering, Antonio Nariño University, Bogotá D.C, Colombia
- Andrés Felipe Ruiz-Olaya
- Faculty of Mechanical, Electronics and Biomedical Engineering, Antonio Nariño University, Bogotá D.C, Colombia
- Anselmo Frizera-Neto
- Postgraduate Program in Electrical Engineering, Federal University of Espirito Santo (UFES), 29075-910 Vitória, Brazil
- Teodiano Bastos-Filho
- Postgraduate Program in Electrical Engineering, Federal University of Espirito Santo (UFES), 29075-910 Vitória, Brazil
3
Lee J, Miri S, Bayro A, Kim M, Jeong H, Yeo WH. Biosignal-integrated robotic systems with emerging trends in visual interfaces: A systematic review. Biophys Rev 2024;5:011301. PMID: 38510371; PMCID: PMC10903439; DOI: 10.1063/5.0185568.
Abstract
Human-machine interfaces (HMIs) are currently a rapidly expanding area of research. Interestingly, the human user does not readily observe the interface itself: interactions between the machine and the electrical signals from the user's body are obscured by complex control algorithms. The result is effectively a one-way street, in which data is transmitted only from human to machine. Thus, a gap remains in the literature: how can information be effectively conveyed to the user to enable mutual understanding between humans and machines? This paper reviews recent advancements in biosignal-integrated wearable robotics, with a particular emphasis on "visualization": the presentation of relevant data, statistics, and visual feedback to the user. The review covers various signals of interest, such as electroencephalograms and electromyograms, and explores novel sensor architectures and key materials. Recent developments in wearable robotics are examined from control and mechanical design perspectives. Additionally, we discuss current visualization methods and outline the field's future direction. While much of the HMI field focuses on biomedical and healthcare applications, such as rehabilitation of spinal cord injury and stroke patients, this paper also covers less common applications in manufacturing, defense, and other domains.
Affiliation(s)
- Sina Miri
- Department of Mechanical and Industrial Engineering, The University of Illinois at Chicago, Chicago, Illinois 60607, USA
- Allison Bayro
- School of Biological and Health Systems Engineering, Ira A. Fulton Schools of Engineering, Arizona State University, Tempe, Arizona 85287, USA
- Myunghee Kim
- Department of Mechanical and Industrial Engineering, The University of Illinois at Chicago, Chicago, Illinois 60607, USA
- Heejin Jeong
- Author to whom correspondence should be addressed.
- Woon-Hong Yeo
- Author to whom correspondence should be addressed.
4
Park J, Lee Y, Cho S, Choe A, Yeom J, Ro YG, Kim J, Kang DH, Lee S, Ko H. Soft Sensors and Actuators for Wearable Human-Machine Interfaces. Chem Rev 2024;124:1464-1534. PMID: 38314694; DOI: 10.1021/acs.chemrev.3c00356.
Abstract
Haptic human-machine interfaces (HHMIs) combine tactile sensation and haptic feedback to allow humans to interact closely with machines and robots, providing immersive experiences and convenient lifestyles. Significant progress has been made in developing wearable sensors that accurately detect physical and electrophysiological stimuli with improved softness, functionality, reliability, and selectivity. In addition, soft actuating systems have been developed to provide high-quality haptic feedback by precisely controlling force, displacement, frequency, and spatial resolution. In this Review, we discuss the latest technological advances of soft sensors and actuators for the demonstration of wearable HHMIs. We particularly focus on highlighting material and structural approaches that enable desired sensing and feedback properties necessary for effective wearable HHMIs. Furthermore, promising practical applications of current HHMI technology in various areas such as the metaverse, robotics, and user-interactive devices are discussed in detail. Finally, this Review further concludes by discussing the outlook for next-generation HHMI technology.
Affiliation(s)
- Jonghwa Park, Youngoh Lee, Seungse Cho, Ayoung Choe, Jeonghee Yeom, Yun Goo Ro, Jinyoung Kim, Dong-Hee Kang, Seungjae Lee, Hyunhyub Ko
- School of Energy and Chemical Engineering, Ulsan National Institute of Science and Technology (UNIST), Ulsan Metropolitan City 44919, Republic of Korea
5
Tian Y, Vaskov AK, Adidharma W, Cederna PS, Kemp SW. Merging Humans and Neuroprosthetics through Regenerative Peripheral Nerve Interfaces. Semin Plast Surg 2024;38:10-18. PMID: 38495064; PMCID: PMC10942838; DOI: 10.1055/s-0044-1779028.
Abstract
Limb amputations can be devastating and significantly affect an individual's independence, leading to functional and psychosocial challenges in nearly 2 million people in the United States alone. Over the past decade, robotic devices driven by neural signals such as neuroprostheses have shown great potential to restore the lost function of limbs, allowing amputees to regain movement and sensation. However, current neuroprosthetic interfaces have challenges in both signal quality and long-term stability. To overcome these limitations and work toward creating bionic limbs, the Neuromuscular Laboratory at University of Michigan Plastic Surgery has developed the Regenerative Peripheral Nerve Interface (RPNI). This surgical construct embeds a transected peripheral nerve into a free muscle graft, effectively amplifying small peripheral nerve signals to provide enhanced control signals for a neuroprosthetic limb. Furthermore, the RPNI has the potential to provide sensory feedback to the user and facilitate neuroprosthesis embodiment. This review focuses on the animal studies and clinical trials of the RPNI to recapitulate the promising trajectory toward neurobionics where the boundary between an artificial device and the human body becomes indistinct. This paper also sheds light on the prospects of the improvement and dissemination of the RPNI technology.
Affiliation(s)
- Yucheng Tian
- Department of Biomedical Engineering, University of Michigan, Ann Arbor, Michigan
- Alex K. Vaskov
- Section of Plastic Surgery, Department of Surgery, University of Michigan, Ann Arbor, Michigan
- Widya Adidharma
- Section of Plastic Surgery, Department of Surgery, University of Michigan, Ann Arbor, Michigan
- Paul S. Cederna
- Department of Biomedical Engineering, University of Michigan, Ann Arbor, Michigan
- Section of Plastic Surgery, Department of Surgery, University of Michigan, Ann Arbor, Michigan
- Stephen W.P. Kemp
- Department of Biomedical Engineering, University of Michigan, Ann Arbor, Michigan
- Section of Plastic Surgery, Department of Surgery, University of Michigan, Ann Arbor, Michigan
6
Rakhmatulin I, Dao MS, Nassibi A, Mandic D. Exploring Convolutional Neural Network Architectures for EEG Feature Extraction. Sensors (Basel) 2024;24:877. PMID: 38339594; PMCID: PMC10856895; DOI: 10.3390/s24030877.
Abstract
The main purpose of this paper is to provide information on how to create a convolutional neural network (CNN) for extracting features from EEG signals. Our task was to understand the primary aspects of creating and fine-tuning CNNs for various application scenarios. We considered the characteristics of EEG signals, coupled with an exploration of various signal processing and data preparation techniques. These techniques include noise reduction, filtering, encoding, decoding, and dimension reduction, among others. In addition, we conduct an in-depth analysis of well-known CNN architectures, categorizing them into four distinct groups: standard implementation, recurrent convolutional, decoder architecture, and combined architecture. This paper further offers a comprehensive evaluation of these architectures, covering accuracy metrics, hyperparameters, and an appendix that contains a table outlining the parameters of commonly used CNN architectures for feature extraction from EEG signals.
Affiliation(s)
- Ildar Rakhmatulin
- Department of Electrical and Electronic Engineering, Imperial College London, London SW7 2AZ, UK
- Minh-Son Dao
- National Institute of Information and Communications Technology (NICT), Tokyo 184-0015, Japan
- Amir Nassibi
- Department of Electrical and Electronic Engineering, Imperial College London, London SW7 2AZ, UK
- Danilo Mandic
- Department of Electrical and Electronic Engineering, Imperial College London, London SW7 2AZ, UK
7
Miao M, Yang Z, Zeng H, Zhang W, Xu B, Hu W. Explainable cross-task adaptive transfer learning for motor imagery EEG classification. J Neural Eng 2023;20:066021. PMID: 37963394; DOI: 10.1088/1741-2552/ad0c61.
Abstract
Objective: In the field of motor imagery (MI) electroencephalography (EEG)-based brain-computer interfaces, deep transfer learning (TL) has proven to be an effective tool for addressing the limited availability of subject-specific data when training robust deep learning (DL) models. Although considerable progress has been made in cross-subject/session and cross-device scenarios, the more challenging problem of cross-task deep TL remains largely unexplored. Approach: We propose a novel explainable cross-task adaptive TL method for MI EEG decoding. First, similarity analysis and data alignment are performed for EEG data from motor execution (ME) and MI tasks. Then, the MI EEG decoding model is obtained via pre-training with extensive ME EEG data and fine-tuning with partial MI EEG data. Finally, expected-gradient-based post-hoc explainability analysis is conducted to visualize important temporal-spatial features. Main results: Extensive experiments are conducted on one large ME EEG dataset (High-Gamma) and two large MI EEG datasets (OpenBMI and GIST). The best average classification accuracy of our method reaches 80.00% and 72.73% for OpenBMI and GIST, respectively, outperforming several state-of-the-art algorithms. In addition, the results of the explainability analysis further validate the correlation between ME and MI EEG data and the effectiveness of ME/MI cross-task adaptation. Significance: This paper confirms that MI EEG decoding can be facilitated by pre-existing ME EEG data, which largely relaxes the constraint on training samples for MI EEG decoding and is important in a practical sense.
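The pre-train/fine-tune idea in this abstract can be miniaturized as follows, with a plain logistic-regression classifier standing in for the deep network and synthetic "ME"/"MI" data; none of this reflects the authors' actual architecture or datasets:

```python
import numpy as np

def train_logistic(x, y, w=None, lr=0.1, steps=200):
    """Logistic regression by gradient descent; pass w to fine-tune it."""
    if w is None:
        w = np.zeros(x.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-x @ w))       # sigmoid predictions
        w -= lr * x.T @ (p - y) / len(y)       # cross-entropy gradient step
    return w

rng = np.random.default_rng(1)
w_true = np.array([2.0, -1.0, 0.5])

# Plentiful "ME" data; scarce "MI" data from a slightly shifted distribution
x_me = rng.normal(size=(1000, 3))
y_me = (x_me @ w_true + 0.3 * rng.normal(size=1000) > 0).astype(float)
x_mi = rng.normal(size=(20, 3))
y_mi = (x_mi @ (w_true + 0.2) + 0.3 * rng.normal(size=20) > 0).astype(float)

w_pre = train_logistic(x_me, y_me)                           # pre-train on ME
w_ft = train_logistic(x_mi, y_mi, w=w_pre.copy(), steps=50)  # fine-tune on MI

x_test = rng.normal(size=(500, 3))
y_test = (x_test @ (w_true + 0.2) > 0).astype(float)
acc = np.mean((x_test @ w_ft > 0) == y_test)
print(round(float(acc), 2))
```

The transfer step is simply that fine-tuning starts from the pre-trained weights instead of zeros, so only a few gradient steps on the scarce target-task data are needed.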
Affiliation(s)
- Minmin Miao
- School of Information Engineering, Huzhou University, Huzhou, People's Republic of China
- Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, People's Republic of China
- Zhong Yang
- School of Information Engineering, Huzhou University, Huzhou, People's Republic of China
- Hong Zeng
- School of Instrument Science and Engineering, Southeast University, Nanjing, People's Republic of China
- Wenbin Zhang
- College of Computer and Information, Hohai University, Nanjing, People's Republic of China
- Baoguo Xu
- School of Instrument Science and Engineering, Southeast University, Nanjing, People's Republic of China
- Wenjun Hu
- School of Information Engineering, Huzhou University, Huzhou, People's Republic of China
- Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, People's Republic of China
8
Hooks K, El-Said R, Fu Q. Decoding reach-to-grasp from EEG using classifiers trained with data from the contralateral limb. Front Hum Neurosci 2023;17:1302647. PMID: 38021246; PMCID: PMC10663285; DOI: 10.3389/fnhum.2023.1302647.
Abstract
Fundamental to human movement is the ability to interact with objects in our environment. How one reaches for an object depends on the object's shape and the intended interaction afforded by the object, e.g., grasp and transport. Extensive research has revealed that the motor intention of reach-to-grasp can be decoded from cortical activity using EEG signals. The goal of the present study is to determine the extent to which information encoded in the EEG signals is shared between the two limbs, enabling cross-hand decoding. We performed an experiment in which human subjects (n = 10) interacted with a novel object with multiple affordances using either the right or the left hand. The object had two vertical handles attached to a horizontal base. A visual cue instructed which action (lift or touch) to perform and which handle (left or right) to use on each trial. EEG was recorded and processed from bilateral frontal-central-parietal regions (30 channels). We trained linear discriminant analysis (LDA) classifiers using data from trials performed by one limb and tested the classification accuracy using data from trials performed by the contralateral limb. We found that the type of hand-object interaction could be decoded with approximately 59% and 69% peak accuracy in the planning and execution stages, respectively. Interestingly, the decoding accuracy of the reaching directions depended on how EEG channels in the testing dataset were spatially mirrored, and on whether directions were labeled in extrinsic (object-centered) or intrinsic (body-centered) coordinates.
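The cross-limb transfer scheme described above (train on one hand, test on the spatially mirrored channels of the other) can be sketched with synthetic data and a minimal two-class LDA; this is an illustrative stand-in, not the authors' pipeline:

```python
import numpy as np

def lda_fit(x, y):
    """Two-class linear discriminant analysis: returns (weights, bias)."""
    x0, x1 = x[y == 0], x[y == 1]
    m0, m1 = x0.mean(axis=0), x1.mean(axis=0)
    # Pooled within-class scatter matrix
    sw = np.cov(x0.T) * (len(x0) - 1) + np.cov(x1.T) * (len(x1) - 1)
    w = np.linalg.solve(sw, m1 - m0)
    b = -w @ (m0 + m1) / 2.0
    return w, b

def lda_predict(x, w, b):
    return (x @ w + b > 0).astype(int)

rng = np.random.default_rng(2)
n_ch = 6                          # channels laid out left-to-right
mirror = np.arange(n_ch)[::-1]    # spatial mirroring = reversed channel order

# Right-hand trials: class difference expressed on the first three channels
mu = np.zeros(n_ch); mu[:3] = 1.5
x_right = np.vstack([rng.normal(size=(40, n_ch)),
                     rng.normal(size=(40, n_ch)) + mu])
y = np.r_[np.zeros(40, int), np.ones(40, int)]

# Left-hand trials: the same pattern on the opposite side of the "head"
x_left = np.vstack([rng.normal(size=(40, n_ch)),
                    rng.normal(size=(40, n_ch)) + mu[mirror]])

w, b = lda_fit(x_right, y)
acc_raw = np.mean(lda_predict(x_left, w, b) == y)
acc_mirrored = np.mean(lda_predict(x_left[:, mirror], w, b) == y)
print(acc_raw, acc_mirrored)
```

Because the discriminative pattern appears on homologous channels of the opposite hemisphere, reordering the test channels with `mirror` is what lets a classifier trained on one limb transfer to the other.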
Affiliation(s)
- Kevin Hooks
- Mechanical and Aerospace Engineering, University of Central Florida, Orlando, FL, United States
- Refaat El-Said
- College of Medicine, University of Central Florida, Orlando, FL, United States
- Qiushi Fu
- Mechanical and Aerospace Engineering, University of Central Florida, Orlando, FL, United States
- Biionix Cluster, University of Central Florida, Orlando, FL, United States
9
Zhang R, Liu G, Wen Y, Zhou W. Self-attention-based convolutional neural network and time-frequency common spatial pattern for enhanced motor imagery classification. J Neurosci Methods 2023;398:109953. PMID: 37611877; DOI: 10.1016/j.jneumeth.2023.109953.
Abstract
BACKGROUND: Motor imagery (MI)-based brain-computer interfaces (BCIs) have promising potential in the field of neuro-rehabilitation. However, due to individual variations in active brain regions during MI tasks, decoding MI EEG signals remains challenging, and improved classification performance is needed for practical application. NEW METHOD: This study proposes a self-attention-based convolutional neural network (CNN) in conjunction with a time-frequency common spatial pattern (TFCSP) for enhanced MI classification. Because of the limited availability of training data, a data augmentation strategy is employed to expand the MI EEG datasets. The self-attention-based CNN is trained to automatically extract temporal and spatial information from the EEG signals, with the self-attention module selecting active channels by calculating EEG channel weights. TFCSP is further implemented to extract multiscale time-frequency-space features from the EEG data. Finally, the EEG features derived from TFCSP are concatenated with those from the self-attention-based CNN for MI classification. RESULTS: The proposed method is evaluated on two publicly accessible datasets, BCI Competition IV IIa and BCI Competition III IIIa, yielding mean accuracies of 79.28% and 86.39%, respectively. CONCLUSIONS: Compared with state-of-the-art methods, our approach achieves superior classification accuracy. A self-attention-based CNN combined with TFCSP can make full use of the time-frequency-space information in the EEG and enhance classification performance.
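To illustrate what "calculating EEG channel weights" via self-attention can look like, here is a toy scaled-dot-product sketch over channels (random, untrained projections; this is a generic illustration, not the paper's trained CNN module):

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def channel_attention_weights(x, wq, wk):
    """Scaled dot-product self-attention across EEG channels.

    x: (n_channels, n_samples) single trial; wq, wk: (n_samples, d).
    Returns one aggregate attention weight per channel (sums to 1).
    """
    q, k = x @ wq, x @ wk                     # per-channel query/key vectors
    scores = q @ k.T / np.sqrt(q.shape[1])    # (n_channels, n_channels)
    attn = softmax(scores, axis=1)            # each row sums to 1
    return attn.mean(axis=0)                  # column-wise average weight

rng = np.random.default_rng(3)
x = rng.normal(size=(8, 64))                  # 8 channels, 64 samples
x[2] *= 4.0                                   # one strongly active channel
wq = rng.normal(size=(64, 16)) / 8.0          # assumed random projections
wk = rng.normal(size=(64, 16)) / 8.0
w_ch = channel_attention_weights(x, wq, wk)
print(w_ch.shape)  # (8,)
```

In a trained network, the projections are learned so that the per-channel weights emphasize task-relevant electrodes; those weights can then gate or select channels before further feature extraction.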
Affiliation(s)
- Rui Zhang, Guoyang Liu, Yiming Wen, Weidong Zhou
- School of Microelectronics, Shandong University, Jinan 250100, China
10
Farabbi A, Figueiredo P, Ghiringhelli F, Mainardi L, Sanches JM, Moreno P, Santos-Victor J, Vourvopoulos A. Investigating the impact of visual perspective in a motor imagery-based brain-robot interaction: A pilot study with healthy participants. Front Neuroergon 2023;4:1080794. PMID: 38234500; PMCID: PMC10790830; DOI: 10.3389/fnrgo.2023.1080794.
Abstract
Introduction: Motor imagery (MI)-based brain-computer interfaces (BCIs) have gained attention for use in rehabilitation therapies, since they allow an external device to be controlled by brain activity, thereby promoting the brain plasticity mechanisms that could lead to motor recovery. Specifically, rehabilitation robotics can provide precision and consistency for movement exercises, while embodied robotics can provide sensory feedback that helps patients improve their motor skills and coordination. However, it is still not clear whether different types of visual feedback affect the elicited brain response and hence the effectiveness of MI-BCI for rehabilitation. Methods: In this paper, we compare two visual feedback strategies for controlling the movement of robotic arms through an MI-BCI system: (1) a first-person perspective, in which the user views the robot arms from their own vantage point; and (2) a third-person perspective, in which the subjects observe the robot from an external viewpoint. We studied 10 healthy subjects over three consecutive sessions. The electroencephalographic (EEG) signals were recorded and evaluated in terms of the power of the sensorimotor rhythms, their lateralization, and their spatial distribution. Results: Our results show that both feedback perspectives can elicit motor-related brain responses, without any significant differences between them. Moreover, the evoked responses remained consistent across all sessions, showing no significant differences between the first and the last session. Discussion: Overall, these results suggest that the type of perspective may not influence the brain responses during an MI-BCI task based on robotic feedback, although, given the limited sample size, more evidence is required. Finally, this study produced 180 labeled MI EEG datasets, publicly available for research purposes.
Affiliation(s)
- Andrea Farabbi
- B3Lab, Dipartimento di Elettronica, Informazione e Bioingegneria (DEIB), Politecnico di Milano, Milan, Italy
- Patricia Figueiredo
- Institute for Systems and Robotics-Lisboa, Instituto Superior Tecnico, Lisbon, Portugal
- Fabiola Ghiringhelli
- B3Lab, Dipartimento di Elettronica, Informazione e Bioingegneria (DEIB), Politecnico di Milano, Milan, Italy
- Luca Mainardi
- B3Lab, Dipartimento di Elettronica, Informazione e Bioingegneria (DEIB), Politecnico di Milano, Milan, Italy
- Joao Miguel Sanches
- Institute for Systems and Robotics-Lisboa, Instituto Superior Tecnico, Lisbon, Portugal
- Plinio Moreno
- Institute for Systems and Robotics-Lisboa, Instituto Superior Tecnico, Lisbon, Portugal
- Jose Santos-Victor
- Institute for Systems and Robotics-Lisboa, Instituto Superior Tecnico, Lisbon, Portugal
11
McFadden J. Consciousness: Matter or EMF? Front Hum Neurosci 2023;16:1024934. PMID: 36741784; PMCID: PMC9889563; DOI: 10.3389/fnhum.2022.1024934.
Abstract
Conventional theories of consciousness (ToCs) that assume the substrate of consciousness is the brain's neuronal matter fail to account for fundamental features of consciousness, such as the binding problem. Field ToCs propose that the substrate of consciousness is instead best accounted for by some kind of field in the brain. Electromagnetic (EM) ToCs propose that the conscious field is the brain's well-known EM field. EM ToCs were first proposed only around 20 years ago, primarily to account for the experimental discovery that synchronous neuronal firing was the strongest neural correlate of consciousness (NCC). Although EM ToCs are gaining increasing support, they remain controversial, are often ignored by neurobiologists and philosophers, and are passed over in most published reviews of consciousness. In this review I examine EM ToCs against established criteria for distinguishing between ToCs and demonstrate that they outperform all conventional ToCs, providing novel insights into the nature of consciousness as well as a feasible route toward building artificial consciousness.
12
Isokinetic Rehabilitation Trajectory Planning of an Upper Extremity Exoskeleton Rehabilitation Robot Based on a Multistrategy Improved Whale Optimization Algorithm. Symmetry (Basel) 2023. DOI: 10.3390/sym15010232.
Abstract
Upper extremity exoskeleton rehabilitation robots have become a significant piece of rehabilitation equipment, and planning their motion trajectories is essential in patient rehabilitation. In this paper, a multistrategy improved whale optimization algorithm (MWOA) is proposed for trajectory planning of upper extremity exoskeleton rehabilitation robots, with emphasis on isokinetic rehabilitation. First, a piecewise polynomial was used to construct a rough trajectory. To make the trajectory conform to human-like movement, a whale optimization algorithm (WOA) was employed to generate a bounded-jerk trajectory with minimum running time as the objective. The search performance of the WOA under complex constraints, including the search capability of trajectory-planning symmetry, was improved by the following strategies: a dual-population search with a new communication mechanism, to prevent falling into local optima; mutation centroid opposition-based learning, to improve population diversity; and an adaptive inertia weight, to balance exploration and exploitation. Simulation analysis showed that the MWOA generated a trajectory with a shorter run time and better symmetry and robustness than the WOA. Finally, a pilot rehabilitation session on a healthy volunteer using an upper extremity exoskeleton rehabilitation robot was completed safely and smoothly along the trajectory planned by the MWOA. The proposed algorithm thus provides a feasible scheme for isokinetic rehabilitation trajectory planning of upper extremity exoskeleton rehabilitation robots.
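For orientation, a minimal, generic WOA of the kind this paper improves upon can be sketched as below. The sphere function stands in for the trajectory cost (an assumption for illustration); the update rules follow the standard encircling/spiral/search phases, without the MWOA's added strategies:

```python
import numpy as np

def woa_minimize(f, dim, bounds, n_whales=20, iters=200, seed=0):
    """Minimal Whale Optimization Algorithm (encircle / spiral / search)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pos = rng.uniform(lo, hi, size=(n_whales, dim))
    best = min(pos, key=f).copy()
    for t in range(iters):
        a = 2.0 * (1.0 - t / iters)              # decreases linearly 2 -> 0
        for i in range(n_whales):
            A = 2.0 * a * rng.random(dim) - a
            C = 2.0 * rng.random(dim)
            if rng.random() < 0.5:
                if np.abs(A).max() < 1.0:        # exploit: encircle the best
                    pos[i] = best - A * np.abs(C * best - pos[i])
                else:                            # explore: follow a random whale
                    rand = pos[rng.integers(n_whales)]
                    pos[i] = rand - A * np.abs(C * rand - pos[i])
            else:                                # bubble-net spiral around best
                l = rng.uniform(-1.0, 1.0, dim)
                pos[i] = np.abs(best - pos[i]) * np.exp(l) * np.cos(2 * np.pi * l) + best
            pos[i] = np.clip(pos[i], lo, hi)
        cand = min(pos, key=f)
        if f(cand) < f(best):
            best = cand.copy()
    return best, f(best)

# Sphere function as a stand-in for the trajectory cost (time + jerk penalty)
best, val = woa_minimize(lambda x: float(np.sum(x ** 2)), dim=5, bounds=(-10.0, 10.0))
print(round(val, 6))
```

The MWOA variants described in the abstract (dual-population communication, centroid opposition-based mutation, adaptive inertia weight) would be layered on top of this basic loop.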
13
A Review of Brain Activity and EEG-Based Brain-Computer Interfaces for Rehabilitation Application. Bioengineering (Basel) 2022;9:768. PMID: 36550974; PMCID: PMC9774292; DOI: 10.3390/bioengineering9120768.
Abstract
Patients with severe CNS injuries struggle primarily with sensorimotor function and communication with the outside world. There is an urgent need for advanced neural rehabilitation and intelligent interaction technology to help patients with nerve injuries. Recent studies have established brain-computer interfaces (BCIs) to provide patients with appropriate interaction methods or more intelligent rehabilitation training. This paper reviews the most recent research on BCI-based non-invasive rehabilitation systems. Various endogenous and exogenous methods, along with their advantages, limitations, and challenges, are discussed. In addition, the paper discusses communication between severely paralyzed or locked-in patients and their surrounding environment through the various BCI modes, particularly brain-computer interaction systems utilizing exogenous (induced) EEG signals such as P300 and SSVEP. This discussion includes an examination of interfaces for collecting EEG signals, EEG components, and signal post-processing. Furthermore, the paper describes the development of natural interaction strategies, with a focus on signal acquisition, data processing, pattern recognition algorithms, and control techniques.
Collapse
|
14
|
EEG-Based Empathic Safe Cobot. MACHINES 2022. [DOI: 10.3390/machines10080603] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/10/2022]
Abstract
An empathic collaborative robot (cobot) was realized through the transmission of fear from a human agent to a robot agent. Such empathy was induced through an electroencephalographic (EEG) sensor worn by the human agent, thus realizing an empathic safe brain-computer interface (BCI). The empathic safe cobot reacts to the fear and in turn transmits it to the human agent, forming a social circle of empathy and safety. A first randomized, controlled experiment involved two groups of 50 healthy subjects (100 total subjects) to measure the EEG signal in the presence or absence of a frightening event. A second randomized, controlled experiment on two groups of 50 different healthy subjects (100 total subjects) exposed the subjects to comfortable and uncomfortable movements of a cobot while the subjects' EEG signal was acquired. A spike in the subjects' EEG signal was observed in the presence of uncomfortable movement. Questionnaires distributed to the subjects confirmed the results of the EEG measurements. In the controlled laboratory setting, all experiments were statistically significant. In the first experiment, the peak EEG signal measured just after the activating event was greater than the resting EEG signal (p < 0.001). In the second experiment, the peak EEG signal measured just after the uncomfortable movement of the cobot was greater than the EEG signal measured under comfortable movement of the cobot (p < 0.001). In conclusion, within the isolated and constrained experimental environment, the results were satisfactory.
Collapse
|
15
|
Niu J, Jiang N. Pseudo-online detection and classification for upper-limb movements. J Neural Eng 2022; 19. [PMID: 35688127 DOI: 10.1088/1741-2552/ac77be] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/02/2022] [Accepted: 06/10/2022] [Indexed: 02/08/2023]
Abstract
Objective. This study analyzed detection (movement vs. non-movement) and classification (different types of movements) to decode upper-limb movement volitions in a pseudo-online fashion. Approach. Nine healthy subjects executed four self-initiated movements: left wrist extension, right wrist extension, left index finger extension, and right index finger extension. For detection, we investigated the performance of three individual classifiers (support vector machine (SVM), EEGNET, and Riemannian-geometry-featured SVM) on three frequency bands (0.05-5 Hz, 5-40 Hz, 0.05-40 Hz). The best combinations of frequency band and classifier were used to construct an ensemble processing pipeline based on majority voting. For classification, we used an adaptively boosted Riemannian geometry model to differentiate contralateral and ipsilateral movements. Main results. The ensemble model achieved a 79.6 ± 8.8% true positive rate and 3.1 ± 1.2 false positives per minute with 75.3 ± 112.6 ms latency on a pseudo-online detection task. The subsequent classification achieved around 67% accuracy in differentiating contralateral from ipsilateral movements. Significance. The newly proposed ensemble method and pseudo-online testing procedure could provide a robust brain-computer interface design for movement decoding.
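The majority-voting step of such an ensemble is straightforward to sketch. The detector outputs below are made-up stand-ins for the per-band classifiers, not results from the paper.

```python
import numpy as np

def majority_vote(preds):
    """Fuse binary detections from several classifiers (one row each) by
    majority: a window is flagged when more than half the detectors agree."""
    preds = np.asarray(preds)
    return (2 * preds.sum(axis=0) > preds.shape[0]).astype(int)

# Three hypothetical detectors (e.g., one per frequency band) over five windows:
preds = [[1, 0, 1, 0, 1],
         [1, 1, 0, 0, 1],
         [0, 1, 1, 0, 1]]
fused = majority_vote(preds)
```

With an odd number of detectors the vote is never tied, which is one reason band/classifier ensembles are often built in threes.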
Collapse
Affiliation(s)
- Jiansheng Niu
- Department of Systems Design Engineering, University of Waterloo, Waterloo, Ontario, Canada
| | - Ning Jiang
- National Clinical Research Center for Geriatric, West China Hospital, Sichuan University, Chengdu, Sichuan, People's Republic of China; Med-X Center for Manufacturing, Sichuan University, Chengdu, Sichuan, People's Republic of China
| |
Collapse
|
16
|
Hari Hara Nithin Reddy M. Brain Computer Interface Drone. ARTIF INTELL 2022. [DOI: 10.5772/intechopen.97558] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Brain-computer interfaces have emerged from experiments by cognitive scientists and researchers probing the workings of the human brain, where neuroscience, signal processing, machine learning, and the physical sciences are blended together. The resulting neuroprostheses, neural spellers, bionic eyes, and prosthetic arms and legs have enabled people who are paralyzed to walk, people who are mute to express themselves and talk, people who are blind to see the world, and people who are deaf to hear. My main aim is to analyze the frequency-domain content of the brain signals of five subjects in their respective mental states using EEG, show how to control a DJI Tello drone using the Insight EEG headset, and then present the results and interpretation of the band power, FFT, and time-domain signal graphs of mental commands during live control of the drone.
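Band power of the kind plotted for the mental commands can be computed from the FFT as follows. The sampling rate, toy signal, and band edges are assumptions for illustration, not values from the chapter.

```python
import numpy as np

fs = 128                        # assumed sampling rate, Hz
t = np.arange(0, 4, 1 / fs)
# Toy "EEG": a 10 Hz alpha component plus a weaker 20 Hz beta component.
x = 1.0 * np.sin(2 * np.pi * 10 * t) + 0.3 * np.sin(2 * np.pi * 20 * t)

def band_power(x, fs, lo, hi):
    """Summed spectral power in the [lo, hi) Hz band from the FFT."""
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    mask = (freqs >= lo) & (freqs < hi)
    return float(psd[mask].sum())

alpha = band_power(x, fs, 8, 13)
beta = band_power(x, fs, 13, 30)
```

Comparing band powers across windows is the usual basis for the band-power graphs shown during live drone control.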
Collapse
|
17
|
De la Cruz-Sánchez BA, Arias-Montiel M, Lugo-González E. EMG-controlled hand exoskeleton for assisted bilateral rehabilitation. Biocybern Biomed Eng 2022. [DOI: 10.1016/j.bbe.2022.04.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
|
18
|
Toward a Brain-Computer Interface- and Internet of Things-Based Smart Ward Collaborative System Using Hybrid Signals. JOURNAL OF HEALTHCARE ENGINEERING 2022; 2022:6894392. [PMID: 35480157 PMCID: PMC9038386 DOI: 10.1155/2022/6894392] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/12/2022] [Accepted: 03/26/2022] [Indexed: 11/24/2022]
Abstract
This study proposes a brain-computer interface (BCI)- and Internet of Things (IoT)-based smart ward collaborative system using hybrid signals. The system is divided into a hybrid asynchronous electroencephalography (EEG)-, electrooculography (EOG)-, and gyro-based BCI control system and an IoT monitoring and management system. The hybrid BCI control system proposes a GUI paradigm with cursor movement. The user controls cursor area selection with the gyro and cursor clicks with blink-related EOG. Meanwhile, attention-related EEG signals are classified with a support vector machine (SVM) to make the final judgment. Requiring agreement between the cursor-area judgment and the attention-state judgment reduces the false operation rate of the hybrid BCI system. The accuracy of the hybrid BCI control system was 96.65 ± 1.44%, and the false operation rate and command response time were 0.89 ± 0.42 events/min and 2.65 ± 0.48 s, respectively. These results show the application potential of the hybrid BCI control system in daily tasks. In addition, we develop an architecture to connect intelligent things in a smart ward based on narrowband Internet of Things (NB-IoT) technology. The results demonstrate that our system provides superior communication transmission quality.
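The gating logic behind such a hybrid scheme can be sketched as a conjunction of the three channels: a command fires only when all modalities agree. The threshold value and signal names below are assumptions for illustration.

```python
def issue_command(area_selected, blink_click, attention_score, threshold=0.7):
    """Fire a command only when the gyro has selected a cursor area, an EOG
    blink has 'clicked', and the SVM-style attention confidence clears the
    threshold. Any single noisy channel is vetoed by the other two, which is
    how hybrid gating suppresses false operations."""
    return bool(area_selected and blink_click and attention_score >= threshold)
```

For example, a blink with low attention confidence is ignored rather than executed.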
Collapse
|
19
|
Identification of Lower-Limb Motor Tasks via Brain–Computer Interfaces: A Topical Overview. SENSORS 2022; 22:s22052028. [PMID: 35271175 PMCID: PMC8914806 DOI: 10.3390/s22052028] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/14/2021] [Revised: 02/11/2022] [Accepted: 02/23/2022] [Indexed: 02/01/2023]
Abstract
Recent engineering and neuroscience applications have led to the development of brain-computer interface (BCI) systems that improve the quality of life of people with motor disabilities. In the same area, a significant number of studies have identified or classified upper-limb movement intentions. By contrast, few works have addressed movement intention identification for the lower limbs. Nevertheless, lower-limb neurorehabilitation is a major topic in medical settings, as many people suffer from mobility problems in their lower limbs, including people diagnosed with neurodegenerative disorders such as multiple sclerosis and people with hemiplegia or quadriplegia. Conventional pattern recognition (PR) systems are among the most suitable computational tools for electroencephalography (EEG) signal analysis, as explicit knowledge of the features involved in the PR process is crucial both for improving signal classification performance and for providing interpretability. In this regard, there is a real need for overview and comparative studies that gather benchmark and state-of-the-art PR techniques, allowing a deeper understanding of them and a well-founded selection of a specific technique. This study conducted a topical overview of specialized papers covering lower-limb motor task identification through PR-based BCI/EEG signal analysis systems. To do so, we first established search terms and inclusion and exclusion criteria to find the most relevant papers on the subject. As a result, we identified the 22 most relevant papers. Next, we reviewed their experimental methodologies for recording EEG signals during the execution of lower-limb tasks. In addition, we reviewed the algorithms used in the preprocessing, feature extraction, and classification stages. Finally, we compared all the algorithms and determined which of them are the most suitable in terms of accuracy.
Collapse
|
20
|
Schultz JR, Slifkin AB, Schearer EM. Controlling an effector with eye movements: The effect of entangled sensory and motor responsibilities. PLoS One 2022; 17:e0263440. [PMID: 35113943 PMCID: PMC8812848 DOI: 10.1371/journal.pone.0263440] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2021] [Accepted: 01/20/2022] [Indexed: 11/19/2022] Open
Abstract
Restoring arm and hand function has been indicated by individuals with tetraplegia as one of the most important factors for regaining independence. The overall goal of our research is to develop assistive technologies that allow individuals with tetraplegia to control functional reaching movements. This study served as an initial step toward that goal by assessing the feasibility of using eye movements to control the motion of an effector in an experimental environment. We aimed to understand how additional motor requirements placed on the eyes affected eye-hand coordination during functional reaching. We were particularly interested in how eye fixation error was affected when the sensory and motor functions of the eyes were entangled by the additional motor responsibility. We recorded participants' eye and hand movements while they reached for targets on a monitor. We presented a cursor at the participant's point of gaze, which is analogous to the control of an assistive robot arm. To measure eye fixation error, we used an offline filter to extract fixations from the raw eye movement data and compared them to the locations of the targets presented on the monitor. The results show that not only are humans able to use eye movements to direct the cursor to a desired location (1.04 ± 0.15 cm error), but they can do so with error similar to that of the hand (0.84 ± 0.05 cm). In other words, despite the additional motor responsibility placed on the eyes during direct eye-movement control of an effector, the ability to coordinate functional reaching movements was unaffected. These outcomes support the efficacy of using the eyes as a direct command input for controlling movement.
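Offline fixation filters of this kind are commonly implemented as a dispersion-threshold (I-DT) algorithm. The sketch below is a generic version with assumed thresholds and units, not the authors' exact filter.

```python
import numpy as np

def _dispersion(win):
    # Dispersion of a gaze window: x-range plus y-range.
    return (win[:, 0].max() - win[:, 0].min()
            + win[:, 1].max() - win[:, 1].min())

def idt_fixations(gaze, max_dispersion=1.0, min_samples=4):
    """Dispersion-threshold (I-DT) fixation detection: grow a window while
    its dispersion stays under the threshold; emit (start, end) index pairs.
    Threshold and minimum duration are assumptions for this sketch."""
    gaze = np.asarray(gaze, dtype=float)
    fixations = []
    i, n = 0, len(gaze)
    while i + min_samples <= n:
        j = i + min_samples
        if _dispersion(gaze[i:j]) <= max_dispersion:
            while j < n and _dispersion(gaze[i:j + 1]) <= max_dispersion:
                j += 1
            fixations.append((i, j))
            i = j
        else:
            i += 1
    return fixations

# Two steady fixations separated by a rapid saccade:
gaze = [(0, 0)] * 8 + [(k, k) for k in range(1, 5)] + [(5, 5)] * 8
fixes = idt_fixations(gaze)
```

The fixation centroids would then be compared against target locations to obtain the fixation error reported above.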
Collapse
Affiliation(s)
- John R. Schultz
- Mechanical Engineering/Center for Human Machine Systems, Cleveland State University, Cleveland, Ohio, United States of America
| | - Andrew B. Slifkin
- Department of Psychology, Cleveland State University, Cleveland, Ohio, United States of America
| | - Eric M. Schearer
- Mechanical Engineering/Center for Human Machine Systems, Cleveland State University, Cleveland, Ohio, United States of America
| |
Collapse
|
21
|
Arif A, Jawad Khan M, Javed K, Sajid H, Rubab S, Naseer N, Irfan Khan T. Hemodynamic Response Detection Using Integrated EEG-fNIRS-VPA for BCI. COMPUTERS, MATERIALS & CONTINUA 2022; 70:535-555. [DOI: 10.32604/cmc.2022.018318] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/04/2021] [Accepted: 04/21/2021] [Indexed: 09/01/2023]
|
22
|
Emerging trends in BCI-robotics for motor control and rehabilitation. CURRENT OPINION IN BIOMEDICAL ENGINEERING 2021. [DOI: 10.1016/j.cobme.2021.100354] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
|
23
|
Babič J, Laffranchi M, Tessari F, Verstraten T, Novak D, Šarabon N, Ugurlu B, Peternel L, Torricelli D, Veneman JF. Challenges and solutions for application and wider adoption of wearable robots. WEARABLE TECHNOLOGIES 2021; 2:e14. [PMID: 38486636 PMCID: PMC10936284 DOI: 10.1017/wtc.2021.13] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/02/2021] [Revised: 08/25/2021] [Accepted: 09/18/2021] [Indexed: 03/17/2024]
Abstract
The science and technology of wearable robots are steadily advancing, and the use of such robots in our everyday life appears to be within reach. Nevertheless, widespread adoption of wearable robots should not be taken for granted, especially since many recent attempts to bring them to real-life applications resulted in mixed outcomes. The aim of this article is to address the current challenges that are limiting the application and wider adoption of wearable robots that are typically worn over the human body. We categorized the challenges into mechanical layout, actuation, sensing, body interface, control, human-robot interfacing and coadaptation, and benchmarking. For each category, we discuss specific challenges and the rationale for why solving them is important, followed by an overview of relevant recent works. We conclude with an opinion that summarizes possible solutions that could contribute to the wider adoption of wearable robots.
Collapse
Affiliation(s)
- Jan Babič
- Laboratory for Neuromechanics and Biorobotics, Department of Automation, Biocybernetics and Robotics, Jožef Stefan Institute, Ljubljana, Slovenia
| | - Matteo Laffranchi
- Rehab Technologies Lab, Istituto Italiano di Tecnologia, Genoa, Italy
| | - Federico Tessari
- Rehab Technologies Lab, Istituto Italiano di Tecnologia, Genoa, Italy
| | - Tom Verstraten
- Robotics & Multibody Mechanics Research Group, Vrije Universiteit Brussel and Flanders Make, Brussels, Belgium
| | - Domen Novak
- University of Wyoming, Laramie, Wyoming, USA
| | - Nejc Šarabon
- Faculty of Health Sciences, University of Primorska, Izola, Slovenia
| | - Barkan Ugurlu
- Biomechatronics Laboratory, Faculty of Engineering, Ozyegin University, Istanbul, Turkey
| | - Luka Peternel
- Delft Haptics Lab, Department of Cognitive Robotics, Delft University of Technology, Delft, The Netherlands
| | - Diego Torricelli
- Cajal Institute, Spanish National Research Council, Madrid, Spain
| | | |
Collapse
|
24
|
Altaheri H, Muhammad G, Alsulaiman M, Amin SU, Altuwaijri GA, Abdul W, Bencherif MA, Faisal M. Deep learning techniques for classification of electroencephalogram (EEG) motor imagery (MI) signals: a review. Neural Comput Appl 2021. [DOI: 10.1007/s00521-021-06352-5] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
|
25
|
Decoding the torque of lower limb joints from EEG recordings of pre-gait movements using a machine learning scheme. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.03.038] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
|
26
|
Zhang X, Li H, Lu Z, Yin G. Homology Characteristics of EEG and EMG for Lower Limb Voluntary Movement Intention. Front Neurorobot 2021; 15:642607. [PMID: 34220479 PMCID: PMC8249921 DOI: 10.3389/fnbot.2021.642607] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2020] [Accepted: 03/31/2021] [Indexed: 11/13/2022] Open
Abstract
In the field of lower limb exoskeletons, besides electromechanical system design and control, attention has been paid to linking exoskeleton robots to humans via electroencephalography (EEG) and electromyography (EMG). However, even state-of-the-art decoding of lower limb voluntary movement intention still faces many obstacles. In this work, focusing on the underlying mechanism, the homology of EEG and EMG in lower limb voluntary movement intention was investigated. A mathematical model of EEG and EMG was built based on this mechanism, consisting of a neural mass model (NMM), a neuromuscular junction model, an EMG generation model, a decoding model, and a musculoskeletal biomechanical model. The mechanism analysis and simulation results demonstrated that EEG and EMG signals are both excited by the same movement intention, with a difference in response time. To assess the proposed model, a synchronous acquisition system for EEG and EMG was constructed to analyze the homology and response time difference between EEG and EMG signals during limb movement intention. Wavelet coherence was used to analyze the internal correlation between EEG and EMG signals for the same limb movement intention. To further test this hypothesis, six subjects participated in the experiments. The experimental results demonstrated strong EEG-EMG coherence at 1 Hz around movement onset, with the phase of the EEG leading that of the EMG. Both the simulation and experimental results revealed that EEG and EMG are homologous and that the EEG response precedes the EMG response during limb movement intention. This work provides a theoretical basis for the feasibility of EEG-based pre-perception and for fused perception of EEG and EMG in human movement detection.
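The reported EEG-to-EMG lead can be illustrated with a cross-correlation lag estimate on synthetic signals. The shared 1 Hz drive and the 200 ms delay below are assumptions for the sketch, not the paper's measurements.

```python
import numpy as np

fs = 100                                   # assumed sampling rate, Hz
t = np.arange(0, 5, 1 / fs)
drive = np.sin(2 * np.pi * 1.0 * t)        # shared 1 Hz movement-related drive
delay = 20                                 # EMG trails EEG by 200 ms here
eeg = drive
emg = np.roll(drive, delay)                # wrap is seamless for whole cycles

def lag_samples(a, b):
    """Shift of b relative to a (in samples) that maximizes cross-correlation;
    a positive value means b lags a."""
    c = np.correlate(b, a, mode="full")
    return int(np.argmax(c)) - (len(a) - 1)

lag = lag_samples(eeg, emg)
```

A positive lag recovers the imposed 200 ms EEG lead; wavelet coherence additionally resolves this phase relation per frequency band.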
Collapse
Affiliation(s)
- Xiaodong Zhang
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China; Shaanxi Key Laboratory of Intelligent Robots, Xi'an Jiaotong University, Xi'an, China
| | - Hanzhe Li
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China
| | - Zhufeng Lu
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China
| | - Gui Yin
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China
| |
Collapse
|
27
|
Qiu S, Guo W, Zha F, Deng J, Wang X. Exoskeleton Active Walking Assistance Control Framework Based on Frequency Adaptive Dynamics Movement Primitives. Front Neurorobot 2021; 15:672582. [PMID: 34093160 PMCID: PMC8173117 DOI: 10.3389/fnbot.2021.672582] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/26/2021] [Accepted: 04/21/2021] [Indexed: 11/16/2022] Open
Abstract
This paper introduces a novel exoskeleton active walking assistance control framework based on frequency adaptive dynamics movement primitives (FADMPs). FADMPs are an online learning and prediction algorithm that estimates the fundamental frequency of a human joint trajectory, learns its shape, and predicts its future course during walking. The proposed framework is a model-based controller that relies on human joint torque estimation. The assistance torque provided by the exoskeleton is estimated with a human lower limb inverse dynamics model, which is sensitive to noise in the joint motion trajectory. To estimate a smooth joint torque profile, the joint motion trajectory must first be lowpass filtered. However, a lowpass filter introduces an inevitable phase delay in the filtered trajectory. Both simulations and experiments in this paper show that this phase delay significantly degrades the performance of exoskeleton active assistance. The FADMP-based framework aims to improve active assistance control by compensating for the phase delay. Both simulations and experiments on active walking assistance control show that performance can be further improved when the phase delay in the filtered trajectory is compensated for by FADMPs.
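The phase-delay problem is easy to demonstrate with a causal moving-average lowpass filter on a toy trajectory: offline, filtering forward and then backward cancels the delay, which is exactly what an online controller cannot do, and why prediction (as with FADMPs) is needed. The filter, rate, and signal below are assumptions for the sketch.

```python
import numpy as np

fs = 100
t = np.arange(0, 2, 1 / fs)
clean = np.sin(2 * np.pi * 2 * t)          # toy 2 Hz joint trajectory

def causal_ma(x, w):
    """Causal moving average: uses only past samples, so it delays the
    signal by roughly (w - 1) / 2 samples."""
    return np.convolve(x, np.ones(w) / w, mode="full")[:len(x)]

def lag_samples(a, b):
    """Shift of b relative to a (in samples) maximizing cross-correlation."""
    c = np.correlate(b, a, mode="full")
    return int(np.argmax(c)) - (len(a) - 1)

w = 11
online = causal_ma(clean, w)                             # delayed ~5 samples
offline = causal_ma(causal_ma(clean, w)[::-1], w)[::-1]  # forward-backward

online_delay = lag_samples(clean, online)
offline_delay = lag_samples(clean, offline)
```

The causal pass lags by about (w - 1)/2 samples, while the forward-backward pass has essentially zero net delay; an online controller must instead predict ahead to recover the lost phase.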
Collapse
Affiliation(s)
- Shiyin Qiu
- State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, China
| | - Wei Guo
- State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, China
| | - Fusheng Zha
- State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, China; Robotics Institute, Shenzhen Academy of Aerospace Technology, Shenzhen, China
| | - Jing Deng
- Robotics Institute, Shenzhen Academy of Aerospace Technology, Shenzhen, China
| | - Xin Wang
- Robotics Institute, Shenzhen Academy of Aerospace Technology, Shenzhen, China
| |
Collapse
|
28
|
Converging Robotic Technologies in Targeted Neural Rehabilitation: A Review of Emerging Solutions and Challenges. SENSORS 2021; 21:s21062084. [PMID: 33809721 PMCID: PMC8002299 DOI: 10.3390/s21062084] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/12/2021] [Revised: 03/05/2021] [Accepted: 03/11/2021] [Indexed: 11/17/2022]
Abstract
Recent advances in the field of neural rehabilitation, facilitated through technological innovation and improved neurophysiological knowledge of impaired motor control, have opened up new research directions. Such advances increase the relevance of existing interventions, as well as allow novel methodologies and technological synergies. New approaches attempt to partially overcome long-term disability caused by spinal cord injury, using either invasive bridging technologies or noninvasive human-machine interfaces. Muscular dystrophies benefit from electromyography and novel sensors that shed light on underlying neuromotor mechanisms in people with Duchenne. Novel wearable robotics devices are being tailored to specific patient populations, such as traumatic brain injury, stroke, and amputated individuals. In addition, developments in robot-assisted rehabilitation may enhance motor learning and generate movement repetitions by decoding the brain activity of patients during therapy. This is further facilitated by artificial intelligence algorithms coupled with faster electronics. The practical impact of integrating such technologies with neural rehabilitation treatment can be substantial. They can potentially empower nontechnically trained individuals, namely family members and professional carers, to alter the programming of neural rehabilitation robotic setups, to get actively involved, and to intervene promptly at the point of care. This narrative review considers existing and emerging neural rehabilitation technologies through the perspective of replacing or restoring functions, enhancing or improving natural neural output, and promoting or recruiting dormant neuroplasticity. Upon conclusion, we discuss future directions for neural rehabilitation research, diagnosis, and treatment based on the discussed technologies and their major roadblocks. This future may eventually become possible through technological evolution and convergence of mutually beneficial technologies to create hybrid solutions.
Collapse
|
29
|
A low-cost transradial prosthesis controlled by the intention of muscular contraction. Phys Eng Sci Med 2021; 44:229-241. [PMID: 33469856 DOI: 10.1007/s13246-021-00972-w] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2020] [Accepted: 01/07/2021] [Indexed: 10/22/2022]
Abstract
Persons with upper-limb amputations face severe problems due to a reduced ability to perform the activities of daily living. Prostheses controlled by electromyography (EMG) or other signals from sensors, switches, accelerometers, etc., can partially restore the lost capabilities of such individuals. However, these prostheses have several issues, such as high cost, limited functionality, unnatural control, slow operating speed, complexity, heavy weight, and large size. This paper proposes an affordable transradial prosthesis controlled by muscular contractions from user intention. A surface EMG sensor was fabricated specifically for capturing muscle contraction information from the residual forearm of subjects with amputation. An underactuated 3D-printed hand was developed with a prosthetic socket assembly to attach to the remaining upper limb of such subjects. The hand integrates an intuitive closed-loop control system that receives reference input from the designed sensor and feedback input from a force sensor installed at the thumb tip. The performance of the EMG sensor in detecting muscle contractions was compared with that of a traditional sensor. The designed sensor showed a good correlation (r > 0.93) with the conventional sensor and a better signal-to-noise ratio (SNR). Further, a successful trial of the developed hand prosthesis was conducted with five different subjects with transradial amputation. Users wearing the hand prototype were able to perform fast and delicate grasping of various objects. The implemented control system allowed prosthesis users to control the grasp force of the hand's fingers with their intended muscular contractions.
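The closed-loop behavior described, with a reference from the EMG sensor and feedback from the thumb-tip force sensor, can be sketched as a simple proportional loop on a first-order plant. The gain, plant dynamics, and units are assumptions for illustration, not the paper's controller.

```python
import numpy as np

def simulate_grip(ref, steps=50, gain=0.4):
    """Toy closed-loop grasp: each step, the controller corrects motor effort
    in proportion to the gap between the intention-derived reference force
    and the force measured at the thumb tip (idealized first-order plant)."""
    force = 0.0
    history = []
    for _ in range(steps):
        error = ref - force          # feedback from the fingertip force sensor
        force += gain * error        # proportional correction of motor effort
        history.append(force)
    return np.array(history)

trace = simulate_grip(ref=2.0)       # user intends a 2 N grasp (assumed units)
```

The measured force converges geometrically to the reference, which is the property that lets users modulate grasp force through graded muscular contraction.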
Collapse
|
30
|
Paek AY, Brantley JA, Sujatha Ravindran A, Nathan K, He Y, Eguren D, Cruz-Garza JG, Nakagome S, Wickramasuriya DS, Chang J, Rashed-Al-Mahfuz M, Amin MR, Bhagat NA, Contreras-Vidal JL. A Roadmap Towards Standards for Neurally Controlled End Effectors. IEEE OPEN JOURNAL OF ENGINEERING IN MEDICINE AND BIOLOGY 2021; 2:84-90. [PMID: 35402986 PMCID: PMC8979628 DOI: 10.1109/ojemb.2021.3059161] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/12/2020] [Revised: 12/24/2020] [Accepted: 02/09/2021] [Indexed: 12/02/2022] Open
Abstract
The control and manipulation of various types of end effectors, such as powered exoskeletons, prostheses, and ‘neural’ cursors, by brain-machine interface (BMI) systems has been the target of many research projects. A seamless “plug and play” interface between any BMI and end effector is desired, wherein similar user intents cause similar end effectors to behave identically. This report is based on the outcomes of an IEEE Standards Association Industry Connections working group on End Effectors for Brain-Machine Interfacing that convened to identify and address gaps in the existing standards for BMI-based solutions with a focus on the end-effector component. A roadmap towards standardization of end effectors for BMI systems is discussed by identifying current device standards that are applicable to end effectors. While current standards address basic electrical and mechanical safety and, to some extent, performance requirements, several gaps exist pertaining to unified terminologies, data communication protocols, patient safety, and risk mitigation.
Collapse
Affiliation(s)
| | - Justin A Brantley
- University of Houston, Houston, TX 77204, USA
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA 19104, USA
| | | | | | | | | | - Jesus G Cruz-Garza
- University of Houston, Houston, TX 77204, USA
- Department of Design and Environmental Analysis, Cornell University, Ithaca, NY 14853, USA
| | | | | | | | - Md Rashed-Al-Mahfuz
- University of Houston, Houston, TX 77204, USA
- Department of Computer Science and Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh
| | | | - Nikunj A Bhagat
- University of Houston, Houston, TX 77204, USA
- Feinstein Institutes for Medical Research, Manhasset, NY 11030, USA
| | | |
Collapse
|
31
|
Gannouni S, Belwafi K, Aboalsamh H, AlSamhan Z, Alebdi B, Almassad Y, Alobaedallah H. EEG-Based BCI System to Detect Fingers Movements. Brain Sci 2020; 10:brainsci10120965. [PMID: 33321915 PMCID: PMC7763179 DOI: 10.3390/brainsci10120965] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2020] [Revised: 11/24/2020] [Accepted: 12/08/2020] [Indexed: 11/16/2022] Open
Abstract
Advances in assistive technologies toward restoring the mobility of paralyzed and/or amputated limbs promise substantial benefits. Herein, we propose a system that adopts brain-computer interface technology to control prosthetic fingers using brain signals. To predict the movements of each finger, complex electroencephalogram (EEG) signal processing algorithms must be applied to remove outliers, extract features, and handle each of the five human fingers separately. The proposed method deals with a multi-class classification problem. Our machine learning strategy for this problem is built on an ensemble of one-class classifiers, each of which is dedicated to predicting the intention to move a specific finger. Regions of the brain that are sensitive to finger movements are identified and located. The average accuracy of the proposed EEG signal processing chain reached 81% across five subjects. Unlike the majority of existing prototypes, which allow only a single finger to be controlled and only one movement to be performed at a time, the proposed system enables multiple fingers to move simultaneously. Although the proposed system classifies five tasks, the obtained accuracy is remarkably high compared with that of a binary classification system. The proposed system contributes to the advancement of a novel prosthetic solution that allows people with severe disabilities to perform daily tasks easily.
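The ensemble-of-one-class-classifiers idea, with one detector per finger so that several fingers can fire at once, can be sketched with distance-to-centroid detectors. The centroids, radii, and 2-D feature space below are made up for illustration; the paper's actual detectors and EEG features are more elaborate.

```python
import numpy as np

# One one-class detector per finger: accept a feature vector if it lies
# within that finger's radius of its (hypothetical) training centroid.
centroids = {
    "thumb": np.array([1.0, 0.0]),
    "index": np.array([0.0, 1.0]),
}
radii = {"thumb": 0.8, "index": 0.8}

def detect_fingers(features):
    """Return every finger whose one-class detector accepts the sample.
    More than one detector may fire, which is what allows simultaneous
    finger movements, unlike a single multi-class decision."""
    return sorted(
        name for name, c in centroids.items()
        if np.linalg.norm(features - c) <= radii[name]
    )

moving = detect_fingers(np.array([0.9, 0.1]))
```

A sample near one centroid triggers only that finger, while a sample between centroids can trigger both.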
Collapse
|
32
|
Chen X, Huang X, Wang Y, Gao X. Combination of Augmented Reality Based Brain- Computer Interface and Computer Vision for High-Level Control of a Robotic Arm. IEEE Trans Neural Syst Rehabil Eng 2020; 28:3140-3147. [PMID: 33196442 DOI: 10.1109/tnsre.2020.3038209] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Recent advances in robotics, neuroscience, and signal processing make it possible to operate a robot through an electroencephalography (EEG)-based brain-computer interface (BCI). Although some successful attempts have been made in recent years, the practicality of the entire system still has much room for improvement. The present study designed and realized a robotic arm control system by combining augmented reality (AR), computer vision, and a steady-state visual evoked potential (SSVEP) BCI. The AR environment was implemented with a Microsoft HoloLens. Flickering stimuli for eliciting SSVEPs were presented on the HoloLens, which allowed users to see both the robotic arm and the user interface of the BCI; thus, users did not need to switch attention between the visual stimulator and the robotic arm. A four-command SSVEP-BCI was built for users to choose the specific object to be operated by the robotic arm. Once an object was selected, computer vision provided the location and color of the object in the workspace. Subsequently, the object was autonomously picked up and placed by the robotic arm. According to online results obtained from twelve participants, the mean classification accuracy of the proposed system was 93.96 ± 5.05%. Moreover, all subjects could use the proposed system to successfully pick and place objects in a specific order. These results demonstrate the potential of combining AR-BCI and computer vision to control robotic arms, which is expected to further promote the practicality of BCI-controlled robots.
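SSVEP commands are commonly decoded with canonical correlation analysis (CCA) against sine/cosine references at each flicker frequency; the abstract does not state the classifier used here, so the sketch below is a generic CCA frequency detector with assumed frequencies, rates, and a synthetic two-channel signal.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, dur = 250, 1.0
t = np.arange(0, dur, 1 / fs)

def references(f):
    """Sin/cos reference set at frequency f and its second harmonic."""
    return np.column_stack([
        np.sin(2 * np.pi * f * t), np.cos(2 * np.pi * f * t),
        np.sin(4 * np.pi * f * t), np.cos(4 * np.pi * f * t),
    ])

def max_canonical_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y,
    computed as the top singular value of Qx^T Qy after centering."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    qx, _ = np.linalg.qr(X)
    qy, _ = np.linalg.qr(Y)
    return float(np.linalg.svd(qx.T @ qy, compute_uv=False)[0])

# Toy two-channel EEG responding to an 8 Hz flicker, plus noise:
eeg = np.column_stack([
    np.sin(2 * np.pi * 8 * t + 0.4) + 0.2 * rng.standard_normal(len(t)),
    0.6 * np.sin(2 * np.pi * 8 * t + 1.1) + 0.2 * rng.standard_normal(len(t)),
])
candidates = [8.0, 10.0, 12.0]
detected = max(candidates, key=lambda f: max_canonical_corr(eeg, references(f)))
```

The command whose reference set correlates most strongly with the multichannel EEG is taken as the user's selection.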
Collapse
|
33
|
Wei P, Zhang J, Wei P, Wang B, Hong J. Different sEMG and EEG Features Analysis for Gait phase Recognition. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2020; 2020:1002-1006. [PMID: 33018154 DOI: 10.1109/embc44109.2020.9175655] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
This research focuses on gait phase recognition using different sEMG and EEG features. Seven healthy volunteers, 23-26 years old, were enrolled in the experiment. Seven gait phases were segmented from the three-dimensional trajectory of the lower limbs during treadmill walking and classified with the Library for Support Vector Machines (LIBSVM). These gait phases were loading response, mid-stance, terminal stance, pre-swing, initial swing, mid-swing, and terminal swing. Different sEMG and EEG features were assessed, and gait phases at three walking speeds were analyzed. Results showed that the slope sign change (SSC) and mean power frequency (MPF) of the sEMG signals and the SSC of the EEG signals achieved higher gait phase recognition accuracy than the other features, with accuracies of 95.58% (1.4 km/h), 97.63% (2.0 km/h), and 98.10% (2.6 km/h), respectively. Furthermore, the accuracy of gait phase recognition at 2.6 km/h was better than at the other walking speeds.
Collapse
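The SSC and MPF features named above are standard time-domain and frequency-domain descriptors computed per analysis window. A minimal sketch of both follows; the threshold value and the particular thresholding variant are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def slope_sign_changes(x, threshold=0.01):
    """Count slope sign changes (SSC) in a 1-D window.
    Counts local extrema whose amplitude step exceeds the threshold
    (one common thresholding variant among several in the literature)."""
    d1 = x[1:-1] - x[:-2]          # step from the previous sample
    d2 = x[1:-1] - x[2:]           # step to the next sample (negated slope)
    extremum = d1 * d2 > 0         # slope sign flips at x[i]
    big_enough = np.maximum(np.abs(d1), np.abs(d2)) >= threshold
    return int(np.sum(extremum & big_enough))

def mean_power_frequency(x, fs):
    """Power-weighted mean frequency (MPF) of the window's spectrum."""
    psd = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    return float(np.sum(freqs * psd) / np.sum(psd))

fs = 1000
t = np.arange(fs) / fs
print(slope_sign_changes(np.array([0.0, 1.0, 0.0, 1.0, 0.0])))       # → 3
print(round(mean_power_frequency(np.sin(2 * np.pi * 50 * t), fs)))   # → 50
```

In a pipeline like the one described, these scalars would be computed per channel per window and concatenated into the feature vector passed to the SVM.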
|
34
|
Liang C, Hsiao T. Walking Strategies and Performance Evaluation for Human-Exoskeleton Systems under Admittance Control. SENSORS (BASEL, SWITZERLAND) 2020; 20:s20154346. [PMID: 32759803 PMCID: PMC7436263 DOI: 10.3390/s20154346] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 06/16/2020] [Revised: 07/28/2020] [Accepted: 07/30/2020] [Indexed: 06/11/2023]
Abstract
Lower-limb exoskeletons as walking assistive devices have been intensively investigated in recent decades. In these studies, intention detection and performance evaluation are important topics. In our previous studies, we proposed a disturbance observer (DOB)-based torque estimation algorithm and an admittance control law to shape the admittance of the human-exoskeleton system (HES) and comply with the user's walking intention. These algorithms were experimentally verified under the condition of no ground reaction force (GRF). In this paper, we devised a sensing and communication module on each foot to measure and compensate for the GRF and integrated it with the exoskeleton control system. Rigorous theoretical analysis was performed, and sufficient conditions for the robust stability of the closed-loop system were derived. We then conducted level-ground assistive walking repeatedly with different test subjects and exhaustive combinations of admittance parameters. In addition, we proposed two tractable and physically insightful performance indices, the normalized energy consumption index (NECI) and the walking distance in a fixed period of time, to quantitatively evaluate performance for different admittance parameters. We also compared the energy consumption of users walking with and without the exoskeleton. The results show that the proposed admittance control law reduces the user's energy consumption during level-ground walking.
Collapse
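The admittance-shaping idea above can be illustrated with a single-joint discretized admittance law, M·q̈ + B·q̇ = τ, that turns an estimated interaction torque (e.g., from a DOB) into a velocity command. This is a generic sketch, not the paper's controller; the parameter values are illustrative:

```python
class JointAdmittance:
    """One-joint admittance law M*q_dd + B*q_d = tau: map an estimated
    interaction torque to a joint velocity command via forward-Euler integration."""
    def __init__(self, M, B, dt):
        self.M, self.B, self.dt = M, B, dt
        self.q_d = 0.0                              # commanded joint velocity (rad/s)

    def step(self, tau):
        q_dd = (tau - self.B * self.q_d) / self.M   # solve the admittance for q_dd
        self.q_d += q_dd * self.dt
        return self.q_d

adm = JointAdmittance(M=0.5, B=2.0, dt=0.001)       # illustrative parameters
for _ in range(5000):                               # constant 1 N*m torque for 5 s
    v = adm.step(1.0)
print(round(v, 3))                                  # → 0.5, the steady state tau/B
```

Lowering the virtual inertia M or damping B makes the joint feel lighter, which is the sense in which the admittance parameters trade off effort against stability in studies like this one.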
|
35
|
Fusion of EEG and EMG signals for classification of unilateral foot movements. Biomed Signal Process Control 2020. [DOI: 10.1016/j.bspc.2020.101990] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
|
36
|
Lennon O, Tonellato M, Del Felice A, Di Marco R, Fingleton C, Korik A, Guanziroli E, Molteni F, Guger C, Ortner R, Coyle D. A Systematic Review Establishing the Current State-of-the-Art, the Limitations, and the DESIRED Checklist in Studies of Direct Neural Interfacing With Robotic Gait Devices in Stroke Rehabilitation. Front Neurosci 2020; 14:578. [PMID: 32714127 PMCID: PMC7344195 DOI: 10.3389/fnins.2020.00578] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2019] [Accepted: 05/12/2020] [Indexed: 01/16/2023] Open
Abstract
Background: Stroke is a disease with a high associated disability burden. Robotic-assisted gait training offers an opportunity for the practice intensity levels associated with good functional walking outcomes in this population. Neural interfacing technology, electroencephalography (EEG) or electromyography (EMG), can offer new strategies for robotic gait re-education after a stroke by promoting more active engagement through movement intent and/or neurophysiological feedback. Objectives: This study identifies the current state of the art and the limitations in direct neural interfacing with robotic gait devices in stroke rehabilitation. Methods: A pre-registered systematic review was conducted using standardized search operators that required the presence of stroke, robotic gait training, and neural biosignals (EMG and/or EEG), and was not limited by study type. Results: From a total of 8,899 papers identified, 13 articles were considered for the final selection. Only five of the 13 studies received a strong or moderate quality rating as a clinical study. Three studies recorded EEG activity during robotic gait, two of which used EEG for BCI purposes. While demonstrating utility for decoding kinematic and EMG-related gait data, no EEG study was identified that closed the loop between robot and human. Twelve of the studies recorded EMG activity during or after robotic walking, primarily as an outcome measure. One study used multisource information fusion from EMG, joint angle, and force to modify robotic commands in real time, with higher error rates observed during active movement. Another novel study used EMG data during robotic gait to derive the optimal, individualized, robot-driven step trajectory. Conclusions: Wide heterogeneity exists in the reporting and the purpose of neural biosignal use during robotic gait training after a stroke. Neural interfacing with robotic gait after a stroke shows promise as a future field of study. However, as a nascent area, it would benefit from more standardized protocols for biosignal collection and processing and for robotic deployment. Appropriate reporting for clinical studies of this nature is also required with respect to the study type and the participants' characteristics.
Collapse
Affiliation(s)
- Olive Lennon
- School of Public Health, Physiotherapy and Sports Science, University College Dublin, Dublin, Ireland
| | - Michele Tonellato
- Department of Neuroscience, Rehabilitation Unit, University of Padova, Padova, Italy
| | - Alessandra Del Felice
- Department of Neuroscience, NEUROMOVE-Rehab Laboratory, University of Padova, Padova, Italy
- Padova Neuroscience Center, University of Padova, Padova, Italy
| | - Roberto Di Marco
- Department of Neuroscience, NEUROMOVE-Rehab Laboratory, University of Padova, Padova, Italy
| | - Caitriona Fingleton
- Department of Physiotherapy, Mater Misericordiae University Hospital, Dublin, Ireland
| | - Attila Korik
- Intelligent Systems Research Centre, School of Computing, Engineering and Intelligent Systems, Ulster University, Derry, United Kingdom
| | | | - Franco Molteni
- Villa Beretta Rehabilitation Center, Valduce Hospital, Costa Masnaga, Italy
| | - Rupert Ortner
- g.tec Medical Engineering GmbH, Schiedlberg, Austria
| | - Damien Coyle
- Intelligent Systems Research Centre, School of Computing, Engineering and Intelligent Systems, Ulster University, Derry, United Kingdom
| |
Collapse
|
37
|
Gupta A, Singh A, Verma V, Mondal AK, Gupta MK. Developments and clinical evaluations of robotic exoskeleton technology for human upper-limb rehabilitation. Adv Robot 2020. [DOI: 10.1080/01691864.2020.1749926] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
Affiliation(s)
- Akash Gupta
- Department of Mechanical Engineering, University of Petroleum and Energy Studies, Dehradun, India
| | - Anshuman Singh
- Department of Systems Engineering, University of Maryland, College Park, MD, USA
| | - Varnita Verma
- Department of Electrical and Electronics Engineering, University of Petroleum and Energy Studies, Dehradun, India
| | - Amit Kumar Mondal
- Department of Mechatronics Engineering, Manipal University, Dubai, UAE
| | - Mukul Kumar Gupta
- Department of Electrical and Electronics Engineering, University of Petroleum and Energy Studies, Dehradun, India
| |
Collapse
|
38
|
Zhang D, Yao L, Chen K, Wang S, Haghighi PD, Sullivan C. A Graph-Based Hierarchical Attention Model for Movement Intention Detection from EEG Signals. IEEE Trans Neural Syst Rehabil Eng 2019; 27:2247-2253. [PMID: 31562095 DOI: 10.1109/tnsre.2019.2943362] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
An EEG-based Brain-Computer Interface (BCI) is a system that enables a user to communicate with and intuitively control external devices solely using the user's intentions. Current EEG-based BCI research usually involves a subject-specific adaptation step before a BCI system is ready to be employed by a new user. However, the subject-independent scenario, in which a well-trained model can be directly applied to new users without pre-calibration, is particularly desirable yet rarely explored. Considering this critical gap, our focus in this paper is the subject-independent scenario of EEG-based human intention recognition. We present a Graph-based Hierarchical Attention Model (G-HAM) that utilizes the graph structure to represent the spatial information of EEG sensors and the hierarchical attention mechanism to focus on both the most discriminative temporal periods and EEG nodes. Extensive experiments on a large EEG dataset containing 105 subjects indicate that our model is capable of exploiting the underlying invariant EEG patterns across different subjects and generalizing the patterns to new subjects with better performance than a series of state-of-the-art and baseline approaches.
Collapse
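The attention mechanism described above scores items (EEG nodes or time steps), normalizes the scores with a softmax, and pools the features with the resulting weights. A minimal, untrained sketch of one such attention-pooling stage (the shapes, random features, and scoring vector are illustrative, not the G-HAM architecture itself):

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention_pool(feats, w):
    """Soft attention pooling: score each item, softmax the scores,
    and return the weighted sum of the item features."""
    scores = feats @ w                 # one scalar score per item
    alpha = softmax(scores, axis=0)    # attention weights, summing to 1
    return alpha @ feats, alpha

rng = np.random.default_rng(0)
eeg_nodes = rng.standard_normal((64, 16))   # 64 electrodes x 16 learned features
w = rng.standard_normal(16)                 # scoring vector (learned in practice)
pooled, alpha = attention_pool(eeg_nodes, w)
print(pooled.shape, round(float(alpha.sum()), 6))   # → (16,) 1.0
```

A hierarchical model applies such a stage twice: once over time steps within each node, then over the pooled node representations.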
|
39
|
Mounir Boudali A, Sinclair PJ, Manchester IR. Predicting Transitioning Walking Gaits: Hip and Knee Joint Trajectories From the Motion of Walking Canes. IEEE Trans Neural Syst Rehabil Eng 2019; 27:1791-1800. [PMID: 31398125 DOI: 10.1109/tnsre.2019.2933896] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
In recent years, wearable exoskeletons and powered prosthetics have been considered key elements in remedying mobility loss. One of the main challenges in this field is predicting the wearer's desired motion. In this paper, we perform a human locomotion analysis and investigate the accuracy of predicting the angular positions of the lower-limb joints from the motion of walking canes. Nine healthy subjects took part in this study and performed a locomotor task that comprised straight walking on flat ground, stair ascent, and an upright resting posture. Recurrent Neural Networks (RNNs) and polynomial fitting using Least Squares were used to model dynamic and static non-linear mappings, respectively, between the motion of a cane and its contralateral leg joints. Both the hip and knee joint angles were successfully predicted using information from the cane only, and the prediction error improved significantly with the addition of data from the arm joints. Overall, Recurrent Neural Networks outperformed Least Squares for predicting both joints' angular positions. When using the cane only, the static maps were able to predict steady behaviour but failed to predict transitions, as opposed to the RNN, which captured both steady behaviour and transitions.
Collapse
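The static least-squares mapping used as a baseline above can be sketched with a polynomial fit; the cubic cane-to-knee relationship and noise level below are made-up stand-ins for illustration, not the study's data:

```python
import numpy as np

# hypothetical synthetic relationship: knee angle as a cubic function of cane angle
rng = np.random.default_rng(42)
cane = np.linspace(-0.4, 0.4, 200)                  # cane sagittal angle (rad)
knee_true = 0.8 * cane**3 - 0.5 * cane + 0.3        # assumed underlying mapping
knee_meas = knee_true + 0.01 * rng.standard_normal(cane.size)  # noisy measurements

# the static map: least-squares polynomial fit from cane angle to knee angle
coeffs = np.polyfit(cane, knee_meas, deg=3)
knee_hat = np.polyval(coeffs, cane)
rmse = float(np.sqrt(np.mean((knee_hat - knee_true) ** 2)))
print(f"RMSE: {rmse:.4f} rad")
```

Because this map has no memory, it can only reproduce the steady-state relationship; capturing gait transitions requires a dynamic model such as the RNN the study favors.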
|