1. Lin C, Yan X, Fu Z, Leng Y, Fu C. Empowering High-Level Spinal Cord Injury Patients in Daily Tasks With a Hybrid Gaze and FEMG-Controlled Assistive Robotic System. IEEE Trans Neural Syst Rehabil Eng 2024;32:2983-2992. [PMID: 39137070] [DOI: 10.1109/tnsre.2024.3443073]
Abstract
Individuals with high-level spinal cord injuries often face significant challenges in performing essential daily tasks due to their motor impairments. Consequently, the development of reliable, hands-free human-computer interfaces (HCIs) for assistive devices is vital for enhancing their quality of life. However, existing methods, including eye-tracking and facial electromyogram (FEMG) control, have demonstrated limitations in stability and efficiency. To address these shortcomings, this paper presents a hybrid control system that integrates gaze and FEMG signals. Deployed as a hybrid HCI, the system has been used to assist individuals with high-level spinal cord injuries in performing activities of daily living (ADLs) such as eating, pouring water, and pick-and-place. The experimental results confirm that the hybrid control method speeds up pick-and-place tasks, achieving an average completion time of 34.3 s, a 28.8% and 21.8% improvement over pure gaze-based and pure FEMG-based control, respectively. With practice, participants achieved up to a 44% efficiency improvement using the hybrid control method. The system offers a precise and reliable intention interface suitable for daily use by individuals with high-level spinal cord injuries, enhancing their quality of life and independence.
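The abstract describes combining continuous gaze pointing with a discrete facial-EMG trigger. Purely as an illustration (not the authors' implementation; the names, thresholds, and target layout below are hypothetical), a minimal selection loop might look like:

```python
# Illustrative hybrid gaze + FEMG selection loop: gaze supplies a continuous
# 2-D pointing estimate, while a facial-EMG activation acts as a discrete
# "click" confirming the currently fixated target.

def select_target(gaze_samples, femg_envelope, targets, radius=50.0, threshold=0.6):
    """Return the first target the user is gazing at when FEMG fires, else None.

    gaze_samples  -- list of (x, y) gaze points, one per time step
    femg_envelope -- list of normalized FEMG activation values in [0, 1]
    targets       -- dict name -> (x, y) screen position (hypothetical layout)
    """
    for (gx, gy), activation in zip(gaze_samples, femg_envelope):
        if activation < threshold:      # no muscle "click" at this step
            continue
        for name, (tx, ty) in targets.items():
            if (gx - tx) ** 2 + (gy - ty) ** 2 <= radius ** 2:
                return name             # gaze dwells on target and FEMG fired
    return None

targets = {"cup": (100.0, 100.0), "spoon": (300.0, 100.0)}
gaze = [(90.0, 95.0), (95.0, 105.0), (295.0, 102.0)]
femg = [0.1, 0.2, 0.9]
chosen = select_target(gaze, femg, targets)
```

The split of roles (continuous channel for pointing, discrete channel for confirmation) is the general pattern the abstract credits for the speed-up over either modality alone.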
2. de Freitas MP, Piai VA, Farias RH, Fernandes AMR, de Moraes Rossetto AG, Leithardt VRQ. Artificial Intelligence of Things Applied to Assistive Technology: A Systematic Literature Review. Sensors 2022;22:8531. [PMID: 36366227] [PMCID: PMC9658699] [DOI: 10.3390/s22218531]
Abstract
According to the World Health Organization, about 15% of the world's population has some form of disability. Assistive Technology contributes directly to overcoming the difficulties people with disabilities encounter in their daily lives, allowing them to receive an education and to join the labor market and society in a worthy manner. Assistive Technology has also made great advances in its integration with Artificial Intelligence of Things (AIoT) devices. AIoT processes and analyzes the large amount of data generated by Internet of Things (IoT) devices and applies Artificial Intelligence models, specifically machine learning, to discover patterns that generate insights and assist decision making. Based on a systematic literature review, this article identifies the machine-learning models used in research on Artificial Intelligence of Things applied to Assistive Technology. The survey also highlights the context of that research, its applications, the IoT devices used, and gaps and opportunities for further development. The results show that 50% of the analyzed studies address visual impairment, and for this reason most of the topics involve computer vision. Portable devices, wearables, and smartphones constitute the majority of the IoT devices. Deep neural networks represent 81% of the machine-learning models applied in the reviewed research.
Collapse
Affiliation(s)
- Vinícius Aquino Piai
- School of Sea, Science and Technology, University of the Itajaí Valley, Itajaí 88302-901, Brazil
- Ricardo Heffel Farias
- School of Sea, Science and Technology, University of the Itajaí Valley, Itajaí 88302-901, Brazil
- Anita M. R. Fernandes
- School of Sea, Science and Technology, University of the Itajaí Valley, Itajaí 88302-901, Brazil
- Valderi Reis Quietinho Leithardt
- COPELABS, Lusófona University of Humanities and Technologies, Campo Grande 376, 1749-024 Lisboa, Portugal
- VALORIZA, Research Center for Endogenous Resources Valorization, Instituto Politécnico de Portalegre, 7300-555 Portalegre, Portugal
3. Tang Z, Yu H, Yang H, Zhang L, Zhang L. Effect of velocity and acceleration in joint angle estimation for an EMG-Based upper-limb exoskeleton control. Comput Biol Med 2021;141:105156. [PMID: 34942392] [DOI: 10.1016/j.compbiomed.2021.105156]
Abstract
Most studies on estimating a user's joint angles to control an upper-limb exoskeleton have focused on surface electromyogram (sEMG) signals. However, variations in limb velocity and acceleration can affect the sEMG data and degrade angle-estimation performance in practical use of the exoskeleton. This paper demonstrates that the variations in elbow angular velocity (EAV) and elbow angular acceleration (EAA) associated with normal use have a large effect on elbow joint angle estimation. To minimize this effect, we propose two methods: (1) collecting sEMG data at multiple EAVs and EAAs as training data and (2) measuring EAV and EAA with a gyroscope. A self-developed upper-limb exoskeleton with pneumatic muscles was used in the online control phase to verify the methods' effectiveness. The elbow angle predicted by the sEMG-angle models trained in the offline estimation phase was converted into a control signal for the pneumatic muscles, actuating the exoskeleton to the same angle. In the offline estimation phase, the average root mean square error (RMSE) between predicted and actual elbow angle was reduced from 22.54° to 10.01° (method one) and to 6.45° (method two); in the online control phase, method two achieved the best control performance (average RMSE = 6.87°). The results show that multi-sensor fusion (sEMG sensors plus a gyroscope) achieves better estimation than sEMG sensors alone, helping to eliminate velocity and acceleration effects in real-time joint angle estimation for upper-limb exoskeleton control.
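The abstract's "method two" augments sEMG with gyroscope measurements. As a hedged sketch (synthetic data and a plain least-squares regressor stand in for the authors' actual sEMG-angle model), the fusion idea and the RMSE metric reported in the paper can be illustrated as:

```python
import numpy as np

# Hedged illustration of sensor fusion for joint angle estimation: adding
# gyroscope-measured angular velocity and acceleration to the sEMG feature
# vector lets a regression model account for motion-dependent sEMG changes.

rng = np.random.default_rng(0)
n = 200
semg = rng.random((n, 4))          # 4 preprocessed sEMG channel features
velocity = rng.random((n, 1))      # elbow angular velocity (gyroscope)
acceleration = rng.random((n, 1))  # elbow angular acceleration (gyroscope)
# Synthetic "true" elbow angle that depends on motion as well as sEMG:
angle = 40 * semg[:, :1] + 15 * velocity - 5 * acceleration + 20

def fit_predict(features, target):
    """Least-squares linear model with a bias term; returns in-sample predictions."""
    X = np.hstack([features, np.ones((len(features), 1))])
    w, *_ = np.linalg.lstsq(X, target, rcond=None)
    return X @ w

def rmse(pred, target):
    """Root mean square error, the metric the paper reports in degrees."""
    return float(np.sqrt(np.mean((pred - target) ** 2)))

err_semg_only = rmse(fit_predict(semg, angle), angle)
err_fused = rmse(fit_predict(np.hstack([semg, velocity, acceleration]), angle), angle)
```

On this synthetic data the fused model fits the motion-dependent terms that the sEMG-only model cannot, mirroring the offline RMSE reduction the abstract reports.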
Affiliation(s)
- Zhichuan Tang
- Industrial Design Institute, Zhejiang University of Technology, Hangzhou, 310023, China; Modern Industrial Design Institute, Zhejiang University, Hangzhou, 310007, China.
- Hongnian Yu
- School of Engineering and the Built Environment, Edinburgh Napier University, EH10 5DT, UK
- Hongchun Yang
- Industrial Design Institute, Zhejiang University of Technology, Hangzhou, 310023, China
- Lekai Zhang
- Industrial Design Institute, Zhejiang University of Technology, Hangzhou, 310023, China
- Lufang Zhang
- Industrial Design Institute, Zhejiang University of Technology, Hangzhou, 310023, China
4. IMU-Based Hand Gesture Interface Implementing a Sequence-Matching Algorithm for the Control of Assistive Technologies. Signals 2021. [DOI: 10.3390/signals2040043]
Abstract
Assistive technologies (ATs) often offer a high dimensionality of possible movements (e.g., an assistive robot with several degrees of freedom, or a computer), but users must control them with low-dimensionality sensors and interfaces (e.g., switches). This paper presents the development of an open-source interface based on a sequence-matching algorithm for the control of ATs. Sequence matching lets the user input several different commands with low-dimensionality sensors by recognizing not only their output but also its sequential pattern through time, similarly to Morse code. Here the algorithm is applied to the recognition of hand gestures, inputted using an inertial measurement unit worn by the user. An SVM-based algorithm, designed to be robust with small training sets (e.g., five examples per class), is developed to recognize gestures in real time. Finally, the interface is applied to controlling a computer's mouse and keyboard. The interface was compared against (and combined with) the head-movement-based AssystMouse software. The hand gesture interface showed encouraging results for this application, and could also be used with other body parts (e.g., head and feet) and control various ATs (e.g., an assistive robotic arm or a prosthesis).
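As a hedged illustration of the Morse-code analogy (the gesture labels and command table below are hypothetical, not the paper's), sequence matching can be sketched as a buffer of recognized gestures matched against ordered command patterns:

```python
# Illustrative sequence matching: each command is an ordered sequence of
# recognized gesture labels, so a few low-dimensionality gestures can encode
# many commands, much as Morse code builds letters from dots and dashes.

COMMANDS = {
    ("flick", "flick"): "left_click",
    ("flick", "roll"): "right_click",
    ("roll", "roll", "flick"): "open_keyboard",
}

def match_sequence(gesture_stream):
    """Consume recognized gestures in order and emit the first matching command."""
    buffer = []
    for gesture in gesture_stream:
        buffer.append(gesture)
        key = tuple(buffer)
        if key in COMMANDS:
            return COMMANDS[key]
        # Prune: if no command starts with the current buffer, restart the
        # sequence from the most recent gesture.
        if not any(cmd[: len(key)] == key for cmd in COMMANDS):
            buffer = [gesture]
    return None
```

A real-time version would also clear the buffer after a timeout, so that pauses separate sequences the way silence separates Morse letters.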
5. Esposito D, Centracchio J, Andreozzi E, Gargiulo GD, Naik GR, Bifulco P. Biosignal-Based Human-Machine Interfaces for Assistance and Rehabilitation: A Survey. Sensors 2021;21:6863. [PMID: 34696076] [PMCID: PMC8540117] [DOI: 10.3390/s21206863]
Abstract
By definition, a Human–Machine Interface (HMI) enables a person to interact with a device. Starting from elementary equipment, the recent development of novel techniques and unobtrusive devices for biosignal monitoring has paved the way for a new class of HMIs that take such biosignals as inputs to control various applications. This survey reviews the large literature of the last two decades on biosignal-based HMIs for assistance and rehabilitation, to outline the state of the art and identify emerging technologies and potential future research trends. PubMed and other databases were searched using specific keywords. The retrieved studies were screened at three levels (title, abstract, full text), and 144 journal papers and 37 conference papers were ultimately included. Four macrocategories were used to classify the biosignals driving HMI control: biopotentials, muscle mechanical motion, body motion, and their combinations (hybrid systems). The HMIs were also classified by target application into six categories: prosthetic control, robotic control, virtual reality control, gesture recognition, communication, and smart environment control. An ever-growing number of publications has been observed over the last years. Most of the studies (about 67%) pertain to the assistive field, while 20% relate to rehabilitation and 13% to both assistance and rehabilitation. Studies on robotic control, prosthetic control, and gesture recognition increased moderately in the last decade, whereas studies on the other targets grew only slightly. Biopotentials are no longer the leading control signals, and the use of muscle mechanical motion signals has risen considerably, especially in prosthetic control. Hybrid technologies are promising, as they could lead to higher performance. However, they also increase HMI complexity, so their usefulness should be carefully evaluated for each specific application.
Affiliation(s)
- Daniele Esposito
- Department of Electrical Engineering and Information Technologies, Polytechnic and Basic Sciences School, University of Naples “Federico II”, 80125 Naples, Italy
- Jessica Centracchio
- Department of Electrical Engineering and Information Technologies, Polytechnic and Basic Sciences School, University of Naples “Federico II”, 80125 Naples, Italy
- Emilio Andreozzi
- Department of Electrical Engineering and Information Technologies, Polytechnic and Basic Sciences School, University of Naples “Federico II”, 80125 Naples, Italy
- Gaetano D. Gargiulo
- School of Engineering, Design and Built Environment, Western Sydney University, Penrith, NSW 2747, Australia
- The MARCS Institute, Western Sydney University, Penrith, NSW 2751, Australia
- Ganesh R. Naik (corresponding author)
- School of Engineering, Design and Built Environment, Western Sydney University, Penrith, NSW 2747, Australia
- The Adelaide Institute for Sleep Health, Flinders University, Bedford Park, SA 5042, Australia
- Paolo Bifulco
- Department of Electrical Engineering and Information Technologies, Polytechnic and Basic Sciences School, University of Naples “Federico II”, 80125 Naples, Italy
6. De Santis D. A Framework for Optimizing Co-adaptation in Body-Machine Interfaces. Front Neurorobot 2021;15:662181. [PMID: 33967733] [PMCID: PMC8097093] [DOI: 10.3389/fnbot.2021.662181]
Abstract
The operation of a human-machine interface is increasingly often framed as a two-learners problem, in which both the human and the interface independently adapt their behavior based on shared information to improve joint performance on a specific task. Drawing inspiration from the field of body-machine interfaces, we take a different perspective and propose a framework for studying co-adaptation in scenarios where the evolution of the interface depends on the user's behavior and task goals need not be explicitly defined. Our mathematical description of co-adaptation is built on the assumption that the interface and user agents co-adapt toward maximizing interaction efficiency rather than optimizing task performance. This work describes a mathematical framework for body-machine interfaces in which a naïve user interacts with an adaptive interface. The interface, modeled as a linear map from a high-dimensional space (the user input) to a lower-dimensional feedback, acts as an adaptive “tool” whose goal is to minimize transmission loss following an unsupervised learning procedure; it has no knowledge of the task being performed by the user. The user is modeled as a non-stationary multivariate Gaussian generative process that produces a sequence of actions that is either statistically independent or correlated. Dependent data models the output of an action-selection module concerned with achieving some unknown goal dictated by the task. The framework assumes that, in parallel to this explicit objective, the user is implicitly learning a suitable but not necessarily optimal way to interact with the interface. Implicit learning is modeled as use-dependent learning modulated by a reward-based mechanism acting on the generative distribution. Through simulation, the work quantifies how the system evolves as a function of the learning time scales when a user learns to operate a static versus an adaptive interface. We show that this framework can be directly exploited to simulate a variety of interaction scenarios, to facilitate exploration of the parameters that lead to optimal learning dynamics of the joint system, and to provide empirical support for the superiority of human-machine co-adaptation over user adaptation alone.
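As a toy sketch of the interface model described above (assuming, as one concrete instance of unsupervised transmission-loss minimization, a PCA-style linear map; the paper's actual update rule is not reproduced here):

```python
import numpy as np

# Toy interface model: a linear map from high-dimensional body signals to a
# low-dimensional output, adapted with no task knowledge by minimizing
# reconstruction (transmission) loss -- here computed in one shot via PCA.

rng = np.random.default_rng(1)
latent = rng.normal(size=(500, 2))                 # 2-D "intended" actions
mixing = rng.normal(size=(2, 8))
body_signals = latent @ mixing + 0.05 * rng.normal(size=(500, 8))  # 8-D input

def fit_interface(X, k=2):
    """Return a k-dimensional linear map minimizing reconstruction loss (PCA)."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return vt[:k]                                  # rows = principal directions

W = fit_interface(body_signals)
centered = body_signals - body_signals.mean(axis=0)
low_dim = centered @ W.T                           # the interface's output
residual = float(np.mean((centered - low_dim @ W) ** 2))  # transmission loss
```

In the paper's setting this adaptation would run incrementally alongside the user's own learning; the one-shot SVD here only shows what the interface's objective converges toward.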
Affiliation(s)
- Dalia De Santis
- Department of Robotics, Brain and Cognitive Sciences, Center for Human Technologies, Istituto Italiano di Tecnologia, Genova, Italy
7. Lebrasseur A, Lettre J, Routhier F, Archambault PS, Campeau-Lecours A. Assistive robotic arm: Evaluation of the performance of intelligent algorithms. Assist Technol 2021;33:95-104. [PMID: 31070524] [DOI: 10.1080/10400435.2019.1601649]
Abstract
People with upper body disabilities may be limited in their activities of daily living. Robotic arms, such as JACO, are assistive devices that could improve their abilities, independent living, and social participation. However, performing complex tasks with JACO can be time-consuming or tedious. Therefore, some advanced functionalities have been developed to enhance the performance of users. The main objective of this study is to evaluate the performance, in terms of ease of use, task completion time, and participants' perception of usability, of three new algorithms applied to the JACO robotic arm: (1) predefined position, (2) fluidity filter, and (3) drinking mode. The secondary objective is to evaluate differences in performance variables between proportional and non-proportional control modes. Fourteen participants with upper body disabilities completed various tasks with and without these functionalities. Using JACO with the algorithms led to a significant decrease of up to 72% in task completion time and improvements of 2.3 and 2.9 on a 7-point Likert scale for perceived ease of use and usability, respectively. There was no significant difference between control modes. Our results demonstrate that algorithms could produce significant improvements in performing daily living activities.
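The abstract names a "fluidity filter" without specifying it; one common smoothing approach, offered here only as a hypothetical illustration (not the authors' algorithm), is an exponential moving average over the user's velocity commands:

```python
# Hypothetical command-smoothing sketch: a low-pass (exponential moving
# average) filter attenuates jerky user input before it reaches a robot arm.

def smooth_commands(raw_commands, alpha=0.3):
    """Exponentially smooth a stream of scalar velocity commands.

    alpha -- weight of the newest sample; smaller values smooth more.
    """
    smoothed, prev = [], 0.0
    for cmd in raw_commands:
        prev = alpha * cmd + (1 - alpha) * prev  # blend new input with history
        smoothed.append(prev)
    return smoothed
```

The trade-off such a filter embodies (smoother motion at the cost of response lag) is exactly the kind of usability question the study's task-completion-time and perceived-ease measures evaluate.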
Affiliation(s)
- Audrey Lebrasseur
- Department of Rehabilitation, Université Laval, Quebec City, QC, Canada; Centre for Interdisciplinary Research in Rehabilitation and Social Integration, Centre intégré universitaire de santé et de services sociaux de la Capitale-Nationale, Institut de réadaptation en déficience physique de Québec, Quebec City, QC, Canada
- Josiane Lettre
- Centre for Interdisciplinary Research in Rehabilitation and Social Integration, Centre intégré universitaire de santé et de services sociaux de la Capitale-Nationale, Institut de réadaptation en déficience physique de Québec, Quebec City, QC, Canada
- François Routhier
- Department of Rehabilitation, Université Laval, Quebec City, QC, Canada; Centre for Interdisciplinary Research in Rehabilitation and Social Integration, Centre intégré universitaire de santé et de services sociaux de la Capitale-Nationale, Institut de réadaptation en déficience physique de Québec, Quebec City, QC, Canada
- Philippe S Archambault
- Interdisciplinary Research Center in Rehabilitation, Centre intégré de santé et de services sociaux de Laval, Laval, Canada; School of Physical and Occupational Therapy, McGill University, Montréal, QC, Canada
- Alexandre Campeau-Lecours
- Centre for Interdisciplinary Research in Rehabilitation and Social Integration, Centre intégré universitaire de santé et de services sociaux de la Capitale-Nationale, Institut de réadaptation en déficience physique de Québec, Quebec City, QC, Canada; Department of Mechanical Engineering, Université Laval, Quebec City, QC, Canada
8. Wu L, Alqasemi R, Dubey R. Development of Smartphone-Based Human-Robot Interfaces for Individuals With Disabilities. IEEE Robot Autom Lett 2020. [DOI: 10.1109/lra.2020.3010453]
9. Li K, Zhang J, Wang L, Zhang M, Li J, Bao S. A review of the key technologies for sEMG-based human-robot interaction systems. Biomed Signal Process Control 2020. [DOI: 10.1016/j.bspc.2020.102074]
10. A Low-Cost, Wireless, 3-D-Printed Custom Armband for sEMG Hand Gesture Recognition. Sensors 2019;19:2811. [PMID: 31238529] [PMCID: PMC6631507] [DOI: 10.3390/s19122811]
Abstract
Wearable technology can elevate humans' ability to perform demanding and complex tasks more efficiently. Armbands capable of surface electromyography (sEMG) are attractive, noninvasive devices from which human intent can be derived by leveraging machine learning. However, currently available sEMG acquisition systems tend to be prohibitively costly for personal use, or sacrifice wearability or signal quality to be more affordable. This work introduces the 3DC Armband, designed by the Biomedical Microsystems Laboratory at Laval University: a wireless, 10-channel, 1000 sps, dry-electrode, low-cost (∼150 USD) myoelectric armband that also includes a 9-axis inertial measurement unit. The proposed system is compared with the Myo Armband by Thalmic Labs, one of the most popular sEMG acquisition systems. The comparison uses a new offline dataset featuring 22 able-bodied participants performing eleven hand/wrist gestures while wearing the two armbands simultaneously. The 3DC Armband systematically and significantly (p < 0.05) outperforms the Myo Armband with three different classifiers employing three different input modalities, when using ten seconds or more of training data per gesture. The dataset, alongside the source code, Altium project, and 3-D models, is readily available for download in a GitHub repository.
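As a hedged sketch of the kind of classification pipeline such armbands are benchmarked with (the dataset's actual classifiers and features are not reproduced; everything below is illustrative): per-channel mean-absolute-value features from a sliding window feed a nearest-centroid gesture classifier.

```python
import numpy as np

# Illustrative windowed-sEMG pipeline: MAV features per channel, then a
# nearest-centroid classifier trained on a few labeled windows per gesture.

def features(window):
    """Mean absolute value per channel for one (samples x channels) window."""
    return np.mean(np.abs(window), axis=0)

def fit_centroids(windows, labels):
    """Average the feature vectors of each gesture's training windows."""
    feats = np.array([features(w) for w in windows])
    labels = np.array(labels)
    return {g: feats[labels == g].mean(axis=0) for g in set(labels)}

def classify(window, centroids):
    """Assign the window to the gesture with the nearest feature centroid."""
    f = features(window)
    return min(centroids, key=lambda g: float(np.linalg.norm(f - centroids[g])))
```

Real benchmarks on this kind of data would use stronger classifiers and richer features; the point of the sketch is only the window-to-feature-to-label structure.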
11. Campeau-Lecours A, Cote-Allard U, Vu DS, Routhier F, Gosselin B, Gosselin C. Intuitive Adaptive Orientation Control for Enhanced Human–Robot Interaction. IEEE Trans Robot 2019. [DOI: 10.1109/tro.2018.2885464]
12. Fall CL, Quevillon F, Blouin M, Latour S, Campeau-Lecours A, Gosselin C, Gosselin B. A Multimodal Adaptive Wireless Control Interface for People With Upper-Body Disabilities. IEEE Trans Biomed Circuits Syst 2018;12:564-575. [PMID: 29877820] [DOI: 10.1109/tbcas.2018.2810256]
Abstract
This paper describes a multimodal body-machine interface (BoMI) to help individuals with upper-limb disabilities use advanced assistive technologies, such as robotic arms. The proposed system uses a wearable and wireless body sensor network (WBSN) supporting up to six sensor nodes to measure the user's natural upper-body gestures and translate them into control commands. Natural gestures of the head and upper-body parts, as well as muscular activity, are measured with inertial measurement units (IMUs) and surface electromyography (sEMG) using custom-designed multimodal wireless sensor nodes. An IMU sensing node is attached to a headset worn by the user; it measures 2.9 cm × 2.9 cm, has a maximum power consumption of 31 mW, and provides an angular precision of 1°. Multimodal patch sensor nodes, including both IMU and sEMG sensing modalities, are placed over the user's able body parts to measure motion and muscular activity. These nodes measure 2.5 cm × 4.0 cm and have a maximum power consumption of 11 mW. The proposed BoMI runs on a Raspberry Pi. It can adapt to several types of users through different control scenarios using head and shoulder motion as well as muscular activity, and provides power autonomy of up to 24 h. JACO, a 6-DoF assistive robotic arm, is used as a testbed to evaluate the performance of the proposed BoMI. Ten able-bodied subjects performed ADLs while operating the AT device, using the Test d'Évaluation des Membres Supérieurs de Personnes Âgées to evaluate and compare the proposed BoMI with the conventional joystick controller. The users could perform all tasks with the proposed BoMI almost as fast as with the joystick controller, with only a 30% time overhead on average, while the BoMI is potentially more accessible to upper-body-disabled users who cannot use a conventional joystick. Tests show that control performance with the proposed BoMI improved by up to 17% on average after three trials.