1. Xia H, Zhang Y, Rajabi N, Taleb F, Yang Q, Kragic D, Li Z. Shaping high-performance wearable robots for human motor and sensory reconstruction and enhancement. Nat Commun 2024; 15:1760. [PMID: 38409128 PMCID: PMC10897332 DOI: 10.1038/s41467-024-46249-0]
Abstract
Most wearable robots such as exoskeletons and prostheses can operate with dexterity, while wearers do not perceive them as part of their bodies. In this perspective, we contend that integrating environmental, physiological, and physical information through multi-modal fusion, incorporating human-in-the-loop control, utilizing neuromuscular interfaces, employing flexible electronics, and acquiring and processing human-robot information with biomechatronic chips, should all be leveraged towards building the next generation of wearable robots. These technologies could improve the embodiment of wearable robots. With optimizations in mechanical structure and clinical training, the next generation of wearable robots should better facilitate human motor and sensory reconstruction and enhancement.
Affiliation(s)
- Haisheng Xia: School of Mechanical Engineering, Tongji University, Shanghai 201804, China; Translational Research Center, Shanghai YangZhi Rehabilitation Hospital (Shanghai Sunshine Rehabilitation Center), Tongji University, Shanghai 201619, China; Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei 230026, China
- Yuchong Zhang: Robotics, Perception and Learning Lab, EECS, KTH Royal Institute of Technology, 114 17 Stockholm, Sweden
- Nona Rajabi: Robotics, Perception and Learning Lab, EECS, KTH Royal Institute of Technology, 114 17 Stockholm, Sweden
- Farzaneh Taleb: Robotics, Perception and Learning Lab, EECS, KTH Royal Institute of Technology, 114 17 Stockholm, Sweden
- Qunting Yang: Department of Automation, University of Science and Technology of China, Hefei 230026, China
- Danica Kragic: Robotics, Perception and Learning Lab, EECS, KTH Royal Institute of Technology, 114 17 Stockholm, Sweden
- Zhijun Li: School of Mechanical Engineering, Tongji University, Shanghai 201804, China; Translational Research Center, Shanghai YangZhi Rehabilitation Hospital (Shanghai Sunshine Rehabilitation Center), Tongji University, Shanghai 201619, China; Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei 230026, China
2. Segas E, Mick S, Leconte V, Dubois O, Klotz R, Cattaert D, de Rugy A. Intuitive movement-based prosthesis control enables arm amputees to reach naturally in virtual reality. eLife 2023; 12:RP87317. [PMID: 37847150 PMCID: PMC10581689 DOI: 10.7554/elife.87317]
Abstract
Impressive progress is being made in bionic limbs design and control. Yet, controlling the numerous joints of a prosthetic arm necessary to place the hand at a correct position and orientation to grasp objects remains challenging. Here, we designed an intuitive, movement-based prosthesis control that leverages natural arm coordination to predict distal joints missing in people with transhumeral limb loss based on proximal residual limb motion and knowledge of the movement goal. This control was validated on 29 participants, including seven with above-elbow limb loss, who picked and placed bottles in a wide range of locations in virtual reality, with median success rates over 99% and movement times identical to those of natural movements. This control also enabled 15 participants, including three with limb differences, to reach and grasp real objects with a robotic arm operated according to the same principle. Remarkably, this was achieved without any prior training, indicating that this control is intuitive and instantaneously usable. It could be used for phantom limb pain management in virtual reality, or to augment the reaching capabilities of invasive neural interfaces usually more focused on hand and grasp control.
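The following Python sketch illustrates the general idea of movement-based control under stated assumptions: a generic regressor maps proximal residual-limb motion plus the movement goal to the missing distal joint angles. It is not the authors' model; the feature layout and the synthetic data are placeholders.

```python
# Illustrative sketch (not the authors' code): predict distal prosthetic joint
# angles from proximal residual-limb motion plus the movement goal, in the
# spirit of movement-based control. Data here are random placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical training set: rows = reaching samples.
# Inputs: 3 proximal (shoulder) angles + 3D target position + 3 target orientation angles.
X = rng.uniform(-1.0, 1.0, size=(5000, 9))
# Outputs: 4 distal joint angles (elbow flexion, forearm rotation, wrist flexion/deviation),
# generated by an arbitrary smooth synergy-like mapping for the demo.
W = rng.normal(size=(9, 4))
y = np.tanh(X @ W)

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=300, random_state=0)
model.fit(X, y)

# At run time, proximal motion and the (known) goal drive the distal joints.
proximal_and_goal = rng.uniform(-1.0, 1.0, size=(1, 9))
distal_command = model.predict(proximal_and_goal)
print("predicted distal joint angles:", distal_command.round(3))
```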
Affiliation(s)
- Effie Segas: Univ. Bordeaux, CNRS, INCIA, UMR 5287, Bordeaux, France
- Sébastien Mick: Univ. Bordeaux, CNRS, INCIA, UMR 5287, Bordeaux, France; ISIR UMR 7222, Sorbonne Université, CNRS, Inserm, Paris, France
- Océane Dubois: Univ. Bordeaux, CNRS, INCIA, UMR 5287, Bordeaux, France; ISIR UMR 7222, Sorbonne Université, CNRS, Inserm, Paris, France
3. Yang B, Chen X, Xiao X, Yan P, Hasegawa Y, Huang J. Gaze and Environmental Context-Guided Deep Neural Network and Sequential Decision Fusion for Grasp Intention Recognition. IEEE Trans Neural Syst Rehabil Eng 2023; 31:3687-3698. [PMID: 37703142 DOI: 10.1109/tnsre.2023.3314503]
Abstract
Grasp intention recognition plays a crucial role in controlling assistive robots to aid older people and individuals with limited mobility in restoring arm and hand function. Among the various modalities used for intention recognition, eye-gaze movement has emerged as a promising approach due to its simplicity, intuitiveness, and effectiveness. Existing gaze-based approaches insufficiently integrate gaze data with environmental context and underuse temporal information, leading to inadequate intention recognition performance. The objective of this study is to address these deficiencies and establish a gaze-based framework for object detection and the associated intention recognition. A novel gaze-based grasp intention recognition and sequential decision fusion framework (GIRSDF) is proposed. The GIRSDF comprises three main components: gaze attention map generation, the Gaze-YOLO grasp intention recognition model, and sequential decision fusion models (HMM, LSTM, and GRU). To evaluate the performance of GIRSDF, a dataset named Invisible containing data from healthy individuals and hemiplegic patients is established. GIRSDF is validated by trial-based and subject-based experiments on Invisible and outperforms previous gaze-based grasp intention recognition methods. In terms of running efficiency, the proposed framework can run at a frequency of about 22 Hz, which ensures real-time grasp intention recognition. This study is expected to inspire additional gaze-related grasp intention recognition work.
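As a rough illustration of the sequential-decision-fusion step (not the GIRSDF code), the sketch below smooths per-frame intention probabilities over a sliding window before committing to a grasp intention; the window length and confidence threshold are assumed values.

```python
# Minimal sketch of sequential decision fusion: per-frame intention probabilities
# (e.g., from a gaze-guided detector) are smoothed over time before a decision
# is emitted. This stands in for the HMM/LSTM/GRU fusion described above and is
# not the GIRSDF implementation.
import numpy as np

def fuse_decisions(frame_probs: np.ndarray, window: int = 10, threshold: float = 0.6):
    """frame_probs: (T, n_classes) per-frame class probabilities.
    Returns the fused class index per frame, or -1 when confidence is too low."""
    T, _ = frame_probs.shape
    fused = np.full(T, -1, dtype=int)
    for t in range(T):
        start = max(0, t - window + 1)
        avg = frame_probs[start : t + 1].mean(axis=0)   # temporal averaging
        if avg.max() >= threshold:                       # only commit when confident
            fused[t] = int(avg.argmax())
    return fused

# Toy example: 3 candidate grasp intentions, noisy per-frame scores.
rng = np.random.default_rng(1)
probs = rng.dirichlet(alpha=[2.0, 1.0, 1.0], size=60)
print(fuse_decisions(probs)[-5:])
```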
4. Yang S, Garg NP, Gao R, Yuan M, Noronha B, Ang WT, Accoto D. Learning-Based Motion-Intention Prediction for End-Point Control of Upper-Limb-Assistive Robots. Sensors (Basel) 2023; 23:2998. [PMID: 36991709 PMCID: PMC10056111 DOI: 10.3390/s23062998]
Abstract
The lack of intuitive and active human-robot interaction makes it difficult to use upper-limb-assistive devices. In this paper, we propose a novel learning-based controller that intuitively uses onset motion to predict the desired end-point position for an assistive robot. A multi-modal sensing system comprising inertial measurement units (IMUs), electromyographic (EMG) sensors, and mechanomyography (MMG) sensors was implemented. This system was used to acquire kinematic and physiological signals during reaching and placing tasks performed by five healthy subjects. The onset motion data of each motion trial were extracted as input to traditional regression models and deep learning models for training and testing. The models predict the position of the hand in planar space, which serves as the reference position for low-level position controllers. The results show that using the IMU sensors alone with the proposed prediction model is sufficient for motion intention detection, providing almost the same prediction performance as adding EMG or MMG. Additionally, recurrent neural network (RNN)-based models can predict target positions over a short onset time window for reaching motions and are suitable for predicting targets over a longer horizon for placing tasks. This study's detailed analysis can improve the usability of assistive/rehabilitation robots.
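A minimal PyTorch sketch of the idea, assuming a 12-channel IMU feature stream and a 30-frame onset window (both placeholder values), is shown below; it is not the paper's implementation.

```python
# Hedged sketch (PyTorch), not the paper's code: an LSTM regressor that maps a
# short onset window of IMU features to a planar end-point target, which a
# low-level position controller could then track.
import torch
import torch.nn as nn

class OnsetToEndpoint(nn.Module):
    def __init__(self, n_features=12, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)          # (x, y) end-point in the plane

    def forward(self, x):                          # x: (batch, time, features)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])

# Toy training loop on random data standing in for onset IMU windows.
model = OnsetToEndpoint()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
x = torch.randn(256, 30, 12)                       # 256 trials, 30 onset frames, 12 IMU channels
y = torch.randn(256, 2)                            # target hand positions
for _ in range(20):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
print("final MSE:", float(loss))
```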
Affiliation(s)
- Sibo Yang: School of Mechanical and Aerospace Engineering, Nanyang Technological University, Singapore 639798, Singapore
- Neha P. Garg: Rehabilitation Research Institute of Singapore (RRIS), Nanyang Technological University, Singapore 308232, Singapore
- Ruobin Gao: School of Computer Science and Engineering, Nanyang Technological University, Singapore 639798, Singapore
- Meng Yuan: Rehabilitation Research Institute of Singapore (RRIS), Nanyang Technological University, Singapore 308232, Singapore
- Bernardo Noronha: School of Mechanical and Aerospace Engineering, Nanyang Technological University, Singapore 639798, Singapore
- Wei Tech Ang: School of Mechanical and Aerospace Engineering, Nanyang Technological University, Singapore 639798, Singapore; Rehabilitation Research Institute of Singapore (RRIS), Nanyang Technological University, Singapore 308232, Singapore
- Dino Accoto: Department of Mechanical Engineering, Robotics, Automation and Mechatronics Division, KU Leuven, 3590 Diepenbeek, Belgium
5. Shi P, Fang K, Yu H. Design and control of intelligent bionic artificial hand based on image recognition. Technol Health Care 2023; 31:21-35. [PMID: 35723126 DOI: 10.3233/thc-213320]
Abstract
BACKGROUND: At present, the most common control method for intelligent bionic prosthetic hands is EMG control, but its control accuracy is low. Integrating computer vision into the prosthetic hand is an emerging trend. OBJECTIVE: The purpose of this paper is to design an intelligent prosthetic hand based on image recognition, improving control accuracy and the quality of life of people with disabilities. METHODS: A convolutional neural network is used to recognize the object to be grasped, and the recognition result is used as a trigger signal to control our intelligent prosthetic hand. The structure includes a four-bar linkage mechanism and a side-swing mechanism, which achieve not only flexion and extension of the fingers but also adduction and abduction of the four fingers and lateral swing of the thumb. RESULTS: Through image recognition, the new intelligent bionic hand can perform five kinds of human actions: grasp, side pinch, three-finger pinch, two-finger pinch, and pinch between fingers. CONCLUSIONS: The experimental results show that image-recognition-based control achieves high precision and that the intelligent prosthetic hand can complete the corresponding tasks.
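A toy sketch of the control idea follows, with hypothetical object labels and grasp names standing in for the paper's CNN and hand commands.

```python
# Illustrative sketch only: mapping an image-recognition result to one of the five
# grasp primitives described above. The object labels, grasp names, and the stub
# classifier are assumptions, not the authors' implementation.
from typing import Callable

GRASP_FOR_OBJECT = {
    "bottle": "power grasp",
    "card": "side pinch",
    "pen": "three-finger pinch",
    "needle": "two-finger pinch",
    "paper": "pinch between fingers",
}

def trigger_grasp(recognize: Callable[[bytes], str], image: bytes) -> str:
    """Run the (CNN) recognizer and return the grasp command to send to the hand."""
    label = recognize(image)
    return GRASP_FOR_OBJECT.get(label, "open hand")   # fall back to a safe posture

# Stub recognizer standing in for the convolutional network.
fake_cnn = lambda img: "bottle"
print(trigger_grasp(fake_cnn, b""))                    # -> "power grasp"
```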
6. Qu J, Guo H, Wang W, Dang S. Prediction of Human-Computer Interaction Intention Based on Eye Movement and Electroencephalograph Characteristics. Front Psychol 2022; 13:816127. [PMID: 35496176 PMCID: PMC9039167 DOI: 10.3389/fpsyg.2022.816127]
Abstract
In order to solve the problem of unsmooth and inefficient human-computer interaction in the information age, a method for predicting human-computer interaction intention based on electroencephalograph (EEG) and eye movement signals is proposed. This approach differs from previous methods in which researchers made predictions from human-computer interaction data and a single physiological signal; here, the eye movement and EEG features that clearly characterize interaction intention serve as the prediction basis. In addition, the approach is not only tested with multiple human-computer interaction intentions but also takes into account operators in different cognitive states. The experimental results show that this method has advantages over methods proposed by other researchers. In Experiment 1, using the eye movement features fixation point abscissa (PX), fixation point ordinate (PY), and saccade amplitude (SA) to judge interaction intention, the accuracy reached 92%. In Experiment 2, relying only on pupil size (PS) and fixation duration (FD) from the eye movement signal could not identify the operator's cognitive state with sufficient accuracy, so EEG signals were added: combining the screened EEG parameter Rα/β with pupil diameter and fixation duration identified the cognitive state with an accuracy of 91.67%. These experiments show that the combination of eye movement and EEG signal features can be used to predict the operator's interaction intention and cognitive state.
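A hedged sketch of this kind of feature-based classification, using random stand-in data for the eye-movement and EEG features named above (so the printed accuracies are not meaningful), might look as follows.

```python
# Hedged sketch, not the study's code: classify interaction intention from a small
# feature vector of eye-movement features (PX, PY, SA) and, for cognitive state,
# eye features plus an EEG band-power ratio. Feature values are random stand-ins.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(2)

# Intention classification: fixation x/y and saccade amplitude.
X_intent = rng.normal(size=(300, 3))                 # [PX, PY, SA]
y_intent = rng.integers(0, 3, size=300)              # three hypothetical intentions
print("intention CV accuracy:",
      cross_val_score(SVC(kernel="rbf"), X_intent, y_intent, cv=5).mean())

# Cognitive-state classification: pupil size, fixation duration, EEG alpha/beta ratio.
X_state = rng.normal(size=(300, 3))                  # [PS, FD, R_alpha/beta]
y_state = rng.integers(0, 2, size=300)               # two hypothetical cognitive states
print("cognitive-state CV accuracy:",
      cross_val_score(SVC(kernel="rbf"), X_state, y_state, cv=5).mean())
```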
Affiliation(s)
- Jue Qu: School of Aeronautics, Northwestern Polytechnical University, Xi'an, China; Air and Missile Defense College, Air Force Engineering University, Xi'an, China
- Hao Guo: Air and Missile Defense College, Air Force Engineering University, Xi'an, China
- Wei Wang: Air and Missile Defense College, Air Force Engineering University, Xi'an, China
- Sina Dang: Air and Missile Defense College, Air Force Engineering University, Xi'an, China
7. Bao T, Xie SQ, Yang P, Zhou P, Zhang ZQ. Towards Robust, Adaptive and Reliable Upper-Limb Motion Estimation Using Machine Learning and Deep Learning - A Survey in Myoelectric Control. IEEE J Biomed Health Inform 2022; 26:3822-3835. [PMID: 35294368 DOI: 10.1109/jbhi.2022.3159792]
Abstract
To develop multi-functional human-machine interfaces that can help disabled people reconstruct lost functions of upper-limbs, machine learning (ML) and deep learning (DL) techniques have been widely implemented to decode human movement intentions from surface electromyography (sEMG) signals. However, due to the high complexity of upper-limb movements and the inherent non-stable characteristics of sEMG, the usability of ML/DL based control schemes is still greatly limited in practical scenarios. To this end, tremendous efforts have been made to improve model robustness, adaptation, and reliability. In this article, we provide a systematic review on recent achievements, mainly from three categories: multi-modal sensing fusion to gain additional information of the user, transfer learning (TL) methods to eliminate domain shift impacts on estimation models, and post-processing approaches to obtain more reliable outcomes. Special attention is given to fusion strategies, deep TL frameworks, and confidence estimation. Research challenges and emerging opportunities, with respect to hardware development, public resources, and decoding strategies, are also analysed to provide perspectives for future developments.
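As one concrete example of the post-processing category discussed in the survey, the sketch below applies confidence-based rejection to a stream of decoder outputs; the threshold and interface are assumptions, not taken from any specific surveyed work.

```python
# Minimal sketch of one post-processing idea surveyed above: reject myoelectric
# decoder outputs whose confidence is low, keeping the previous command instead.
# Assumed interface: the decoder returns a probability vector per sEMG window.
import numpy as np

def reject_low_confidence(prob_stream, threshold=0.7):
    """prob_stream: iterable of (n_classes,) probability vectors over time."""
    last_command = 0                                  # e.g., "rest" as the safe default
    commands = []
    for p in prob_stream:
        p = np.asarray(p)
        if p.max() >= threshold:
            last_command = int(p.argmax())            # confident -> update command
        commands.append(last_command)                 # otherwise hold previous command
    return commands

rng = np.random.default_rng(3)
stream = rng.dirichlet([1.5, 1.0, 1.0], size=20)
print(reject_low_confidence(stream))
```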
8. Karrenbach M, Boe D, Sie A, Bennett R, Rombokas E. Improving automatic control of upper-limb prosthesis wrists using gaze-centered eye tracking and deep learning. IEEE Trans Neural Syst Rehabil Eng 2022; 30:340-349. [PMID: 35100118 DOI: 10.1109/tnsre.2022.3147772]
Abstract
Many upper-limb prostheses lack proper wrist rotation functionality, which leads users to adopt poor compensatory strategies that can result in overuse or abandonment. In this study, we investigate the validity of creating and implementing a data-driven predictive control strategy in object-grasping tasks performed in virtual reality. We propose using gaze-centered vision to predict the wrist rotations of a user and conduct a user study to investigate the impact of this predictive control. We demonstrate that the vision-based predictive system decreases compensatory movement in the shoulder as well as task completion time. We discuss the cases in which the virtual prosthesis with the predictive model did and did not yield a physical improvement across various arm movements, and the cognitive value of implementing such predictive strategies in prosthetic controllers. We find that gaze-centered vision provides information about the user's intent during object reaching and that the performance of prosthetic hands improves greatly when wrist prediction is implemented. Lastly, we address the limitations of this study, both for the study itself and for future physical implementations.
9. Lotti N, Xiloyannis M, Missiroli F, Bokranz C, Chiaradia D, Frisoli A, Riener R, Masia L. Myoelectric or Force Control? A Comparative Study on a Soft Arm Exosuit. IEEE Trans Robot 2022. [DOI: 10.1109/tro.2021.3137748]
10. Park H, Kim S, Nussbaum MA, Srinivasan D. Effects of using a whole-body powered exoskeleton during simulated occupational load-handling tasks: A pilot study. Appl Ergon 2022; 98:103589. [PMID: 34563748 DOI: 10.1016/j.apergo.2021.103589]
Abstract
Whole-body powered exoskeletons (WB-PEXOs) can be effective in reducing the physical demands of heavy occupational work, yet almost no empirical evidence exists on the effects of WB-PEXO use. This study assessed the effects of WB-PEXO use on back and leg muscle activities during lab-based simulations of load handling tasks. Six participants (4M, 2F) completed two such tasks (load carriage and stationary load transfer), both with and without a WB-PEXO, and with a range of load masses in each task. WB-PEXO use reduced median levels of muscle activity in the back (∼42-53% in thoracic and ∼24-43% in lumbar regions) and legs (∼41-63% in knee flexors and extensors), and mainly when handling loads beyond low-moderate levels (10-15 kg). Overall, using the WB-PEXO also reduced inter-individual variance (smaller SD) in muscle activities. Future work should examine diverse users, focus on finding effective matches between WB-PEXO use and specific tasks, and identify applications in varied work environments.
Affiliation(s)
- Hanjun Park: Department of Industrial and Systems Engineering, Virginia Tech, Blacksburg, VA, USA
- Sunwook Kim: Department of Industrial and Systems Engineering, Virginia Tech, Blacksburg, VA, USA
- Maury A Nussbaum: Department of Industrial and Systems Engineering, Virginia Tech, Blacksburg, VA, USA
- Divya Srinivasan: Department of Industrial Engineering, Clemson University, Clemson, SC, USA
11. Mouchoux J, Bravo-Cabrera MA, Dosen S, Schilling AF, Markovic M. Impact of Shared Control Modalities on Performance and Usability of Semi-autonomous Prostheses. Front Neurorobot 2021; 15:768619. [PMID: 34975446 PMCID: PMC8718752 DOI: 10.3389/fnbot.2021.768619]
Abstract
Semi-autonomous (SA) control of upper-limb prostheses can improve the performance and decrease the cognitive burden of a user. In this approach, a prosthesis is equipped with additional sensors (e.g., computer vision) that provide contextual information and enable the system to accomplish some tasks automatically. Autonomous control is fused with a volitional input of a user to compute the commands that are sent to the prosthesis. Although several promising prototypes demonstrating the potential of this approach have been presented, methods to integrate the two control streams (i.e., autonomous and volitional) have not been systematically investigated. In the present study, we implemented three shared control modalities (i.e., sequential, simultaneous, and continuous) and compared their performance, as well as the cognitive and physical burdens imposed on the user. In the sequential approach, the volitional input disabled the autonomous control. In the simultaneous approach, the volitional input to a specific degree of freedom (DoF) activated autonomous control of other DoFs, whereas in the continuous approach, autonomous control was always active except for the DoFs controlled by the user. The experiment was conducted in ten able-bodied subjects, and these subjects used an SA prosthesis to perform reach-and-grasp tasks while reacting to audio cues (dual tasking). The results demonstrated that, compared to the manual baseline (volitional control only), all three SA modalities accomplished the task in a shorter time and resulted in less volitional control input. The simultaneous SA modality performed worse than the sequential and continuous SA approaches. When systematic errors were introduced in the autonomous controller to generate a mismatch between the goals of the user and controller, the performance of SA modalities substantially decreased, even below the manual baseline. The sequential SA scheme was the least impacted one in terms of errors. The present study demonstrates that a specific approach for integrating volitional and autonomous control is indeed an important factor that significantly affects the performance and physical and cognitive load, and therefore these should be considered when designing SA prostheses.
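A simplified sketch of the three fusion modalities, reduced to per-DoF arbitration between volitional and autonomous commands (an interpretation of the descriptions above, not the authors' controller), is shown below.

```python
# Hedged sketch of the three shared-control modalities described above, reduced to
# per-DoF command arbitration between volitional and autonomous inputs. Thresholds
# and interfaces are assumptions.
import numpy as np

def fuse(volitional, autonomous, mode, active_eps=1e-3):
    """volitional, autonomous: per-DoF command arrays; returns the fused command."""
    v, a = np.asarray(volitional, float), np.asarray(autonomous, float)
    user_active = np.abs(v) > active_eps
    if mode == "sequential":      # any volitional input disables autonomous control
        return v if user_active.any() else a
    if mode == "simultaneous":    # volitional input to one DoF activates autonomy on the others
        return np.where(user_active, v, a if user_active.any() else 0.0)
    if mode == "continuous":      # autonomy always active except on user-driven DoFs
        return np.where(user_active, v, a)
    raise ValueError(mode)

vol = [0.0, 0.4, 0.0]             # user commands only the second DoF
auto = [0.2, -0.1, 0.5]           # autonomous controller proposes all DoFs
for m in ("sequential", "simultaneous", "continuous"):
    print(m, fuse(vol, auto, m))
```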
Affiliation(s)
- Jérémy Mouchoux: Applied Rehabilitation Technology Lab, Department of Trauma Surgery, Orthopedics and Plastic Surgery, University Medical Center Göttingen, Georg-August University, Göttingen, Germany
- Miguel A. Bravo-Cabrera: Applied Rehabilitation Technology Lab, Department of Trauma Surgery, Orthopedics and Plastic Surgery, University Medical Center Göttingen, Georg-August University, Göttingen, Germany
- Strahinja Dosen: Faculty of Medicine, Department of Health Science and Technology, Center for Sensory-Motor Interaction, Aalborg University, Aalborg, Denmark
- Arndt F. Schilling: Applied Rehabilitation Technology Lab, Department of Trauma Surgery, Orthopedics and Plastic Surgery, University Medical Center Göttingen, Georg-August University, Göttingen, Germany
- Marko Markovic: Applied Rehabilitation Technology Lab, Department of Trauma Surgery, Orthopedics and Plastic Surgery, University Medical Center Göttingen, Georg-August University, Göttingen, Germany
12. Crocher V, Singh R, Newn J, Oetomo D. Towards a Gaze-Informed Movement Intention Model for Robot-Assisted Upper-Limb Rehabilitation. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:6155-6158. [PMID: 34892521 DOI: 10.1109/embc46164.2021.9629610]
Abstract
Gaze-based intention detection has been explored for robot-assisted neuro-rehabilitation in recent years. As eye movements often precede hand movements, robotic devices can use gaze information to augment the detection of movement intention in upper-limb rehabilitation. However, due to the practical drawbacks of head-mounted eye trackers and the limited generalisability of the algorithms, gaze-informed approaches have not yet been used in clinical practice. This paper introduces a preliminary model for gaze-informed movement intention that separates the spatial component of the intention, obtained from gaze, from the temporal component, obtained from movement. We leverage the latter to isolate the relevant gaze information occurring just before movement initiation. We evaluated our approach with six healthy individuals using an experimental setup that employed a screen-mounted eye tracker. The results showed a prediction accuracy of 60% and 73% for an arbitrary target choice and an imposed target choice, respectively. From these findings, we expect that the model could 1) generalise better to individuals with movement impairment (by not considering movement direction), 2) allow generalisation to more complex, multi-stage actions including several sub-movements, and 3) facilitate more natural human-robot interaction and empower patients with the agency to decide movement onset. Overall, the paper demonstrates the potential of a gaze-movement model and of screen-based eye trackers for robot-assisted upper-limb rehabilitation.
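A minimal sketch of the gaze/movement separation described above, assuming a simple speed-threshold onset detector and a fixed pre-onset gaze window (both assumptions), follows.

```python
# Hedged sketch of the general idea (not the paper's model): take gaze samples in a
# short window just before movement onset and pick the candidate target closest to
# the median gaze position. Onset detection and window length are assumptions.
import numpy as np

def predict_target(gaze_xy, hand_speed, targets, pre_window=30, speed_thresh=0.05):
    """gaze_xy: (T, 2) gaze points; hand_speed: (T,) speeds; targets: (K, 2) candidates."""
    onset_candidates = np.nonzero(hand_speed > speed_thresh)[0]
    if onset_candidates.size == 0:
        return None                                    # no movement detected yet
    onset = onset_candidates[0]
    window = gaze_xy[max(0, onset - pre_window) : onset]
    if window.size == 0:
        return None
    gaze_estimate = np.median(window, axis=0)          # spatial component from gaze
    dists = np.linalg.norm(targets - gaze_estimate, axis=1)
    return int(dists.argmin())                         # index of the intended target

rng = np.random.default_rng(4)
targets = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
gaze = targets[1] + 0.05 * rng.normal(size=(100, 2))   # gaze hovers near target 1
speed = np.concatenate([np.zeros(60), np.linspace(0, 0.3, 40)])
print("predicted target index:", predict_target(gaze, speed, targets))
```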
13. Zhu B, Zhang D, Chu Y, Zhao X, Zhang L, Zhao L. Face-Computer Interface (FCI): Intent Recognition Based on Facial Electromyography (fEMG) and Online Human-Computer Interface With Audiovisual Feedback. Front Neurorobot 2021; 15:692562. [PMID: 34335220 PMCID: PMC8322851 DOI: 10.3389/fnbot.2021.692562]
Abstract
Patients who have lost limb control ability, such as those with upper-limb amputation or high paraplegia, are usually unable to take care of themselves. Establishing a natural, stable, and comfortable human-computer interface (HCI) for controlling rehabilitation assistance robots and other controllable equipment would solve many of their difficulties. In this study, a complete limbs-free face-computer interface (FCI) framework based on facial electromyography (fEMG), comprising offline analysis and online control of mechanical equipment, was proposed. Six facial movements related to the eyebrows, eyes, and mouth were used in this FCI. In the offline stage, 12 models, eight types of features, and three different feature combination methods for model input were studied and compared in detail. In the online stage, four well-designed sessions were introduced to control a robotic arm to complete a water-drinking task in three ways (by touch screen, and by fEMG with and without audio feedback) for verification and performance comparison of the proposed FCI framework. Three features and one model with an average offline recognition accuracy of 95.3%, a maximum of 98.8%, and a minimum of 91.4% were selected for use in the online scenarios. The condition with audio feedback performed better than that without audio feedback. All subjects completed the drinking task within a few minutes with the FCI. The average and smallest time differences between the touch screen and fEMG under audio feedback were only 1.24 and 0.37 min, respectively.
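For illustration only, a conventional fEMG pipeline (time-domain features plus a linear classifier) on synthetic data; the feature set, window length, and model are common choices, not necessarily those selected in the study.

```python
# Illustrative sketch only: time-domain fEMG features per channel feeding a classic
# classifier for six facial movements. Data are synthetic stand-ins.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def features(window):
    """window: (samples, channels) raw fEMG; returns MAV, RMS, and waveform length per channel."""
    mav = np.mean(np.abs(window), axis=0)
    rms = np.sqrt(np.mean(window ** 2, axis=0))
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)
    return np.concatenate([mav, rms, wl])

rng = np.random.default_rng(5)
n_trials, n_channels = 600, 4
X = np.array([features(rng.normal(scale=1 + c % 6, size=(200, n_channels)))
              for c in range(n_trials)])              # synthetic windows, 6 pseudo-classes
y = np.arange(n_trials) % 6                           # six facial movements
clf = LinearDiscriminantAnalysis().fit(X[:500], y[:500])
print("held-out accuracy:", clf.score(X[500:], y[500:]))
```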
Affiliation(s)
- Bo Zhu: State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China; Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang, China; University of Chinese Academy of Sciences, Beijing, China
- Daohui Zhang: State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China; Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang, China
- Yaqi Chu: State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China; Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang, China; University of Chinese Academy of Sciences, Beijing, China
- Xingang Zhao: State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China; Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang, China
- Lixin Zhang: Rehabilitation Center, Shengjing Hospital of China Medical University, Shenyang, China
- Lina Zhao: Rehabilitation Center, Shengjing Hospital of China Medical University, Shenyang, China
14. Koochaki F, Najafizadeh L. A Data-Driven Framework for Intention Prediction via Eye Movement With Applications to Assistive Systems. IEEE Trans Neural Syst Rehabil Eng 2021; 29:974-984. [PMID: 34038364 DOI: 10.1109/tnsre.2021.3083815]
Abstract
Fast and accurate human intention prediction can significantly advance the performance of assistive devices for patients with limited motor or communication abilities. Among available modalities, eye movement can be valuable for inferring the user's intention, as it can be tracked non-invasively. However, existing limited studies in this domain do not provide the level of accuracy required for the reliable operation of assistive systems. By taking a data-driven approach, this paper presents a new framework that utilizes the spatial and temporal patterns of eye movement along with deep learning to predict the user's intention. In the proposed framework, the spatial patterns of gaze are identified by clustering the gaze points based on their density over displayed images in order to find the regions of interest (ROIs). The temporal patterns of gaze are identified via hidden Markov models (HMMs) to find the transition sequence between ROIs. Transfer learning is utilized to identify the objects of interest in the displayed images. Finally, models are developed to predict the user's intention after completing the task as well as at early stages of the task. The proposed framework is evaluated in an experiment involving predicting intended daily-life activities. Results indicate that an average classification accuracy of 97.42% is achieved, which is considerably higher than existing gaze-based intention prediction studies.
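A rough sketch of the spatial and temporal steps (density-based clustering for ROIs, then the ROI visit sequence that a sequence model such as an HMM would consume), using synthetic gaze data and assumed parameters, follows.

```python
# Hedged sketch of the spatial/temporal pipeline described above, not the authors'
# code: cluster gaze points into regions of interest (ROIs) with DBSCAN, then build
# the ROI transition sequence for a downstream sequence model.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(6)
# Synthetic gaze: the user alternates between two on-screen objects.
obj_a, obj_b = np.array([100.0, 120.0]), np.array([400.0, 300.0])
gaze = np.vstack([obj_a + 10 * rng.normal(size=(50, 2)),
                  obj_b + 10 * rng.normal(size=(50, 2)),
                  obj_a + 10 * rng.normal(size=(30, 2))])

roi_labels = DBSCAN(eps=30, min_samples=5).fit_predict(gaze)   # -1 marks noise points

# Temporal pattern: the sequence of ROI visits (collapsing repeats, dropping noise).
visits = [int(l) for l in roi_labels if l != -1]
sequence = [visits[0]] + [b for a, b in zip(visits, visits[1:]) if b != a]
print("ROI visit sequence:", sequence)                          # e.g., [0, 1, 0]
```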