26. Yao K, Billard A. An inverse optimization approach to understand human acquisition of kinematic coordination in bimanual fine manipulation tasks. Biological Cybernetics 2020; 114:63-82. [PMID: 31907609; PMCID: PMC7062861; DOI: 10.1007/s00422-019-00814-9]
Abstract
Tasks that require the cooperation of both hands and arms are common in everyday human life. Coordination helps to synchronize the motion of the upper limbs in space and time. In fine bimanual tasks, coordination also enables degrees of precision higher than could be achieved with a single hand. We studied the acquisition of bimanual fine manipulation skills in watchmaking tasks, which require the assembly of pieces at millimeter scale and demand years of training. We contrasted the motion kinematics of novice apprentices with those of professionals. Fifteen subjects, ten novices and five experts, participated in the study. We recorded the force applied on the watch face and the kinematics of the fingers and arms. Results indicate that expert subjects strategically place their fingers on the tools to achieve higher dexterity. Compared to novices, experts also tend to align the task-demanded force application with the optimal force-transmission direction of the dominant arm. To understand the cognitive processes underpinning the different coordination patterns of expert and novice subjects, we followed the optimal control theoretical framework and hypothesized that the difference in task performance is caused by changes in the central nervous system's optimal criteria. We formulated kinematic metrics to evaluate the coordination patterns and exploited an inverse optimization approach to infer the optimal criteria. We interpret the human acquisition of novel coordination patterns as an alteration in the composition of the central nervous system's optimal criteria that accompanies the learning process.
27. Barra B, Badi M, Perich MG, Conti S, Mirrazavi Salehian SS, Moreillon F, Bogaard A, Wurth S, Kaeser M, Passeraub P, Milekovic T, Billard A, Micera S, Capogrosso M. A versatile robotic platform for the design of natural, three-dimensional reaching and grasping tasks in monkeys. Journal of Neural Engineering 2019; 17:016004. [PMID: 31597123; DOI: 10.1088/1741-2552/ab4c77]
Abstract
OBJECTIVE Translational studies of motor control and neurological disorders require detailed monitoring of the sensorimotor components of natural limb movements in relevant animal models. However, available experimental tools do not provide a sufficiently rich repertoire of behavioral signals. Here, we developed a robotic platform that enables the monitoring of kinematics, interaction forces, and neurophysiological signals during user-defined upper-limb tasks for monkeys. APPROACH We configured the platform to position instrumented objects in a three-dimensional workspace and to provide an interactive dynamic force field. MAIN RESULTS We show the relevance of our platform for fundamental and translational studies with three example applications. First, we study the kinematics of natural grasp in response to variable interaction forces. We then show simultaneous and independent encoding of kinematics and forces in single-unit intracortical recordings from sensorimotor cortical areas. Lastly, we demonstrate the relevance of our platform for developing clinically relevant brain-computer interfaces in a kinematically unconstrained motor task. SIGNIFICANCE Our versatile control structure does not depend on the specific robotic arm used and allows for the design and implementation of a variety of tasks that can support both fundamental and translational studies of motor control.
28. Yin H, Melo FS, Paiva A, Billard A. An ensemble inverse optimal control approach for robotic task learning and adaptation. Autonomous Robots 2019. [DOI: 10.1007/s10514-018-9757-y]
29. Huber L, Billard A, Slotine JJ. Avoidance of Convex and Concave Obstacles With Convergence Ensured Through Contraction. IEEE Robotics and Automation Letters 2019. [DOI: 10.1109/lra.2019.2893676]
30. Zhakypov Z, Heremans F, Billard A, Paik J. An Origami-Inspired Reconfigurable Suction Gripper for Picking Objects With Variable Shape and Size. IEEE Robotics and Automation Letters 2018. [DOI: 10.1109/lra.2018.2847403]
31. Cohen L, Billard A. Social babbling: The emergence of symbolic gestures and words. Neural Networks 2018; 106:194-204. [PMID: 30081346; DOI: 10.1016/j.neunet.2018.06.016]
Abstract
Language acquisition theories classically distinguish passive language understanding from active language production. However, recent findings show that brain areas such as Broca's region are shared between language understanding and production. Furthermore, these areas are also implicated in understanding and producing goal-oriented actions. These observations challenge the passive view of language development. In this work, we propose a cognitive developmental model of symbol acquisition consistent with an active view of language learning. For that purpose, we introduce the concept of social babbling. In this view, symbols are learned in the same way as goal-oriented actions, in the context of specific caregiver-infant interactions. We show that this model allows a virtual agent to learn both symbolic words and gestures to refer to objects while interacting with a caregiver. We validate our model by reproducing results from studies on the influence of parental responsiveness on infants' language acquisition.
32. Duarte NF, Rakovic M, Tasevski J, Coco MI, Billard A, Santos-Victor J. Action Anticipation: Reading the Intentions of Humans and Robots. IEEE Robotics and Automation Letters 2018. [DOI: 10.1109/lra.2018.2861569]
33. Salehian SSM, Billard A. A Dynamical-System-Based Approach for Controlling Robotic Manipulators During Noncontact/Contact Transitions. IEEE Robotics and Automation Letters 2018. [DOI: 10.1109/lra.2018.2833142]
34. Raffard S, Bortolon C, Cohen L, Khoramshahi M, Salesse RN, Billard A, Capdevielle D. Does this robot have a mind? Schizophrenia patients' mind perception toward humanoid robots. Schizophrenia Research 2018; 197:585-586. [PMID: 29203055; DOI: 10.1016/j.schres.2017.11.034]
35. Shavit Y, Figueroa N, Salehian SSM, Billard A. Learning Augmented Joint-Space Task-Oriented Dynamical Systems: A Linear Parameter Varying and Synergetic Control Approach. IEEE Robotics and Automation Letters 2018. [DOI: 10.1109/lra.2018.2833497]
36. Batzianoulis I, Krausz NE, Simon AM, Hargrove L, Billard A. Decoding the grasping intention from electromyography during reaching motions. Journal of NeuroEngineering and Rehabilitation 2018; 15:57. [PMID: 29940991; PMCID: PMC6020187; DOI: 10.1186/s12984-018-0396-5]
Abstract
BACKGROUND Active upper-limb prostheses are used to restore important hand functionalities, such as grasping. In conventional approaches, a pattern recognition system is trained on a number of static grasping gestures. However, training a classifier in a static position results in lower classification accuracy when performing dynamic motions, such as reach-to-grasp. We propose an electromyography-based learning approach that decodes the grasping intention during the reaching motion, leading to a faster and more natural response of the prosthesis. METHODS AND RESULTS Eight able-bodied subjects and four individuals with transradial amputation gave informed consent and participated in our study. All subjects performed reach-to-grasp motions for five grasp types while the electromyographic (EMG) activity and the extension of the arm were recorded. We separated the reach-to-grasp motion into three phases with respect to the extension of the arm. A multivariate analysis of variance (MANOVA) on the muscular activity revealed significant differences among the motion phases. Additionally, we examined the classification performance on these phases. We compared the performance of three pattern recognition methods: Linear Discriminant Analysis (LDA), Support Vector Machines (SVM) with linear and non-linear kernels, and an Echo State Network (ESN) approach. Our off-line analysis shows that classification performance above 80% is possible before the end of the motion with three grasp types. An on-line evaluation with an upper-limb prosthesis shows that including the reaching motion in the training of the classifier substantially improves classification accuracy and enables the detection of grasp intention early in the reaching motion. CONCLUSIONS This method offers a more natural and intuitive control of prosthetic devices, as it enables controlling grasp closure in synergy with the reaching motion. This work helps decrease the delay between the user's intention and the device's response and improves the coordination of the device with the motion of the arm.
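As a rough sketch of the pattern-recognition step, the LDA branch of the comparison can be reproduced on synthetic EMG-like features. Everything below (feature dimensions, class separation, data) is illustrative, not the study's recordings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for EMG features (e.g. mean absolute value per channel):
# 3 grasp classes, 8 channels, well-separated Gaussian clusters.
n_per_class, n_channels, n_classes = 100, 8, 3
means = rng.normal(0.0, 3.0, size=(n_classes, n_channels))
X = np.vstack([rng.normal(means[c], 1.0, size=(n_per_class, n_channels))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

def lda_fit(X, y):
    """Fit LDA: per-class means and a shared (pooled) covariance."""
    classes = np.unique(y)
    mu = np.array([X[y == c].mean(axis=0) for c in classes])
    # Pooled within-class covariance, lightly regularized for stability.
    Sw = sum(np.cov(X[y == c].T) * (np.sum(y == c) - 1) for c in classes)
    Sw /= (len(X) - len(classes))
    Sw += 1e-6 * np.eye(X.shape[1])
    return mu, np.linalg.inv(Sw)

def lda_predict(X, mu, Sw_inv):
    """Assign each sample to the class with the highest linear discriminant
    score x' Sw^-1 mu_c - 0.5 mu_c' Sw^-1 mu_c (equal priors assumed)."""
    scores = X @ Sw_inv @ mu.T - 0.5 * np.sum(mu @ Sw_inv * mu, axis=1)
    return np.argmax(scores, axis=1)

mu, Sw_inv = lda_fit(X, y)
acc = np.mean(lda_predict(X, mu, Sw_inv) == y)
print(f"training accuracy: {acc:.2f}")
```

In the study's setting, a classifier like this would be retrained per motion phase; the SVM and ESN alternatives differ only in the decision function fitted to the same features.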
37. Mirrazavi Salehian SS, Figueroa N, Billard A. A unified framework for coordinated multi-arm motion planning. International Journal of Robotics Research 2018. [DOI: 10.1177/0278364918765952]
Abstract
Coordination is essential in the design of dynamic control strategies for multi-arm robotic systems. Given the complexity of the task and the dexterity of the system, coordination constraints can emerge from different levels of planning and control. Primarily, one must consider task-space coordination, where the robots must coordinate with each other, with an object, or with a target of interest. Coordination is also necessary in joint space, as the robots should avoid self-collisions at all times. We provide such joint-space coordination by introducing a centralized inverse kinematics (IK) solver under self-collision avoidance constraints, formulated as a quadratic program and solved in real time. The space of free motion is modeled through a sparse non-linear kernel classification method in a data-driven learning approach. Moreover, we provide multi-arm task-space coordination for both synchronous and asynchronous behaviors. We define a synchronous behavior as one in which the robot arms must coordinate with each other and with a moving object such that they reach for it in synchrony. In contrast, an asynchronous behavior allows each robot to perform independent point-to-point reaching motions. To transition smoothly between asynchronous and synchronous behaviors, we introduce the notion of synchronization allocation. We show how this allocation can be controlled through an external variable, such as the location of the object to be manipulated. Both behaviors and their synchronization allocation are encoded in a single dynamical system. We validate our framework on a dual-arm robotic system and demonstrate that the robots can re-synchronize and adapt the motion of each arm while avoiding self-collision within milliseconds. The speed of control is exploited to intercept fast-moving objects whose motion cannot be predicted accurately.
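The QP-based IK step can be sketched in miniature. This is an illustrative toy with a random Jacobian and a single hand-written linear constraint standing in for the paper's learned self-collision boundary, solved in closed form rather than with a real QP solver:

```python
import numpy as np

def ik_step(J, v, a, b, lam=1e-3):
    """One velocity-IK step as a tiny QP:
        min_dq ||J dq - v||^2 + lam ||dq||^2   s.t.  a . dq >= b,
    where a . dq >= b is one linearized collision-avoidance constraint.
    Take the unconstrained ridge solution; if it violates the constraint,
    re-solve with the constraint active (one Lagrange step).
    """
    H = J.T @ J + lam * np.eye(J.shape[1])
    H_inv = np.linalg.inv(H)
    dq = H_inv @ J.T @ v                      # unconstrained minimizer
    if a @ dq < b:                            # constraint is active
        nu = (b - a @ dq) / (a @ H_inv @ a)   # multiplier for a . dq = b
        dq = dq + nu * (H_inv @ a)
    return dq

# Illustrative numbers: 2-D task velocity, 4 joints.
rng = np.random.default_rng(1)
J = rng.normal(size=(2, 4))                   # hypothetical Jacobian
v = np.array([0.2, -0.1])                     # desired end-effector velocity
a = np.array([1.0, 0.0, 0.0, 0.0])            # toy constraint direction
b = 0.05
dq = ik_step(J, v, a, b)
```

A real-time implementation would stack one such inequality per arm pair from the kernel classifier and hand the whole problem to a dedicated QP solver; the closed-form step above only works because there is a single constraint.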
38. Cohen L, Khoramshahi M, Salesse RN, Bortolon C, Słowiński P, Zhai C, Tsaneva-Atanasova K, Di Bernardo M, Capdevielle D, Marin L, Schmidt RC, Bardy BG, Billard A, Raffard S. Influence of facial feedback during a cooperative human-robot task in schizophrenia. Scientific Reports 2017; 7:15023. [PMID: 29101325; PMCID: PMC5670132; DOI: 10.1038/s41598-017-14773-3]
Abstract
Rapid progress in the area of humanoid robots offers tremendous possibilities for investigating and improving social competences in people with social deficits, but remains unexplored in schizophrenia. In this study, we examined the influence of social feedback elicited by a humanoid robot on motor coordination during a human-robot interaction. Twenty-two schizophrenia patients and twenty-two matched healthy controls underwent a collaborative motor synchrony task with the iCub humanoid robot. Results revealed that positive social feedback had a facilitatory effect on motor coordination in the control participants compared to non-social positive feedback. This facilitatory effect was not present in schizophrenia patients, whose social-motor coordination was similarly impaired in the social and non-social feedback conditions. Furthermore, patients' cognitive flexibility impairment and antipsychotic dosing were negatively correlated with their ability to synchronize hand movements with iCub. Overall, our findings reveal that patients have marked difficulty exploiting the facial social cues elicited by a humanoid robot to modulate their motor coordination during human-robot interaction, partly accounted for by cognitive deficits and medication. This study opens new perspectives for understanding social deficits in this mental disorder.
39. de Chambrier G, Billard A. Non-Parametric Bayesian State Space Estimator for Negative Information. Frontiers in Robotics and AI 2017. [DOI: 10.3389/frobt.2017.00040]
40.

41. Rey J, Kronander K, Farshidian F, Buchli J, Billard A. Learning motions from demonstrations and rewards with time-invariant dynamical systems based policies. Autonomous Robots 2017. [DOI: 10.1007/s10514-017-9636-y]
42. Słowiński P, Alderisio F, Zhai C, Shen Y, Tino P, Bortolon C, Capdevielle D, Cohen L, Khoramshahi M, Billard A, Salesse R, Gueugnon M, Marin L, Bardy BG, di Bernardo M, Raffard S, Tsaneva-Atanasova K. Unravelling socio-motor biomarkers in schizophrenia. npj Schizophrenia 2017; 3:8. [PMID: 28560254; PMCID: PMC5441525; DOI: 10.1038/s41537-016-0009-x]
Abstract
We present novel, low-cost and non-invasive potential diagnostic biomarkers of schizophrenia. They are based on the 'mirror game', a coordination task in which two partners are asked to mimic each other's hand movements. In particular, we use the patient's solo movement, recorded in the absence of a partner, and motion recorded during interaction with an artificial agent, a computer avatar or a humanoid robot. To discriminate between patients and controls, we apply statistical learning techniques to nonverbal synchrony and neuromotor features derived from the participants' movement data. The proposed classifier has 93% accuracy and 100% specificity. Our results provide evidence that statistical learning techniques, nonverbal movement coordination and neuromotor characteristics could form the foundation of decision support tools aiding clinicians in cases of diagnostic uncertainty.
43. Erden MS, Billard A. Robotic Assistance by Impedance Compensation for Hand Movements While Manual Welding. IEEE Transactions on Cybernetics 2016; 46:2459-2472. [PMID: 26452294; DOI: 10.1109/tcyb.2015.2478656]
Abstract
In this paper, we present a robotic assistance scheme that provides impedance compensation, with stiffness, damping, and mass parameters, for hand manipulation tasks, and we apply it to manual welding. The impedance compensation does not assume a preprogrammed hand trajectory. Rather, the human's intended hand movement is estimated in real time using a smooth Kalman filter. The movement is restricted by a compensatory virtual impedance in the directions perpendicular to the estimated direction of movement. With airbrush painting experiments, we test three sets of values for the impedance parameters, inspired by impedance measurements taken during manual welding. We apply the best of the tested sets for assistance in manual welding and perform welding experiments with professional and novice welders. We contrast three conditions: 1) welding with the robot's assistance; 2) welding with the robot while the robot is passive; and 3) welding without the robot. We demonstrate the effectiveness of the assistance through quantitative measures of both task performance and perceived user satisfaction. The performance of both novice and professional welders improves significantly with robotic assistance compared to welding with a passive robot. The assessment of user satisfaction shows that all novice and most professional welders appreciate the robotic assistance, as it suppresses tremors in the directions perpendicular to the welding movement.
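The estimate-then-damp idea can be sketched with a plain constant-velocity Kalman filter (not the paper's smooth variant) whose velocity estimate defines the assisted direction; all noise parameters and numbers below are illustrative guesses:

```python
import numpy as np

dt = 0.01
# Constant-velocity model in 2-D: state [x, y, vx, vy].
F = np.block([[np.eye(2), dt * np.eye(2)],
              [np.zeros((2, 2)), np.eye(2)]])
H = np.hstack([np.eye(2), np.zeros((2, 2))])   # position-only measurements
Q = 1e-4 * np.eye(4)                           # process noise (tuning guess)
R = 1e-3 * np.eye(2)                           # measurement noise (guess)

x = np.zeros(4)
P = np.eye(4)
rng = np.random.default_rng(2)
true_v = np.array([0.3, 0.1])                  # true hand velocity [m/s]

for k in range(200):
    z = true_v * (k * dt) + rng.normal(0, 0.03, 2)   # noisy hand position
    x, P = F @ x, F @ P @ F.T + Q                    # predict
    S = H @ P @ H.T + R                              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                   # Kalman gain
    x = x + K @ (z - H @ x)                          # update
    P = (np.eye(4) - K @ H) @ P

v_hat = x[2:]
u = v_hat / np.linalg.norm(v_hat)              # estimated movement direction

def assist_force(v_hand, u, damping=20.0):
    """Damp only the velocity component perpendicular to the estimated
    direction u -- the tremor-suppressing virtual impedance."""
    v_perp = v_hand - (v_hand @ u) * u
    return -damping * v_perp
```

Motion along the estimated direction is left free, while perpendicular components (tremor) are opposed, which matches the scheme's goal of assisting without imposing a preprogrammed trajectory.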
44.

45. Raffard S, Bortolon C, Khoramshahi M, Salesse RN, Burca M, Marin L, Bardy BG, Billard A, Macioce V, Capdevielle D. Humanoid robots versus humans: How is emotional valence of facial expressions recognized by individuals with schizophrenia? An exploratory study. Schizophrenia Research 2016; 176:506-513. [PMID: 27293136; DOI: 10.1016/j.schres.2016.06.001]
Abstract
BACKGROUND The use of humanoid robots in a therapeutic role for individuals with social disorders such as autism is a newly emerging field, but remains unexplored in schizophrenia. As the ability of robots to convey emotion appears to be of fundamental importance for human-robot interactions, we aimed to evaluate how schizophrenia patients recognize positive and negative facial emotions displayed by a humanoid robot. METHODS We included 21 schizophrenia outpatients and 17 healthy participants. In a reaction-time task, they were shown photographs of human faces and of a humanoid robot (iCub) expressing either positive or negative emotions, as well as a non-social stimulus. Patients' symptomatology, mind perception, reaction time, and number of correct answers were evaluated. RESULTS Patients and controls recognized the emotional valence of facial expressions better and faster when expressed by humans than by the robot. Participants were faster when responding to positive compared to negative human faces and, conversely, faster for negative compared to positive robot faces. Importantly, participants performed worse when they perceived iCub as being capable of experiencing things (experience subscale of the mind perception questionnaire). In schizophrenia patients, negative correlations emerged between negative symptoms and accuracy on both the robot's and the humans' negative faces. CONCLUSIONS Individuals do not respond to human facial emotion and to non-anthropomorphic emotional signals in the same way. Humanoid robots have the potential to convey emotions to patients with schizophrenia, but their appearance seems to be of major importance for human-robot interactions.
46. Hang K, Li M, Stork JA, Bekiroglu Y, Pokorny FT, Billard A, Kragic D. Hierarchical Fingertip Space: A Unified Framework for Grasp Planning and In-Hand Grasp Adaptation. IEEE Transactions on Robotics 2016. [DOI: 10.1109/tro.2016.2588879]
47. Khoramshahi M, Shukla A, Raffard S, Bardy BG, Billard A. Role of Gaze Cues in Interpersonal Motor Coordination: Towards Higher Affiliation in Human-Robot Interaction. PLoS One 2016; 11:e0156874. [PMID: 27281341; PMCID: PMC4900607; DOI: 10.1371/journal.pone.0156874]
Abstract
Background The ability to follow one another's gaze plays an important role in our social cognition, especially when we perform tasks together synchronously. We investigate how gaze cues can improve performance in a simple coordination task (the mirror game), in which two players mirror each other's hand motions. In this game, each player is either a leader or a follower. To study the effect of gaze in a systematic manner, the leader's role is played by a robotic avatar. We contrast two conditions, in which the avatar does or does not provide explicit gaze cues that indicate the next location of its hand. Specifically, we investigated (a) whether participants are able to exploit these gaze cues to improve their coordination, (b) how gaze cues affect action prediction and temporal coordination, and (c) whether introducing active gaze behavior makes avatars more realistic and human-like from the user's point of view. Methodology/Principal Findings 43 subjects participated in 8 trials of the mirror game. Each subject performed the game in both conditions (with and without gaze cues). In this within-subject study, the order of the conditions was randomized across participants, and subjective assessment of the avatar's realism was collected through a post-hoc questionnaire. When gaze cues were provided, a quantitative assessment of synchrony between participants and the avatar revealed a significant improvement in subject reaction time (RT). This confirms our hypothesis that gaze cues improve the follower's ability to predict the avatar's actions. An analysis of the frequency patterns of the two players' hand movements reveals that gaze cues also improve the overall temporal coordination between the two players. Finally, analysis of the subjective evaluations from the questionnaires reveals that, in the presence of gaze cues, participants found the avatar not only more human-like and realistic, but also easier to interact with.
Conclusion/Significance This work confirms that people can exploit gaze cues to predict another person's movements and to better coordinate their motions with their partners, even when the partner is a computer-animated avatar. Moreover, this study provides further evidence that implementing biological features, here task-relevant gaze cues, enables a humanoid robotic avatar to appear more human-like and thus increases the user's sense of affiliation.
48. Salehian SSM, Khoramshahi M, Billard A. A Dynamical System Approach for Softly Catching a Flying Object: Theory and Experiment. IEEE Transactions on Robotics 2016. [DOI: 10.1109/tro.2016.2536749]
49. El-Khoury S, Batzianoulis I, Antuvan CW, Contu S, Masia L, Micera S, Billard A. EMG-based learning approach for estimating wrist motion. Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) 2015:6732-6735. [PMID: 26737838; DOI: 10.1109/embc.2015.7319938]
Abstract
This paper proposes an EMG-based learning approach for estimating the displacement of the human wrist along two axes (abduction/adduction and flexion/extension) in real time. The algorithm extracts features from EMG electrodes on the upper arm and forearm and uses Support Vector Regression to estimate the intended displacement of the wrist. Using data recorded with the arm outstretched in various locations in space, we train the algorithm to allow robust prediction even when the subject moves his or her arm across several positions in space. The proposed approach was tested on five healthy subjects and yielded an R² index of 63.6% for generalization across different arm positions and wrist joint angles.
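The features-to-displacement regression can be sketched on synthetic data, using ridge regression as a simple linear stand-in for the paper's Support Vector Regression; the channel count, outputs, and noise level are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in: 8 EMG feature channels, 2 outputs
# (flexion/extension and abduction/adduction displacement).
n, d = 400, 8
X = rng.normal(size=(n, d))
W_true = rng.normal(size=(d, 2))
Y = X @ W_true + rng.normal(0, 0.3, size=(n, 2))   # noisy linear mapping

# Ridge regression: W = (X'X + alpha I)^-1 X'Y
alpha = 1.0
W = np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

def r2_score(Y, Y_pred):
    """Coefficient of determination, averaged over the two wrist axes."""
    ss_res = np.sum((Y - Y_pred) ** 2, axis=0)
    ss_tot = np.sum((Y - Y.mean(axis=0)) ** 2, axis=0)
    return float(np.mean(1.0 - ss_res / ss_tot))

r2 = r2_score(Y, X @ W)
print(f"R^2: {r2:.2f}")
```

SVR replaces the closed-form solve with a kernelized, epsilon-insensitive fit, which is what lets the paper's method generalize across arm positions where a purely linear map would degrade.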
50.