1
Rakovic M, Ferreira Duarte N, Marques J, Billard A, Santos-Victor J. The Gaze Dialogue Model: Nonverbal Communication in HHI and HRI. IEEE Trans Cybern 2024; 54:2026-2039. [PMID: 36446005] [DOI: 10.1109/tcyb.2022.3222077]
Abstract
When humans interact with each other, eye-gaze movements must support motor control as well as communication. On the one hand, we need to fixate the task goal to retrieve the visual information required for safe and precise action execution. On the other hand, gaze movements serve communication, both for reading the intentions of our interaction partners and for signalling our own action intentions. We study this Gaze Dialogue between two participants working on a collaborative task involving two types of actions: 1) individual action and 2) action-in-interaction. We recorded the eye-gaze data of both participants during the interaction sessions in order to build a computational model, the Gaze Dialogue, encoding the interplay of eye movements during dyadic interaction. The model also captures the correlation between the different gaze fixation points and the nature of the action, knowledge that can be used to infer the type of action performed by an individual. We validated the model against the recorded eye-gaze behavior of one subject, taking the eye-gaze behavior of the other subject as input. Finally, we used the model to design a humanoid robot controller that provides interpersonal gaze coordination in human-robot interaction scenarios. During the interaction, the robot is able to: 1) infer the human action from gaze cues; 2) adjust its gaze fixation according to the human's eye-gaze behavior; and 3) signal nonverbal cues that correlate with its own action intentions.
2
Gholami S, Manon A, Yao K, Billard A, Meling TR. An objective skill assessment framework for microsurgical anastomosis based on ALI scores. Acta Neurochir (Wien) 2024; 166:104. [PMID: 38400918] [DOI: 10.1007/s00701-024-05934-1]
Abstract
INTRODUCTION The current assessment and standardization of microsurgical skills are subjective, posing challenges to reliable skill evaluation. We address these limitations by developing a quantitative and objective framework for assessing and enhancing microsurgical anastomosis skills among surgical trainees. We hypothesize that this framework can differentiate the proficiency levels of microsurgeons, in line with subjective assessments based on the ALI score. METHODS We select relevant performance metrics from the literature on laparoscopic skill assessment and human motor control, focusing on time, instrument kinematics, and tactile information. These quantities are measured and estimated by a set of sensors, including cameras, a motion-capture system, and tactile sensors. The recorded data are analyzed offline using our proposed evaluation framework. Our study involves 12 participants of different ages ([Formula: see text] years) and genders (nine males and three females), including six novice and six intermediate subjects, who perform surgical anastomosis procedures on a chicken-leg model. RESULTS We show that the proposed set of objective and quantitative metrics aligns with subjective evaluations, particularly the ALI score method, and can effectively differentiate novices from more proficient microsurgeons. Furthermore, we find statistically significant disparities: microsurgeons with an intermediate level of proficiency surpassed novices in task speed, idle time, and the smoothness and brevity of hand displacements. CONCLUSION The framework enables accurate skill assessment and provides objective feedback for improving microsurgical anastomosis skills among surgical trainees. By overcoming the subjectivity and limitations of current assessment methods, our approach contributes to the advancement of surgical education and the development of aspiring microsurgeons. Moreover, the framework can distinguish and classify the proficiency levels (novice and intermediate) exhibited by microsurgeons.
Affiliation(s)
- Soheil Gholami
- Learning Algorithms and Systems Laboratory (LASA), École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Anaëlle Manon
- Learning Algorithms and Systems Laboratory (LASA), École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Kunpeng Yao
- Learning Algorithms and Systems Laboratory (LASA), École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Aude Billard
- Learning Algorithms and Systems Laboratory (LASA), École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Torstein R Meling
- Department of Neurosurgery, The National Hospital of Denmark, Rigshospitalet, Copenhagen, Denmark
3
Kostavelis I, Nalpantidis L, Detry R, Bruyninckx H, Billard A, Schlette C, Bosch M, Andronikidis K, Lund-Nielsen H, Yosefipor P, Wajid U, Tomar R, Martínez FLL, Fugaroli F, Papargyriou D, Mehandjiev N, Bhullar G, Gonçalves E, Bentzen J, Essenbæk M, Cremona C, Wong M, Sanchez M, Giakoumis D, Tzovaras D. RoBétArmé Project: Human-robot collaborative construction system for shotcrete digitization and automation through advanced perception, cognition, mobility and additive manufacturing skills. Open Res Eur 2024; 4:4. [PMID: 38385118] [PMCID: PMC10879757] [DOI: 10.12688/openreseurope.16601.1]
Abstract
The importance of construction automation has grown worldwide, with the aim of delivering new machinery for the automation of road, tunnel, bridge, building and earth-work construction. This need is mainly driven by (i) the shortage and rising costs of skilled workers, (ii) the tremendously increased need for new infrastructure to serve daily activities and (iii) the immense demand for maintenance of ageing infrastructure. Shotcrete (sprayed concrete) is becoming an increasingly popular technology among contractors and builders, as its application is extremely economical and flexible, and the growth in construction repairs in developed countries demands further automation of concrete placement. Even though shotcrete technology is heavily mechanized, the actual application is still performed manually to a large extent. The RoBétArmé European project targets the Construction 4.0 transformation of shotcrete construction through the adoption of breakthrough technologies such as sensors, augmented-reality systems, high-performance computing, additive manufacturing, advanced materials, autonomous robots and simulation systems, technologies that have already been studied and applied in Industry 4.0. The paper at hand showcases the development of a novel robotic system with advanced perception, cognition and digitization capabilities for the automation of all phases of shotcrete application. In particular, the challenges and barriers in shotcrete automation are presented and the solutions suggested by RoBétArmé are outlined. We introduce a basic conceptual architecture of the system to be developed and we present the four application scenarios in which the system is designed to operate.
Affiliation(s)
- Ioannis Kostavelis
- Information Technologies Institute, Centre for Research and Technology Hellas, 57001 Thessaloniki, Greece
- Lazaros Nalpantidis
- Department of Electrical and Photonics Engineering, Technical University of Denmark, Copenhagen, Denmark
- Renaud Detry
- Department of Mechanical Engineering, Katholieke Universiteit Leuven, Leuven, Flanders, Belgium
- Herman Bruyninckx
- Department of Mechanical Engineering, Katholieke Universiteit Leuven, Leuven, Flanders, Belgium
- Aude Billard
- Ecole Polytechnique Federale de Lausanne, Lausanne, Vaud, Switzerland
- Christian Schlette
- Faculty of Engineering, University of Southern Denmark, Copenhagen, Denmark
- Marc Bosch
- Robotnik Automation S.L., Valencia, Spain
- Rahul Tomar
- DigitalTwin Technology GmbH, Cologne, Germany
- Estefânia Gonçalves
- MORE - Laboratório Colaborativo Montanhas de Investigação Associação, Bragança, Portugal
- Dimitrios Giakoumis
- Information Technologies Institute, Centre for Research and Technology Hellas, 57001 Thessaloniki, Greece
- Dimitrios Tzovaras
- Information Technologies Institute, Centre for Research and Technology Hellas, 57001 Thessaloniki, Greece
4
Iwane F, Billard A, Millán JDR. Inferring individual evaluation criteria for reaching trajectories with obstacle avoidance from EEG signals. Sci Rep 2023; 13:20163. [PMID: 37978205] [PMCID: PMC10656489] [DOI: 10.1038/s41598-023-47136-2]
Abstract
During reaching actions, the human central nervous system (CNS) generates trajectories that optimize effort and time. When there is an obstacle in the path, we make sure that our arm passes the obstacle with a sufficient margin. This comfort margin varies between individuals. When passing a fragile object, risk-averse individuals may adopt a larger margin, following a longer path than risk-prone people do. However, it is not known whether this variation reflects a personalized cost function underlying individual optimal control policies, nor how it is represented in our brain activity. This study investigates whether such individual variations in evaluation criteria during reaching result from differentiated weighting given to energy minimization versus comfort, and monitors the error-related potentials (ErrPs) evoked when subjects observe a robot moving dangerously close to a fragile object. Seventeen healthy participants monitored a robot performing safe, daring and unsafe trajectories around a wine glass. Each participant displayed distinct evaluation criteria regarding the energy efficiency and comfort of robot trajectories, and the ErrP-BCI outputs successfully inferred this individual variation. This study suggests that ErrPs could be used in conjunction with an optimal control approach to identify the personalized cost used by the CNS. It further opens new avenues for the use of brain-evoked potentials to train assistive robotic devices through neuroprosthetic interfaces.
Affiliation(s)
- Fumiaki Iwane
- Learning Algorithms and Systems Laboratory (LASA), École Polytechnique Fédérale de Lausanne (EPFL), 1015, Lausanne, Switzerland
- Chandra Family Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, TX, 78712, USA
- Department of Neurology, The University of Texas at Austin, Austin, TX, 78712, USA
- Aude Billard
- Learning Algorithms and Systems Laboratory (LASA), École Polytechnique Fédérale de Lausanne (EPFL), 1015, Lausanne, Switzerland
- José Del R Millán
- Chandra Family Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, TX, 78712, USA
- Department of Neurology, The University of Texas at Austin, Austin, TX, 78712, USA
- Department of Biomedical Engineering, The University of Texas at Austin, Austin, TX, 78712, USA
- Mulva Clinic for the Neurosciences, The University of Texas at Austin, Austin, TX, 78712, USA
5
Yao K, Billard A. Exploiting Kinematic Redundancy for Robotic Grasping of Multiple Objects. IEEE Trans Robot 2023. [DOI: 10.1109/tro.2023.3253249]
6
Khadivar F, Mendez V, Correia C, Batzianoulis I, Billard A, Micera S. EMG-driven shared human-robot compliant control for in-hand object manipulation in hand prostheses. J Neural Eng 2022; 19. [PMID: 36384035] [DOI: 10.1088/1741-2552/aca35f]
Abstract
Objective. The limited functionality of hand prostheses remains one of the main reasons behind their lack of wide adoption by amputees. Indeed, while commercial prostheses can perform a reasonable number of grasps, they are often inadequate for manipulating an object once in hand. This lack of dexterity drastically restricts the utility of prosthetic hands. We investigate a novel shared control strategy that combines autonomous control of the forces exerted by a robotic hand with electromyographic (EMG) decoding to perform robust in-hand object manipulation. Approach. We conduct a three-day longitudinal study with eight healthy subjects controlling a 16-degree-of-freedom robotic hand to insert objects into boxes of various orientations. EMG decoding from forearm muscles enables subjects to move, proportionally and simultaneously, the fingers of the robotic hand. The desired object rotation is inferred using two EMG electrodes placed on the shoulder that record the activity of muscles responsible for elevation and depression. During the object-interaction phase, the autonomous controller stabilizes and rotates the object to achieve the desired pose. In this study, we compare an incremental and a proportional shoulder-decoding method in combination with two state-machine interfaces offering different levels of assistance. Main results. Results indicate that robotic assistance reduces the number of failures by 41% and, when combined with incremental shoulder EMG decoding, leads to faster task completion (median = 16.9 s) compared to other control conditions. Training to use the assistive device is fast: after one session of practice, all subjects managed to achieve the tasks with 50% fewer failures. Significance. Shared control approaches that give some authority to an autonomous controller on board the prosthesis are an alternative to control schemes relying on EMG decoding alone, and may improve the dexterity and versatility of robotic prosthetic hands for people with trans-radial amputation. By delegating force control to the prosthesis's on-board controller, one speeds up reaction time and improves the precision of force control. Such a shared control mechanism may enable amputees to perform fine insertion tasks solely with their prosthetic hands, restoring some of the functionality of the disabled arm.
Affiliation(s)
- Farshad Khadivar
- LASA Laboratory, École Polytechnique Fédérale de Lausanne, 1015 Lausanne, Switzerland
- Vincent Mendez
- Neuro X Institute, École Polytechnique Fédérale de Lausanne, 1202 Genève, Switzerland
- Carolina Correia
- Former member of LASA Laboratory, École Polytechnique Fédérale de Lausanne, 1015 Lausanne, Switzerland
- Iason Batzianoulis
- Former member of LASA Laboratory, École Polytechnique Fédérale de Lausanne, 1015 Lausanne, Switzerland
- Aude Billard
- LASA Laboratory, École Polytechnique Fédérale de Lausanne, 1015 Lausanne, Switzerland
- Silvestro Micera
- Neuro X Institute, École Polytechnique Fédérale de Lausanne, 1202 Genève, Switzerland; BioRobotics Institute and Department of Excellence in Robotics and AI, 56127 Pisa, Italy
7
Gonon DJ, Paez-Granados D, Billard A. Robots' Motion Planning in Human Crowds by Acceleration Obstacles. IEEE Robot Autom Lett 2022. [DOI: 10.1109/lra.2022.3199818]
Affiliation(s)
- Diego Paez-Granados
- Spinal Cord Injury Artificial Intelligence - SCAI - Lab at SPZ, ETH Zurich, Zürich, Switzerland
- Aude Billard
- School of Engineering, EPFL, Lausanne, Switzerland
8
Paez-Granados D, Billard A. Crash test-based assessment of injury risks for adults and children when colliding with personal mobility devices and service robots. Sci Rep 2022; 12:5285. [PMID: 35347216] [PMCID: PMC8960768] [DOI: 10.1038/s41598-022-09349-9]
Abstract
Autonomous mobility devices such as transport, cleaning, and delivery robots hold massive economic and social benefits. However, their deployment should not endanger bystanders, particularly vulnerable populations such as children and older adults, who are inherently smaller and more fragile. This study compared the risks faced by different pedestrian categories through crash testing in which a service robot hit an adult and a child dummy. Results of collisions at 3.1 m/s (11.1 km/h / 6.9 mph) showed risks of serious head (14%), neck (20%), and chest (50%) injuries in children, and of tibia fracture (33%) in adults. Furthermore, secondary-impact analysis showed that both populations were at risk of severe head injuries from falling to the ground. Our data and simulations point to mitigation strategies that reduce impact injury risks below 5%, either by lowering the differential speed at impact below 1.5 m/s (5.4 km/h / 3.3 mph) or through the use of absorbent materials. The results presented herein may inform the design of controllers, sensing awareness, and assessment methods for the standardization of robots and small vehicles, as well as policymaking and regulations on the speed, design, and usage of these devices in populated areas.
Affiliation(s)
- Diego Paez-Granados
- Swiss Federal Institute of Technology in Lausanne, EPFL, Institutes of Microengineering and Mechanical Engineering, 1015, Lausanne, Switzerland
- Aude Billard
- Swiss Federal Institute of Technology in Lausanne, EPFL, Institutes of Microengineering and Mechanical Engineering, 1015, Lausanne, Switzerland
9
Duarte NF, Billard A, Santos-Victor J. The Role of Object Physical Properties in Human Handover Actions: Applications in Robotics. IEEE Trans Cogn Dev Syst 2022. [DOI: 10.1109/tcds.2022.3222088]
Affiliation(s)
- Nuno F. Duarte
- Vislab, Institute for Systems and Robotics, Instituto Superior Técnico, Universidade de Lisboa, Portugal
- Aude Billard
- LASA, Swiss Federal Institute of Technology, Lausanne, Switzerland
- Jose Santos-Victor
- Vislab, Institute for Systems and Robotics, Instituto Superior Técnico, Universidade de Lisboa, Portugal
10
Koptev M, Figueroa N, Billard A. Neural Joint Space Implicit Signed Distance Functions for Reactive Robot Manipulator Control. IEEE Robot Autom Lett 2022. [DOI: 10.1109/lra.2022.3227860]
Affiliation(s)
- Mikhail Koptev
- École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Aude Billard
- École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
11
Huber L, Slotine JJ, Billard A. Fast Obstacle Avoidance Based on Real-Time Sensing. IEEE Robot Autom Lett 2022. [DOI: 10.1109/lra.2022.3232271]
Affiliation(s)
- Lukas Huber
- LASA Laboratory, Swiss Federal School of Technology in Lausanne - EPFL, Switzerland
- Aude Billard
- LASA Laboratory, Swiss Federal School of Technology in Lausanne - EPFL, Switzerland
12
Huber L, Slotine JJ, Billard A. Avoiding Dense and Dynamic Obstacles in Enclosed Spaces: Application to Moving in Crowds. IEEE Trans Robot 2022. [DOI: 10.1109/tro.2022.3164789]
Affiliation(s)
- Lukas Huber
- LASA Laboratory, Swiss Federal School of Technology in Lausanne - EPFL, CH-1015 Lausanne, Switzerland
- Jean-Jacques Slotine
- Nonlinear Systems Laboratory, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Aude Billard
- LASA Laboratory, Swiss Federal School of Technology in Lausanne - EPFL, CH-1015 Lausanne, Switzerland
13
Carfì A, Patten T, Kuang Y, Hammoud A, Alameh M, Maiettini E, Weinberg AI, Faria D, Mastrogiovanni F, Alenyà G, Natale L, Perdereau V, Vincze M, Billard A. Hand-Object Interaction: From Human Demonstrations to Robot Manipulation. Front Robot AI 2021; 8:714023. [PMID: 34660702] [PMCID: PMC8517111] [DOI: 10.3389/frobt.2021.714023]
Abstract
Human-object interaction is of great relevance for robots operating in human environments. However, state-of-the-art robotic hands are far from replicating human skills. It is, therefore, essential to study how humans use their hands in order to develop similar robotic capabilities. This article presents a deep dive into hand-object interaction and human demonstrations, highlighting the main challenges in this research area and suggesting desirable future developments. To this end, the article presents a general definition of the hand-object interaction problem together with a concise review of each of the main subproblems involved, namely sensing, perception, and learning. Furthermore, the article discusses the interplay between these subproblems and describes how their interaction in learning from demonstration contributes to the success of robot manipulation. In this way, the article provides a broad overview of the interdisciplinary approaches necessary for a robotic system to learn new manipulation skills by observing human behavior in the real world.
Affiliation(s)
- Alessandro Carfì
- Department of Informatics, Bioengineering, Robotics and Systems Engineering, University of Genoa, Genoa, Italy
- Timothy Patten
- Vision for Robotics Laboratory, Institut für Automatisierungs- und Regelungstechnik, Technische Universität Wien, Vienna, Austria
- Yingyi Kuang
- Robotics, Vision and Intelligent Systems, College of Engineering and Physical Sciences, Aston University, Birmingham, United Kingdom
- Ali Hammoud
- Institut des Systèmes Intelligents et de Robotique, Sorbonne Université, Paris, France
- Mohamad Alameh
- Department of Informatics, Bioengineering, Robotics and Systems Engineering, University of Genoa, Genoa, Italy
- Elisa Maiettini
- Humanoid Sensing and Perception, Istituto Italiano di Tecnologia, Genoa, Italy
- Abraham Itzhak Weinberg
- Robotics, Vision and Intelligent Systems, College of Engineering and Physical Sciences, Aston University, Birmingham, United Kingdom
- Diego Faria
- Robotics, Vision and Intelligent Systems, College of Engineering and Physical Sciences, Aston University, Birmingham, United Kingdom
- Fulvio Mastrogiovanni
- Department of Informatics, Bioengineering, Robotics and Systems Engineering, University of Genoa, Genoa, Italy
- Guillem Alenyà
- Institut de Robòtica i Informàtica Industrial, CSIC-UPC, Barcelona, Spain
- Lorenzo Natale
- Humanoid Sensing and Perception, Istituto Italiano di Tecnologia, Genoa, Italy
- Véronique Perdereau
- Institut des Systèmes Intelligents et de Robotique, Sorbonne Université, Paris, France
- Markus Vincze
- Vision for Robotics Laboratory, Institut für Automatisierungs- und Regelungstechnik, Technische Universität Wien, Vienna, Austria
- Aude Billard
- Learning Algorithms and Systems Laboratory, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
14
Abstract
Since 2014, a dedicated standard has governed the safety certification of personal care robots, which operate in close proximity to humans. These robots serve as information providers, object transporters, personal mobility carriers, and security patrollers. In this article, we point out shortcomings of EN ISO 13482:2014, which encompasses guidelines on the safety and design of personal care robots. In particular, we argue that the current standard is not suitable for guaranteeing people's safety when these robots operate in public spaces; specifically, it lacks requirements to protect pedestrians and bystanders. The guideline implicitly assumes that private spaces, such as households and offices, present the same hazards as public spaces. We highlight at least three properties pertaining to robots' use in public spaces: (1) crowds, (2) social norms and proxemics rules, and (3) people's misbehaviours. We discuss how these properties affect robots' safety. This article aims to raise stakeholders' awareness of individuals' safety when robots are deployed in public spaces, whether by addressing the gaps in EN ISO 13482:2014 or by creating a new dedicated standard.
Affiliation(s)
- Pericle Salvini
- LASA Laboratory, School of Engineering, EPFL Station 9, 1015 Lausanne, Switzerland
- Diego Paez-Granados
- LASA Laboratory, School of Engineering, EPFL Station 9, 1015 Lausanne, Switzerland
- Aude Billard
- LASA Laboratory, School of Engineering, EPFL Station 9, 1015 Lausanne, Switzerland
15
Moon A, Hashmi M, Loos HFMVD, Croft EA, Billard A. Design of Hesitation Gestures for Nonverbal Human-Robot Negotiation of Conflicts. J Hum-Robot Interact 2021. [DOI: 10.1145/3418302]
Abstract
When it is uncertain who should get access to a communal resource first, people often negotiate nonverbally to resolve the conflict. What should a robot be programmed to do when such conflicts arise in human-robot interaction? The answer varies with the context of the situation. Learning from how humans use hesitation gestures to negotiate a solution in such conflict situations, we present a human-inspired design of nonverbal hesitation gestures for human-robot negotiation. We extracted characteristic features of the negotiative hesitations humans use, and subsequently designed a trajectory generator (Negotiative Hesitation Generator) that re-creates these features in robot responses to conflicts. Our human-subjects experiment demonstrates the efficacy of the designed robot behaviour against the non-negotiative stopping behaviour of a robot. With these positive results, we provide a validated trajectory generator with which one can explore the dynamics of human-robot nonverbal negotiation over resource conflicts.
Affiliation(s)
- Aude Billard
- Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland
17
Abstract
Many daily tasks involve the collaboration of both hands. Humans dexterously adjust hand poses and modulate the forces exerted by the fingers in response to task demands. Hand pose selection has been intensively studied in unimanual tasks, but little work has investigated bimanual tasks. This work examines hand pose selection in a bimanual high-precision screwing task taken from watchmaking. Twenty right-handed subjects removed a screw from a watch face with a screwdriver under two conditions. Results showed that although subjects used similar hand poses across steps within the same experimental condition, the hand poses differed significantly between the two conditions. In the free-base condition, subjects needed to stabilize the watch face on the table. The role distribution across hands was strongly influenced by hand dominance: the dominant hand manipulated the tool, whereas the nondominant hand controlled the additional degrees of freedom that might impair performance. In contrast, in the fixed-base condition, the watch face was stationary. Subjects used both hands even though a single hand would have been sufficient. Importantly, hand poses decoupled the control of task-demanded force and torque across hands through virtual fingers that grouped multiple fingers into functional units. This preference for a bimanual over a unimanual control strategy could be an effort to reduce variability caused by mechanical couplings and to alleviate intrinsic sensorimotor processing burdens. To afford analysis of this variety of observations, a novel graphical matrix-based representation of the distribution of hand pose combinations was developed. Atypical hand poses that are not documented in extant hand taxonomies are also included. NEW & NOTEWORTHY We study hand pose selection in bimanual fine motor skills. To understand how roles and control variables are distributed across the hands and fingers, we compared two conditions for unscrewing a screw from a watch face. When the watch face needed positioning, role distribution was strongly influenced by hand dominance; when the watch face was stationary, a variety of hand pose combinations emerged. Control of independent task demands is distributed either across hands or across distinct groups of fingers.
Affiliation(s)
- Kunpeng Yao
- 1Learning Algorithms and Systems Laboratory, School of Engineering,
grid.5333.6École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
| | - Dagmar Sternad
- 2Department of Biology, Northeastern University, Boston, Massachusetts,3Department of Electrical and Computer Engineering, Northeastern University, Boston, Massachusetts,4Department of Physics, Northeastern University, Boston, Massachusetts
| | - Aude Billard
- Learning Algorithms and Systems Laboratory, School of Engineering, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
| |
|
18
|
Amanhoud W, Hernandez Sanchez J, Bouri M, Billard A. Contact-initiated shared control strategies for four-arm supernumerary manipulation with foot interfaces. Int J Rob Res 2021. [DOI: 10.1177/02783649211017642] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
In industrial or surgical settings, many tasks require at least two people to be achieved successfully. Robotic assistance could enable a single person to perform such tasks alone, with the help of robots, through direct, shared, or autonomous control. We are interested in four-arm manipulation scenarios, where both feet are used to control two robotic arms via bi-pedal haptic interfaces. The robotic arms complement the tasks of the biological arms, for instance, by supporting and moving an object while working on it (using both hands). To reduce fatigue and cognitive workload, and to ease the execution of the foot manipulation, we propose two types of assistance that can be enabled upon contact with the object (i.e., based on the interaction forces): autonomous contact-force generation and auto-coordination of the robotic arms. The latter relates to controlling both arms with a single foot once the object is grasped. We designed four (shared) control strategies derived from the combinations (absence/presence) of both assistance modalities, and we compared them through a user study (with 12 participants) on a four-arm manipulation task. The results show that force assistance improves human–robot fluency in the four-arm task, as well as ease of use and usefulness; it also reduces fatigue. Finally, delegating the grasping force to the robotic arms is a crucial factor in making the dual-assistance approach the preferred and most successful of the proposed control strategies when both robotic arms are controlled with a single foot.
Affiliation(s)
- Walid Amanhoud
- Learning Algorithms and Systems Laboratory (LASA), Swiss Federal School of Technology in Lausanne EPFL, Lausanne, Switzerland
| | - Jacob Hernandez Sanchez
- Learning Algorithms and Systems Laboratory (LASA), Swiss Federal School of Technology in Lausanne EPFL, Lausanne, Switzerland
- Biorobotics Laboratory (BIOROB), Swiss Federal School of Technology in Lausanne EPFL, Lausanne, Switzerland
| | - Mohamed Bouri
- Biorobotics Laboratory (BIOROB), Swiss Federal School of Technology in Lausanne EPFL, Lausanne, Switzerland
- Translational Neural Engineering Laboratory (TNE), Swiss Federal Institute of Technology (EPFL), Geneva, Switzerland
| | - Aude Billard
- Learning Algorithms and Systems Laboratory (LASA), Swiss Federal School of Technology in Lausanne EPFL, Lausanne, Switzerland
| |
|
19
|
Abstract
The slogan “robots will pervade our environment” has become a reality. Drones and ground robots are used for commercial purposes, while semi-autonomous driving systems are standard accessories to traditional cars. However, while our eyes have been riveted on dangers and accidents arising from drones falling and autonomous cars crashing, much less attention has been paid to dangers arising from the imminent arrival of robots that share the floor with pedestrians and will mix with human crowds. These robots range from semi-autonomous to autonomous mobile platforms designed to provide several kinds of service, such as assistance, patrolling, tour-guiding, delivery, and human transportation. We highlight and discuss potential sources of injury emerging from contacts between robots and pedestrians through a set of case studies. We look specifically at dangers deriving from robots moving in dense crowds. In such situations, contact will not only be unavoidable, but may be desirable to ensure that the robot moves with the flow. As an outlook toward the future, we also offer some thoughts on the psychological risks, beyond the physical hazards, arising from the robot’s appearance and behaviour. We also advocate for new policies to regulate mobile robot traffic and enforce proper end-user training.
|
20
|
|
21
|
Abstract
A seamless interaction requires two robotic behaviors: the leader role, where the robot rejects external perturbations and focuses on the autonomous execution of the task, and the follower role, where the robot ignores the task and complies with intentional human forces. The goal of this work is to provide (1) a unified robotic architecture to produce these two roles, and (2) a human-guidance detection algorithm to switch between the two roles. In the absence of human guidance, the robot performs its task autonomously; upon detection of such guidance, the robot passively follows the human motions. We employ dynamical systems to generate task-specific motion and admittance control to generate reactive motions in response to human guidance. This structure enables the robot to reject undesirable perturbations, track motions precisely, react to human guidance with proper compliant behavior, and re-plan its motion reactively. We provide an analytical investigation of our method in terms of tracking and compliant behavior. Finally, we evaluate our method experimentally using a 6-DoF manipulator.
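A one-dimensional sketch of this leader/follower switching, assuming a force threshold for guidance detection, a linear attractor standing in for the task dynamical system, and an admittance gain mapping human force to compliant velocity (all names, gains, and the threshold are illustrative, not the paper's):

```python
# Illustrative 1-D sketch: a linear dynamical system drives the task, and an
# admittance law converts detected human force into compliant motion.
# A force threshold switches between the leader and follower roles.

GUIDANCE_THRESHOLD = 2.0   # [N] force magnitude taken as intentional guidance
ADMITTANCE_GAIN = 0.5      # [m/s per N] compliance of the follower role
DS_GAIN = 1.0              # [1/s] attractor gain of the task dynamics
TARGET = 1.0               # [m] task attractor

def commanded_velocity(x, f_human):
    """Desired velocity: the leader tracks the task dynamical system,
    the follower passively yields to the human force."""
    if abs(f_human) > GUIDANCE_THRESHOLD:      # follower: comply
        return ADMITTANCE_GAIN * f_human
    return DS_GAIN * (TARGET - x)              # leader: reject perturbation

# Small forces are treated as perturbations; a strong push switches the role.
v_leader = commanded_velocity(0.0, 0.5)    # task motion toward the target
v_follower = commanded_velocity(0.0, 4.0)  # compliant motion with the push
```

Below the threshold the robot stays leader and drives toward the target; above it, the commanded velocity simply yields in the direction of the human force.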
|
22
|
Krausz NE, Lamotte D, Batzianoulis I, Hargrove LJ, Micera S, Billard A. Intent Prediction Based on Biomechanical Coordination of EMG and Vision-Filtered Gaze for End-Point Control of an Arm Prosthesis. IEEE Trans Neural Syst Rehabil Eng 2020; 28:1471-1480. [PMID: 32386160 DOI: 10.1109/tnsre.2020.2992885] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
We propose a novel controller for powered prosthetic arms, in which fused EMG and gaze data predict the desired end-point for a full arm prosthesis, which could drive the forward motion of individual joints. We recorded EMG, gaze, and motion-tracking data during pick-and-place trials with 7 able-bodied subjects. Subjects positioned an object above a random target on a virtual interface, each completing around 600 trials. On average, across all trials and subjects, gaze preceded EMG and followed a repeatable pattern that allowed for prediction. A computer vision algorithm was used to extract the initial and target fixations and estimate the target position in 2D space. Two support vector regression models (SVRs) were trained with EMG data to predict the x- and y-position of the hand; results showed that the y-estimate was significantly better than the x-estimate. The EMG and gaze predictions were fused using a Kalman filter-based approach, and the positional error from using EMG alone was significantly higher than from the fusion of EMG and gaze. The final target position root mean squared error (RMSE) decreased from 9.28 cm with an EMG-only prediction to 6.94 cm when using gaze-EMG fusion. This error also increased significantly when some or all arm muscle signals were removed. However, using fused EMG and gaze, there was no significant difference between predictors that included all muscles and those that used only a subset of muscles.
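The Kalman filter-based fusion step can be illustrated with a static precision-weighted update of two Gaussian position estimates; the positions and variances below are made up for illustration, not taken from the paper:

```python
# Minimal sketch of fusing two independent position estimates (an EMG-based
# prediction and a gaze-based one) with a single Kalman-style update.

def fuse(x_emg, var_emg, x_gaze, var_gaze):
    """Precision-weighted fusion of two Gaussian estimates of the target
    position; returns the fused mean and variance."""
    k = var_emg / (var_emg + var_gaze)        # Kalman gain toward the gaze cue
    x = x_emg + k * (x_gaze - x_emg)
    var = (1.0 - k) * var_emg
    return x, var

# A noisy EMG estimate (high variance) corrected by a sharper gaze fixation.
x, var = fuse(x_emg=10.0, var_emg=9.0, x_gaze=7.0, var_gaze=1.0)
```

The fused estimate moves toward the sharper gaze cue and its variance shrinks below either input, the qualitative effect behind the reported RMSE reduction from EMG-only to gaze-EMG fusion.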
|
23
|
Abstract
Dexterous manipulation is one of the primary goals in robotics. Robots with this capability could sort and package objects, chop vegetables, and fold clothes. As robots come to work side by side with humans, they must also become human-aware. Over the past decade, research has made strides toward these goals. Progress has come from advances in visual and haptic perception and in mechanics in the form of soft actuators that offer a natural compliance. Most notably, immense progress in machine learning has been leveraged to encapsulate models of uncertainty and to support improvements in adaptive and robust control. Open questions remain in terms of how to enable robots to deal with the most unpredictable agent of all, the human.
Affiliation(s)
- Aude Billard
- Learning Algorithms and Systems Laboratory, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland.
| | - Danica Kragic
- Robotics, Perception and Learning (RPL), EECS, Royal Institute for Technology (KTH), Stockholm, Sweden
| |
|
24
|
Sanchez-Matilla R, Chatzilygeroudis K, Modas A, Duarte NF, Xompero A, Frossard P, Billard A, Cavallaro A. Benchmark for Human-to-Robot Handovers of Unseen Containers With Unknown Filling. IEEE Robot Autom Lett 2020. [DOI: 10.1109/lra.2020.2969200] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
|
25
|
Chatzilygeroudis K, Fichera B, Lauzana I, Bu F, Yao K, Khadivar F, Billard A. Benchmark for Bimanual Robotic Manipulation of Semi-Deformable Objects. IEEE Robot Autom Lett 2020. [DOI: 10.1109/lra.2020.2972837] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
|
26
|
Yao K, Billard A. An inverse optimization approach to understand human acquisition of kinematic coordination in bimanual fine manipulation tasks. Biol Cybern 2020; 114:63-82. [PMID: 31907609 PMCID: PMC7062861 DOI: 10.1007/s00422-019-00814-9] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 05/10/2019] [Accepted: 12/19/2019] [Indexed: 06/10/2023]
Abstract
Tasks that require the cooperation of both hands and arms are common in human everyday life. Coordination helps to synchronize the motion of the upper limbs in space and time. In fine bimanual tasks, coordination also enables higher degrees of precision than could be obtained with a single hand. We studied the acquisition of bimanual fine manipulation skills in watchmaking tasks, which require the assembly of pieces at millimeter scale and demand years of training. We contrasted the motion kinematics of novice apprentices with those of professionals. Fifteen subjects, ten novices and five experts, participated in the study. We recorded the force applied on the watch face and the kinematics of fingers and arms. Results indicate that expert subjects place their fingers strategically on the tools to achieve higher dexterity. Compared to novices, experts also tend to align task-demanded force application with the optimal force-transmission direction of the dominant arm. To understand the cognitive processes underpinning the different coordination patterns of expert and novice subjects, we followed the optimal control theoretical framework and hypothesized that the difference in task performance is caused by changes in the central nervous system's optimality criteria. We formulated kinematic metrics to evaluate the coordination patterns and exploited an inverse optimization approach to infer the optimality criteria. We interpret the human acquisition of novel coordination patterns as an alteration in the composition of the central nervous system's optimality criteria accompanying the learning process.
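The inverse-optimization idea can be sketched on a toy problem: assume the observed behavior minimizes a weighted sum of candidate costs, then search for the weight whose forward optimum best reproduces the observation (the two candidate costs and all numbers here are hypothetical, not the paper's criteria):

```python
# Toy inverse optimization: recover the weight of a composite cost from an
# observed "behavior" u_observed.

def effort(u):
    return u * u                 # candidate cost 1: quadratic effort

def precision(u):
    return (u - 1.0) ** 2        # candidate cost 2: deviation from u = 1

def forward_optimum(w, grid):
    """Minimizer of w*effort + (1-w)*precision over a candidate grid."""
    return min(grid, key=lambda u: w * effort(u) + (1 - w) * precision(u))

def infer_weight(u_observed, grid, weights):
    """Pick the weight whose predicted optimum is closest to the observation."""
    return min(weights, key=lambda w: abs(forward_optimum(w, grid) - u_observed))

grid = [i / 100 for i in range(101)]      # candidate actions in [0, 1]
weights = [i / 20 for i in range(21)]     # candidate cost weights in [0, 1]

# Analytically, the optimum of w*u^2 + (1-w)*(u-1)^2 is u* = 1 - w,
# so an observation u = 0.75 should be explained by w = 0.25.
w_hat = infer_weight(0.75, grid, weights)
```

Changes in the inferred weight across training then stand in, very loosely, for the paper's interpretation of learning as a change in the composition of the optimality criteria.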
Affiliation(s)
- Kunpeng Yao
- Learning Algorithms and Systems Laboratory (LASA), École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland.
| | - Aude Billard
- Learning Algorithms and Systems Laboratory (LASA), École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
| |
|
27
|
Barra B, Badi M, Perich MG, Conti S, Mirrazavi Salehian SS, Moreillon F, Bogaard A, Wurth S, Kaeser M, Passeraub P, Milekovic T, Billard A, Micera S, Capogrosso M. A versatile robotic platform for the design of natural, three-dimensional reaching and grasping tasks in monkeys. J Neural Eng 2019; 17:016004. [PMID: 31597123 DOI: 10.1088/1741-2552/ab4c77] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/20/2022]
Abstract
OBJECTIVE Translational studies on motor control and neurological disorders require detailed monitoring of sensorimotor components of natural limb movements in relevant animal models. However, available experimental tools do not provide a sufficiently rich repertoire of behavioral signals. Here, we developed a robotic platform that enables the monitoring of kinematics, interaction forces, and neurophysiological signals during user-defined upper limb tasks for monkeys. APPROACH We configured the platform to position instrumented objects in a three-dimensional workspace and provide an interactive dynamic force field. MAIN RESULTS We show the relevance of our platform for fundamental and translational studies with three example applications. First, we study the kinematics of natural grasp in response to variable interaction forces. We then show simultaneous and independent encoding of kinematics and forces in single-unit intracortical recordings from sensorimotor cortical areas. Lastly, we demonstrate the relevance of our platform for developing clinically relevant brain-computer interfaces in a kinematically unconstrained motor task. SIGNIFICANCE Our versatile control structure does not depend on the specific robotic arm used and allows for the design and implementation of a variety of tasks that can support both fundamental and translational studies of motor control.
Affiliation(s)
- B Barra
- Department of Neuroscience and Movement Science, Platform of Translational Neurosciences, University of Fribourg, Fribourg, Switzerland. Co-first authors
|
28
|
|
29
|
|
30
|
Zhakypov Z, Heremans F, Billard A, Paik J. An Origami-Inspired Reconfigurable Suction Gripper for Picking Objects With Variable Shape and Size. IEEE Robot Autom Lett 2018. [DOI: 10.1109/lra.2018.2847403] [Citation(s) in RCA: 34] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
|
31
|
Cohen L, Billard A. Social babbling: The emergence of symbolic gestures and words. Neural Netw 2018; 106:194-204. [PMID: 30081346 DOI: 10.1016/j.neunet.2018.06.016] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2017] [Revised: 06/20/2018] [Accepted: 06/27/2018] [Indexed: 10/28/2022]
Abstract
Language acquisition theories classically distinguish passive language understanding from active language production. However, recent findings show that brain areas such as Broca's region are shared in language understanding and production. Furthermore, these areas are also implicated in understanding and producing goal-oriented actions. These observations call into question the passive view of language development. In this work, we propose a cognitive developmental model of symbol acquisition, coherent with an active view of language learning. For that purpose, we introduce the concept of social babbling. In this view, symbols are learned in the same way as goal-oriented actions, in the context of specific caregiver-infant interactions. We show that this model allows a virtual agent to learn both symbolic words and gestures to refer to objects while interacting with a caregiver. We validate our model by reproducing results from studies on the influence of parental responsiveness on infants' language acquisition.
Affiliation(s)
- Laura Cohen
- Learning Algorithms and Systems Laboratory, School of Engineering, EPFL, Lausanne, Switzerland.
| | - Aude Billard
- Learning Algorithms and Systems Laboratory, School of Engineering, EPFL, Lausanne, Switzerland
| |
|
32
|
Duarte NF, Rakovic M, Tasevski J, Coco MI, Billard A, Santos-Victor J. Action Anticipation: Reading the Intentions of Humans and Robots. IEEE Robot Autom Lett 2018. [DOI: 10.1109/lra.2018.2861569] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
33
|
Salehian SSM, Billard A. A Dynamical-System-Based Approach for Controlling Robotic Manipulators During Noncontact/Contact Transitions. IEEE Robot Autom Lett 2018. [DOI: 10.1109/lra.2018.2833142] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
|
34
|
Raffard S, Bortolon C, Cohen L, Khoramshahi M, Salesse RN, Billard A, Capdevielle D. Does this robot have a mind? Schizophrenia patients' mind perception toward humanoid robots. Schizophr Res 2018; 197:585-586. [PMID: 29203055 DOI: 10.1016/j.schres.2017.11.034] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/16/2017] [Revised: 11/26/2017] [Accepted: 11/27/2017] [Indexed: 11/18/2022]
Affiliation(s)
- Stéphane Raffard
- University Paul Valéry Montpellier 3, University Montpellier, Montpellier, EPSYLON EA 4556, F34000, France; University Department of Adult Psychiatry, Hôpital de la Colombière, CHRU Montpellier, Montpellier University, Montpellier, France.
| | - Catherine Bortolon
- University Paul Valéry Montpellier 3, University Montpellier, Montpellier, EPSYLON EA 4556, F34000, France; University Department of Adult Psychiatry, Hôpital de la Colombière, CHRU Montpellier, Montpellier University, Montpellier, France
| | - Laura Cohen
- Learning Algorithms and Systems Laboratory, School of Engineering, EPFL, Lausanne, Switzerland
| | - Mahdi Khoramshahi
- Learning Algorithms and Systems Laboratory, School of Engineering, EPFL, Lausanne, Switzerland
| | - Robin N Salesse
- EuroMov, Montpellier University, 700 Avenue du Pic Saint-Loup, 34090 Montpellier, France
| | - Aude Billard
- Learning Algorithms and Systems Laboratory, School of Engineering, EPFL, Lausanne, Switzerland
| | - Delphine Capdevielle
- University Department of Adult Psychiatry, Hôpital de la Colombière, CHRU Montpellier, Montpellier University, Montpellier, France; INSERM U-1061, Montpellier, France
| |
|
35
|
Shavit Y, Figueroa N, Salehian SSM, Billard A. Learning Augmented Joint-Space Task-Oriented Dynamical Systems: A Linear Parameter Varying and Synergetic Control Approach. IEEE Robot Autom Lett 2018. [DOI: 10.1109/lra.2018.2833497] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
|
36
|
Batzianoulis I, Krausz NE, Simon AM, Hargrove L, Billard A. Decoding the grasping intention from electromyography during reaching motions. J Neuroeng Rehabil 2018; 15:57. [PMID: 29940991 PMCID: PMC6020187 DOI: 10.1186/s12984-018-0396-5] [Citation(s) in RCA: 32] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/26/2017] [Accepted: 06/11/2018] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND Active upper-limb prostheses are used to restore important hand functionalities, such as grasping. In conventional approaches, a pattern recognition system is trained over a number of static grasping gestures. However, training a classifier in a static position results in lower classification accuracy when performing dynamic motions, such as reach-to-grasp. We propose an electromyography-based learning approach that decodes the grasping intention during the reaching motion, leading to a faster and more natural response of the prosthesis. METHODS AND RESULTS Eight able-bodied subjects and four individuals with transradial amputation gave informed consent and participated in our study. All subjects performed reach-to-grasp motions for five grasp types, while the electromyographic (EMG) activity and the extension of the arm were recorded. We separated the reach-to-grasp motion into three phases with respect to the extension of the arm. A multivariate analysis of variance (MANOVA) on the muscular activity revealed significant differences among the motion phases. Additionally, we examined the classification performance on these phases. We compared the performance of three different pattern recognition methods: Linear Discriminant Analysis (LDA), Support Vector Machines (SVM) with linear and non-linear kernels, and an Echo State Network (ESN) approach. Our off-line analysis shows that it is possible to achieve classification performance above 80% before the end of the motion with three grasp types. An on-line evaluation with an upper-limb prosthesis shows that including the reaching motion in the training of the classifier considerably improves classification accuracy and enables the detection of grasp intention early in the reaching motion. CONCLUSIONS This method offers a more natural and intuitive control of prosthetic devices, as it enables controlling grasp closure in synergy with the reaching motion.
This work contributes to decreasing delays between the user's intention and the device's response and improves the coordination of the device with the motion of the arm.
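In the spirit of the phase-wise decoding described above, a minimal nearest-centroid classifier over EMG feature vectors (the two-dimensional synthetic features and grasp labels are illustrative; the paper uses LDA, SVM, and ESN classifiers on real EMG):

```python
# Toy nearest-centroid grasp classifier for one motion phase.

def centroid(samples):
    """Mean feature vector of a list of equal-length feature vectors."""
    n = len(samples)
    return [sum(s[d] for s in samples) / n for d in range(len(samples[0]))]

def classify(x, centroids):
    """Return the grasp label whose centroid is nearest in Euclidean distance."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist2(x, centroids[label]))

# Synthetic training windows for two grasp types recorded in one phase.
train = {
    "power": [[0.9, 0.1], [1.1, 0.2]],
    "pinch": [[0.1, 0.9], [0.2, 1.1]],
}
centroids = {label: centroid(samples) for label, samples in train.items()}

label = classify([0.95, 0.15], centroids)   # near the "power" centroid
```

Training separate centroids per motion phase is one simple way to exploit the phase differences the MANOVA revealed.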
Affiliation(s)
- Iason Batzianoulis
- Learning Algorithms and Systems Laboratory (LASA), School of Engineering, École Polytechnique Fédérale de Lausanne (EPFL), Route Cantonale, Lausanne, CH-1015 Switzerland
| | - Nili E. Krausz
- Center for Bionic Medicine, Shirley Ryan AbilityLab, E Erie St., Chicago, 60611 IL USA
- Dept. of Physical Medicine and Rehabilitation, Northwestern University, N Lake Shore, Chicago, 60611 IL USA
| | - Ann M. Simon
- Center for Bionic Medicine, Shirley Ryan AbilityLab, E Erie St., Chicago, 60611 IL USA
- Dept. of Physical Medicine and Rehabilitation, Northwestern University, N Lake Shore, Chicago, 60611 IL USA
| | - Levi Hargrove
- Center for Bionic Medicine, Shirley Ryan AbilityLab, E Erie St., Chicago, 60611 IL USA
- Dept. of Physical Medicine and Rehabilitation, Northwestern University, N Lake Shore, Chicago, 60611 IL USA
- Dept. of Biomedical Engineering, Northwestern University, Evanston, 60208 IL USA
| | - Aude Billard
- Learning Algorithms and Systems Laboratory (LASA), School of Engineering, École Polytechnique Fédérale de Lausanne (EPFL), Route Cantonale, Lausanne, CH-1015 Switzerland
| |
|
37
|
Abstract
Coordination is essential in the design of dynamic control strategies for multi-arm robotic systems. Given the complexity of the task and dexterity of the system, coordination constraints can emerge from different levels of planning and control. Primarily, one must consider task-space coordination, where the robots must coordinate with each other, with an object or with a target of interest. Coordination is also necessary in joint space, as the robots should avoid self-collisions at any time. We provide such joint-space coordination by introducing a centralized inverse kinematics (IK) solver under self-collision avoidance constraints, formulated as a quadratic program and solved in real-time. The space of free motion is modeled through a sparse non-linear kernel classification method in a data-driven learning approach. Moreover, we provide multi-arm task-space coordination for both synchronous or asynchronous behaviors. We define a synchronous behavior as that in which the robot arms must coordinate with each other and with a moving object such that they reach for it in synchrony. In contrast, an asynchronous behavior allows for each robot to perform independent point-to-point reaching motions. To transition smoothly from asynchronous to synchronous behaviors and vice versa, we introduce the notion of synchronization allocation. We show how this allocation can be controlled through an external variable, such as the location of the object to be manipulated. Both behaviors and their synchronization allocation are encoded in a single dynamical system. We validate our framework on a dual-arm robotic system and demonstrate that the robots can re-synchronize and adapt the motion of each arm while avoiding self-collision within milliseconds. The speed of control is exploited to intercept fast moving objects whose motion cannot be predicted accurately.
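The synchronization allocation can be sketched for a single arm in one dimension as a convex combination of two first-order dynamical systems, governed by an allocation variable alpha (the gains and targets are illustrative, not the paper's formulation):

```python
# Sketch of synchronization allocation: blend a synchronous velocity (reach
# the shared object together) with an asynchronous one (independent
# point-to-point motion) through an allocation variable alpha in [0, 1].

def blended_velocity(x, own_target, object_pos, alpha, gain=1.0):
    """Single-arm desired velocity as a convex combination of two
    first-order dynamical systems."""
    v_async = gain * (own_target - x)     # independent reaching behavior
    v_sync = gain * (object_pos - x)      # coordinate on the moving object
    return alpha * v_sync + (1.0 - alpha) * v_async

# alpha = 0: purely asynchronous; alpha = 1: purely synchronous.
v0 = blended_velocity(0.0, own_target=1.0, object_pos=3.0, alpha=0.0)
v1 = blended_velocity(0.0, own_target=1.0, object_pos=3.0, alpha=1.0)
```

Driving alpha from an external variable, such as the object's location, lets the arms transition smoothly between the two behaviors, as the abstract describes.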
Affiliation(s)
- Seyed Sina Mirrazavi Salehian
- Learning Algorithms and Systems Laboratory (LASA), Swiss Federal Institute of Technology, Lausanne (EPFL), Lausanne, Switzerland
| | - Nadia Figueroa
- Learning Algorithms and Systems Laboratory (LASA), Swiss Federal Institute of Technology, Lausanne (EPFL), Lausanne, Switzerland
| | - Aude Billard
- Learning Algorithms and Systems Laboratory (LASA), Swiss Federal Institute of Technology, Lausanne (EPFL), Lausanne, Switzerland
| |
|
38
|
Cohen L, Khoramshahi M, Salesse RN, Bortolon C, Słowiński P, Zhai C, Tsaneva-Atanasova K, Di Bernardo M, Capdevielle D, Marin L, Schmidt RC, Bardy BG, Billard A, Raffard S. Influence of facial feedback during a cooperative human-robot task in schizophrenia. Sci Rep 2017; 7:15023. [PMID: 29101325 PMCID: PMC5670132 DOI: 10.1038/s41598-017-14773-3] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/23/2017] [Accepted: 10/05/2017] [Indexed: 01/28/2023] Open
Abstract
Rapid progress in the area of humanoid robots offers tremendous possibilities for investigating and improving social competences in people with social deficits, but remains unexplored in schizophrenia. In this study, we examined the influence of social feedback elicited by a humanoid robot on motor coordination during a human-robot interaction. Twenty-two schizophrenia patients and twenty-two matched healthy controls underwent a collaborative motor synchrony task with the iCub humanoid robot. Results revealed that positive social feedback had a facilitatory effect on motor coordination in the control participants compared to non-social positive feedback. This facilitatory effect was not present in schizophrenia patients, whose social-motor coordination was similarly impaired in the social and non-social feedback conditions. Furthermore, patients' cognitive flexibility impairment and antipsychotic dosing were negatively correlated with their ability to synchronize hand movements with iCub. Overall, our findings reveal that patients have marked difficulty exploiting the facial social cues elicited by a humanoid robot to modulate their motor coordination during human-robot interaction, partly accounted for by cognitive deficits and medication. This study opens new perspectives for the comprehension of social deficits in this mental disorder.
Affiliation(s)
- Laura Cohen
- Learning Algorithms and Systems Laboratory, School of Engineering, EPFL, Lausanne, Switzerland
| | - Mahdi Khoramshahi
- Learning Algorithms and Systems Laboratory, School of Engineering, EPFL, Lausanne, Switzerland
| | | | - Catherine Bortolon
- University Department of Adult Psychiatry, CHU, Montpellier, France
- Laboratory Epsylon, EA 4556, University Montpellier 3 Paul Valery, Montpellier, France
| | - Piotr Słowiński
- Department of Mathematics, College of Engineering, Mathematics and Physical Sciences, University of Exeter, Exeter, United Kingdom
| | - Chao Zhai
- Department of Engineering Mathematics, University of Bristol, Bristol, United Kingdom
| | - Krasimira Tsaneva-Atanasova
- Department of Mathematics, College of Engineering, Mathematics and Physical Sciences, University of Exeter, Exeter, United Kingdom
| | - Mario Di Bernardo
- Department of Engineering Mathematics, University of Bristol, Bristol, United Kingdom
| | | | - Ludovic Marin
- EuroMov, Montpellier University, Montpellier, France
| | - Richard C Schmidt
- Psychology Department, College of the Holy Cross, Worcester, MA, USA
| | - Benoit G Bardy
- EuroMov, Montpellier University, Montpellier, France
- Institut Universitaire de France, Paris, France
| | - Aude Billard
- Learning Algorithms and Systems Laboratory, School of Engineering, EPFL, Lausanne, Switzerland
| | - Stéphane Raffard
- University Department of Adult Psychiatry, CHU, Montpellier, France.
- Laboratory Epsylon, EA 4556, University Montpellier 3 Paul Valery, Montpellier, France.
| |
|
39
|
de Chambrier G, Billard A. Non-Parametric Bayesian State Space Estimator for Negative Information. Front Robot AI 2017. [DOI: 10.3389/frobt.2017.00040] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
|
40
|
|
41
|
Rey J, Kronander K, Farshidian F, Buchli J, Billard A. Learning motions from demonstrations and rewards with time-invariant dynamical systems based policies. Auton Robots 2017. [DOI: 10.1007/s10514-017-9636-y] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
|
42
|
Słowiński P, Alderisio F, Zhai C, Shen Y, Tino P, Bortolon C, Capdevielle D, Cohen L, Khoramshahi M, Billard A, Salesse R, Gueugnon M, Marin L, Bardy BG, di Bernardo M, Raffard S, Tsaneva-Atanasova K. Unravelling socio-motor biomarkers in schizophrenia. NPJ Schizophr 2017; 3:8. [PMID: 28560254 PMCID: PMC5441525 DOI: 10.1038/s41537-016-0009-x] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/06/2016] [Revised: 12/06/2016] [Accepted: 12/15/2016] [Indexed: 12/24/2022]
Abstract
We present novel, low-cost and non-invasive potential diagnostic biomarkers of schizophrenia. They are based on the 'mirror-game', a coordination task in which two partners are asked to mimic each other's hand movements. In particular, we use the patient's solo movement, recorded in the absence of a partner, and motion recorded during interaction with an artificial agent, a computer avatar or a humanoid robot. In order to discriminate between the patients and controls, we employ statistical learning techniques, which we apply to nonverbal synchrony and neuromotor features derived from the participants' movement data. The proposed classifier has 93% accuracy and 100% specificity. Our results provide evidence that statistical learning techniques, nonverbal movement coordination and neuromotor characteristics could form the foundation of decision support tools aiding clinicians in cases of diagnostic uncertainty.
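The reported 93% accuracy and 100% specificity follow the standard confusion-matrix definitions; a minimal sketch with made-up labels and predictions (not the study's data):

```python
# Standard binary classification metrics from a confusion matrix.

def confusion_counts(y_true, y_pred, positive="patient"):
    """Count true/false positives and negatives for a binary label."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    return tp, tn, fp, fn

def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)

def specificity(tp, tn, fp, fn):
    return tn / (tn + fp)   # fraction of controls correctly classified

# Hypothetical predictions: one patient missed, all controls correct.
y_true = ["patient"] * 5 + ["control"] * 5
y_pred = ["patient"] * 4 + ["control"] + ["control"] * 5

acc = accuracy(*confusion_counts(y_true, y_pred))
spec = specificity(*confusion_counts(y_true, y_pred))
```

High specificity matters here because a diagnostic support tool should rarely flag healthy controls as patients.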
Affiliation(s)
- Piotr Słowiński: Department of Mathematics, College of Engineering, Mathematics and Physical Sciences, University of Exeter, Exeter EX4 4QF, UK
- Francesco Alderisio: Department of Engineering Mathematics, University of Bristol, Merchant Venturers' Building, Bristol BS8 1UB, UK
- Chao Zhai: Department of Engineering Mathematics, University of Bristol, Merchant Venturers' Building, Bristol BS8 1UB, UK
- Yuan Shen: School of Computer Science, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK
- Peter Tino: School of Computer Science, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK
- Catherine Bortolon: University Department of Adult Psychiatry, Hôpital de la Colombière, CHU Montpellier, Montpellier-1 University, Montpellier, France
- Delphine Capdevielle: University Department of Adult Psychiatry, Hôpital de la Colombière, CHU Montpellier, Montpellier-1 University, Montpellier, France; INSERM U-1061, Montpellier, France
- Laura Cohen: LASA Laboratory, School of Engineering, Ecole Polytechnique Federale de Lausanne (EPFL), Station 9, 1015 Lausanne, Switzerland
- Mahdi Khoramshahi: LASA Laboratory, School of Engineering, Ecole Polytechnique Federale de Lausanne (EPFL), Station 9, 1015 Lausanne, Switzerland
- Aude Billard: LASA Laboratory, School of Engineering, Ecole Polytechnique Federale de Lausanne (EPFL), Station 9, 1015 Lausanne, Switzerland
- Robin Salesse: EuroMov, Montpellier University, 700 Avenue du Pic Saint-Loup, 34090 Montpellier, France
- Mathieu Gueugnon: EuroMov, Montpellier University, 700 Avenue du Pic Saint-Loup, 34090 Montpellier, France
- Ludovic Marin: EuroMov, Montpellier University, 700 Avenue du Pic Saint-Loup, 34090 Montpellier, France
- Benoit G. Bardy: EuroMov, Montpellier University, 700 Avenue du Pic Saint-Loup, 34090 Montpellier, France; Institut Universitaire de France, Paris, France
- Mario di Bernardo: Department of Engineering Mathematics, University of Bristol, Merchant Venturers' Building, Bristol BS8 1UB, UK; Department of Electrical Engineering and Information Technology, University of Naples Federico II, 80125 Naples, Italy
- Stephane Raffard: University Department of Adult Psychiatry, Hôpital de la Colombière, CHU Montpellier, Montpellier-1 University, Montpellier, France; Epsylon Laboratory Dynamic of Human Abilities & Health Behaviors, Montpellier-3 University, Montpellier, France
- Krasimira Tsaneva-Atanasova: Department of Mathematics, College of Engineering, Mathematics and Physical Sciences, University of Exeter, Exeter EX4 4QF, UK; EPSRC Centre for Predictive Modelling in Healthcare, University of Exeter, Exeter EX4 4QJ, UK
43
Erden MS, Billard A. Robotic Assistance by Impedance Compensation for Hand Movements While Manual Welding. IEEE Trans Cybern 2016; 46:2459-2472. [PMID: 26452294 DOI: 10.1109/tcyb.2015.2478656]
Abstract
In this paper, we present a robotic assistance scheme that provides impedance compensation, with stiffness, damping, and mass parameters, for hand-manipulation tasks, and we apply it to manual welding. The impedance compensation does not assume a preprogrammed hand trajectory. Rather, the human's intended hand movement is estimated in real time using a smooth Kalman filter. The movement is restricted by a compensatory virtual impedance in the directions perpendicular to the estimated direction of movement. In airbrush-painting experiments, we test three sets of impedance parameter values inspired by impedance measurements taken during manual welding. We apply the best of the tested sets to assist manual welding and perform welding experiments with professional and novice welders. We contrast three conditions: 1) welding with the robot's assistance; 2) welding with the robot while the robot is passive; and 3) welding without the robot. We demonstrate the effectiveness of the assistance through quantitative measures of both task performance and perceived user satisfaction. The performance of both novice and professional welders improves significantly with robotic assistance compared to welding with a passive robot. The assessment of user satisfaction shows that all novice and most professional welders appreciate the robotic assistance, as it suppresses tremors in the directions perpendicular to the welding movement.
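The core idea of the abstract, a spring-damper that acts only perpendicular to the estimated movement direction, can be sketched as follows. The gains and the 3-D force formulation are illustrative assumptions, not the authors' values or controller.

```python
import numpy as np

def perpendicular_impedance(pos, vel, anchor, move_dir, k=120.0, d=25.0):
    """Compensatory virtual impedance sketch: a spring-damper force that
    resists position error and velocity only in the plane perpendicular to
    the estimated movement direction, leaving motion along it unresisted.
    Gains k (N/m) and d (Ns/m) are illustrative, not from the paper."""
    u = move_dir / np.linalg.norm(move_dir)   # unit movement direction
    P = np.eye(3) - np.outer(u, u)            # projector onto perpendicular plane
    e_perp = P @ (pos - anchor)               # perpendicular position error
    v_perp = P @ vel                          # perpendicular velocity (tremor component)
    return -k * e_perp - d * v_perp           # restoring force; zero along u
```

A position error purely along the movement direction produces zero force, so intended motion is not impeded, while lateral deviations (tremor) are damped out.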
44
45
Raffard S, Bortolon C, Khoramshahi M, Salesse RN, Burca M, Marin L, Bardy BG, Billard A, Macioce V, Capdevielle D. Humanoid robots versus humans: How is emotional valence of facial expressions recognized by individuals with schizophrenia? An exploratory study. Schizophr Res 2016; 176:506-513. [PMID: 27293136 DOI: 10.1016/j.schres.2016.06.001]
Abstract
BACKGROUND The use of humanoid robots in a therapeutic role for individuals with social disorders such as autism is a newly emerging field, but it remains unexplored in schizophrenia. As the ability of robots to convey emotion appears to be of fundamental importance for human-robot interaction, we aimed to evaluate how schizophrenia patients recognize positive and negative facial emotions displayed by a humanoid robot. METHODS We included 21 schizophrenia outpatients and 17 healthy participants. In a reaction-time task, they were shown photographs of human faces and of a humanoid robot (iCub) expressing either positive or negative emotions, as well as a non-social stimulus. Patients' symptomatology, mind perception, reaction time, and number of correct answers were evaluated. RESULTS Patients and controls recognized the emotional valence of facial expressions more accurately and faster when displayed by humans than by the robot. Participants responded faster to positive than to negative human faces and, conversely, faster to negative than to positive robot faces. Importantly, participants performed worse when they perceived iCub as being capable of experiencing things (experience subscale of the mind-perception questionnaire). In schizophrenia patients, negative correlations emerged between negative symptoms and accuracy for both the robot's and the human's negative faces. CONCLUSIONS Individuals do not respond to human facial emotion and to non-anthropomorphic emotional signals in the same way. Humanoid robots have the potential to convey emotions to patients with schizophrenia, but their appearance seems to be of major importance for human-robot interaction.
Affiliation(s)
- Stéphane Raffard: Epsylon Laboratory Dynamic of Human Abilities & Health Behaviors, University of Montpellier 3, Montpellier, France; University Department of Adult Psychiatry, Hôpital de la Colombière, CHRU Montpellier, Montpellier University, Montpellier, France
- Catherine Bortolon: Epsylon Laboratory Dynamic of Human Abilities & Health Behaviors, University of Montpellier 3, Montpellier, France; University Department of Adult Psychiatry, Hôpital de la Colombière, CHRU Montpellier, Montpellier University, Montpellier, France
- Mahdi Khoramshahi: Learning Algorithms and Systems Laboratory, School of Engineering, EPFL, Lausanne, Switzerland
- Robin N Salesse: EuroMov, Montpellier University, 700 Avenue du Pic Saint-Loup, 34090 Montpellier, France
- Marianna Burca: Epsylon Laboratory Dynamic of Human Abilities & Health Behaviors, University of Montpellier 3, Montpellier, France
- Ludovic Marin: EuroMov, Montpellier University, 700 Avenue du Pic Saint-Loup, 34090 Montpellier, France
- Benoit G Bardy: EuroMov, Montpellier University, 700 Avenue du Pic Saint-Loup, 34090 Montpellier, France; Institut Universitaire de France, France
- Aude Billard: Learning Algorithms and Systems Laboratory, School of Engineering, EPFL, Lausanne, Switzerland
- Valérie Macioce: Clinical & Epidemiological Research Unit, CHU, Montpellier, France
- Delphine Capdevielle: University Department of Adult Psychiatry, Hôpital de la Colombière, CHRU Montpellier, Montpellier University, Montpellier, France; INSERM U-1061, Montpellier, France
46
Hang K, Li M, Stork JA, Bekiroglu Y, Pokorny FT, Billard A, Kragic D. Hierarchical Fingertip Space: A Unified Framework for Grasp Planning and In-Hand Grasp Adaptation. IEEE Trans Robot 2016. [DOI: 10.1109/tro.2016.2588879]
47
Khoramshahi M, Shukla A, Raffard S, Bardy BG, Billard A. Role of Gaze Cues in Interpersonal Motor Coordination: Towards Higher Affiliation in Human-Robot Interaction. PLoS One 2016; 11:e0156874. [PMID: 27281341 PMCID: PMC4900607 DOI: 10.1371/journal.pone.0156874]
Abstract
Background The ability to follow one another's gaze plays an important role in our social cognition, especially when we perform tasks together synchronously. We investigate how gaze cues can improve performance in a simple coordination task (the mirror game), in which two players mirror each other's hand motions. In this game, each player is either a leader or a follower. To study the effect of gaze in a systematic manner, the leader's role is played by a robotic avatar. We contrast two conditions, in which the avatar either does or does not provide explicit gaze cues indicating the next location of its hand. Specifically, we investigated (a) whether participants are able to exploit these gaze cues to improve their coordination, (b) how gaze cues affect action prediction and temporal coordination, and (c) whether introducing active gaze behavior makes avatars more realistic and human-like from the user's point of view. Methodology/Principal Findings 43 subjects each played 8 trials of the mirror game under the two conditions (with and without gaze cues). In this within-subject study, the order of the conditions was randomized across participants, and the avatar's realism was assessed with a post-hoc questionnaire. When gaze cues were provided, a quantitative assessment of synchrony between participants and the avatar revealed a significant improvement in subject reaction time (RT). This confirms our hypothesis that gaze cues improve the follower's ability to predict the avatar's action. An analysis of the frequency pattern of the two players' hand movements reveals that gaze cues improve the overall temporal coordination between the two players. Finally, the subjective evaluations from the questionnaires reveal that, in the presence of gaze cues, participants found the avatar not only more human-like and realistic, but also easier to interact with.
Conclusion/Significance This work confirms that people can exploit gaze cues to predict another person's movements and to better coordinate their motions with their partners, even when the partner is a computer-animated avatar. Moreover, this study contributes further evidence that implementing biological features, here task-relevant gaze cues, enables a humanoid robotic avatar to appear more human-like and thus increases the user's sense of affiliation.
Affiliation(s)
- Mahdi Khoramshahi: Learning Algorithms and Systems Laboratory, School of Engineering, EPFL, Lausanne, Switzerland
- Ashwini Shukla: Learning Algorithms and Systems Laboratory, School of Engineering, EPFL, Lausanne, Switzerland
- Stéphane Raffard: University Department of Adult Psychiatry, CHRU, & Laboratory Epsylon, EA 4556, Montpellier, France
- Benoît G. Bardy: Movement to Health Laboratory, EuroMov, Montpellier-1 University, Montpellier, France; Institut Universitaire de France, Paris, France
- Aude Billard: Learning Algorithms and Systems Laboratory, School of Engineering, EPFL, Lausanne, Switzerland
48
49
El-Khoury S, Batzianoulis I, Antuvan CW, Contu S, Masia L, Micera S, Billard A. EMG-based learning approach for estimating wrist motion. Annu Int Conf IEEE Eng Med Biol Soc 2016; 2015:6732-5. [PMID: 26737838 DOI: 10.1109/embc.2015.7319938]
Abstract
This paper proposes an EMG-based learning approach for estimating the displacement of the human wrist along two axes (abduction/adduction and flexion/extension) in real time. The algorithm extracts features from EMG electrodes on the upper arm and forearm and uses Support Vector Regression to estimate the intended displacement of the wrist. Using data recorded with the arm outstretched in various locations in space, we train the algorithm to allow robust prediction even when the subject moves his or her arm across several positions in space. The proposed approach was tested on five healthy subjects and achieved an R² index of 63.6% for generalization across different arm positions and wrist joint angles.
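The regression setup described above can be sketched as follows: windowed EMG is reduced to per-channel features, and a Support Vector Regression model maps those features to one wrist axis. The RMS feature choice, channel count, hyperparameters, and synthetic data are all assumptions for illustration, not the authors' pipeline.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n_windows, n_channels, win_len = 200, 8, 100

# Synthetic stand-in for windowed raw EMG: (windows, channels, samples).
emg = rng.normal(size=(n_windows, n_channels, win_len))

# One common EMG feature: per-channel RMS over each window (an assumption here).
rms = np.sqrt((emg ** 2).mean(axis=2))                 # shape: (n_windows, n_channels)

# Synthetic target for one wrist axis (e.g., flexion/extension displacement).
wrist_flexion = rms @ rng.normal(size=n_channels)

# One SVR per wrist axis; a second identical model would handle abduction/adduction.
model = SVR(kernel="rbf", C=10.0)
model.fit(rms, wrist_flexion)
pred = model.predict(rms)                              # real-time use: one window at a time
```

Training on data recorded across several arm positions, as the abstract describes, would simply mean pooling windows from those positions into the same feature matrix before fitting.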
50