1. Lakhnati Y, Pascher M, Gerken J. Exploring a GPT-based large language model for variable autonomy in a VR-based human-robot teaming simulation. Front Robot AI 2024; 11:1347538. PMID: 38633059; PMCID: PMC11021771; DOI: 10.3389/frobt.2024.1347538.
Abstract
In a rapidly evolving digital landscape, autonomous tools and robots are becoming commonplace. Recognizing the significance of this development, this paper explores the integration of Large Language Models (LLMs) such as the Generative Pre-trained Transformer (GPT) into human-robot teaming environments to facilitate variable autonomy through verbal human-robot communication. We introduce a novel simulation framework for a GPT-powered multi-robot testbed environment, based on a Unity Virtual Reality (VR) setting. The system allows users to interact with simulated robot agents through natural language, each agent powered by an individual GPT core. By means of OpenAI's function calling, we bridge the gap between unstructured natural language input and structured robot actions. A user study with 12 participants explores the effectiveness of GPT-4 and, more importantly, user strategies when given the opportunity to converse in natural language within a simulated multi-robot environment. Our findings suggest that users may have preconceived expectations about how to converse with robots and seldom try to explore the actual language and cognitive capabilities of their simulated robot collaborators. Still, those users who did explore benefited from a much more natural flow of communication and human-like back-and-forth. We provide a set of lessons learned for future research and technical implementations of similar systems.
Affiliation(s)
- Younes Lakhnati
- Inclusive Human-Robot-Interaction, TU Dortmund University, Dortmund, NW, Germany
- Max Pascher
- Inclusive Human-Robot-Interaction, TU Dortmund University, Dortmund, NW, Germany
- Human-Computer Interaction, University of Duisburg-Essen, Essen, NW, Germany
- Jens Gerken
- Inclusive Human-Robot-Interaction, TU Dortmund University, Dortmund, NW, Germany
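The function-calling bridge described in this entry can be sketched as a small dispatcher that validates a model-emitted function call against a declared tool schema before routing it to a robot action. The tool names (`move_to`, `pick_up`) and the `SimRobot` stub are illustrative assumptions, not the paper's actual interface:

```python
import json

# Hypothetical tool schema in the OpenAI function-calling style;
# the paper's real function names and parameters are not given.
TOOLS = {
    "move_to": {"required": ["x", "y"]},
    "pick_up": {"required": ["object_id"]},
}

def dispatch(call_json, robot):
    """Validate a model-emitted function call and route it to a robot stub."""
    call = json.loads(call_json)
    name, args = call["name"], call.get("arguments", {})
    if name not in TOOLS:
        raise ValueError(f"unknown function: {name}")
    missing = [k for k in TOOLS[name]["required"] if k not in args]
    if missing:
        raise ValueError(f"missing arguments: {missing}")
    # Structured call -> concrete robot action
    return getattr(robot, name)(**args)

class SimRobot:
    """Stand-in for one GPT-driven simulated robot agent."""
    def move_to(self, x, y):
        return f"moving to ({x}, {y})"
    def pick_up(self, object_id):
        return f"picking up {object_id}"
```

Validating before dispatching keeps malformed or hallucinated calls from reaching the simulated robots.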
2. Xu J, Xu L, Ji A, Cao K. Learning robotic motion with mirror therapy framework for hemiparesis rehabilitation. Inf Process Manag 2023. DOI: 10.1016/j.ipm.2022.103244.
3. Trajectory Generation and Control of a Lower Limb Exoskeleton for Gait Assistance. J Intell Robot Syst 2022. DOI: 10.1007/s10846-022-01763-5.
4. Research on control method of upper limb exoskeleton based on mixed perception model. Robotica 2022. DOI: 10.1017/s0263574722000480.
Abstract
As one of the research hotspots in the field of rehabilitation robotics, the upper limb exoskeleton robot has been widely used for rehabilitation. However, existing methods cannot comprehensively and accurately reflect the motion state of patients, which may lead to overtraining and secondary injury during rehabilitation training. In this paper, an upper limb exoskeleton control method based on a mixed perception model of motion intention and intensity is proposed, built on the laboratory's 6-degree-of-freedom upper limb rehabilitation exoskeleton. First, the kinematic information and heart rate information of patients during rehabilitation are collected, corresponding to the patients' motion intention and motion intensity, and fused to obtain the mixed perception vector. Second, a motion perception model based on a long short-term memory neural network is established to predict the upper limb motion trajectory of patients, and is compared with a back-propagation neural network to demonstrate its effectiveness. Finally, the control system is built, and both offline and online tests of the proposed control method are carried out. The experimental results show that the method achieves comprehensive perception of the patient's motion state and produces accurate trajectory predictions in real time from human motion intention and intensity. The average prediction accuracy is 95.3%, and the predicted joint-angle error is less than 5 degrees. Therefore, the control method based on the mixed perception model has good robustness and universality, providing a new approach for the active control of upper limb exoskeletons.
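The mixed perception model above feeds a fused kinematics/heart-rate vector into an LSTM. A minimal sketch of the recurrence such a network computes, as a single LSTM cell step in NumPy (the weights below are placeholders, not the trained model):

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step: x is the mixed perception vector
    (kinematics + heart rate), h/c the hidden and cell states.
    W: (4n, d) input weights, U: (4n, n) recurrent weights, b: (4n,) bias."""
    n = h.shape[0]
    z = W @ x + U @ h + b              # stacked gate pre-activations
    i = 1 / (1 + np.exp(-z[:n]))       # input gate
    f = 1 / (1 + np.exp(-z[n:2*n]))    # forget gate
    o = 1 / (1 + np.exp(-z[2*n:3*n]))  # output gate
    g = np.tanh(z[3*n:])               # candidate cell state
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)         # h feeds the joint-angle readout
    return h_new, c_new
```

A linear readout on `h` would then map the hidden state to the predicted joint angles.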
5. Hu S, Mendonca R, Johnson MJ, Kuchenbecker KJ. Robotics for Occupational Therapy: Learning Upper-Limb Exercises From Demonstrations. IEEE Robot Autom Lett 2021. DOI: 10.1109/lra.2021.3098945.
6.
Abstract
Over the last two decades, several deployments of robots for in-house assistance of older adults have been trialled. However, these solutions are mostly prototypes and remain unused in real-life scenarios. In this work, we review the historical and current landscape of the field to understand why robots have yet to succeed as personal assistants in daily life. Our analysis focuses on two complementary aspects: the capabilities of the physical platform and the logic of the deployment. The former shows regularities in hardware configurations and functionalities, leading to the definition of a set of six application-level capabilities (exploration, identification, remote control, communication, manipulation, and digital situatedness). The latter focuses on the impact of robots on the daily life of users and categorises robot deployments for healthcare interventions into three types of services: support, mitigation, and response. Our investigation reveals that the value of healthcare interventions is limited by a stagnation of functionalities and a disconnection between the robotic platform and the design of the intervention. To address this issue, we propose a novel co-design toolkit, which uses an ecological framework for robot interventions in the healthcare domain. Our approach connects robot capabilities with known geriatric factors to create a holistic view encompassing both the physical platform and the logic of the deployment. As a case-study-based validation, we discuss the use of the toolkit in the pre-design of the robotic platform for a pilot intervention, part of the EU large-scale pilot of the EU H2020 GATEKEEPER project.
7. Hosseini SR, Taheri A, Alemi M, Meghdari A. One-shot Learning from Demonstration Approach Toward a Reciprocal Sign Language-based HRI. Int J Soc Robot 2021:1-13. PMID: 34394771; PMCID: PMC8352758; DOI: 10.1007/s12369-021-00818-1.
Abstract
This paper addresses the lack of proper Learning from Demonstration (LfD) architectures for Sign Language-based Human-Robot Interactions and aims to make them more extensible. It proposes and implements a Learning from Demonstration structure for teaching new Iranian Sign Language signs to a teacher-assistant social robot, RASA. This LfD architecture utilizes one-shot learning techniques and a Convolutional Neural Network to learn to recognize and imitate a sign after seeing its demonstration (using a data glove) just once. Despite using a small, low-diversity data set (~500 signs in 16 categories), the recognition module reached a promising 4-way accuracy of 70% on the test data and showed good potential for increasing the extensibility of the sign vocabulary in sign language-based human-robot interactions. The extensibility and promising results of the one-shot Learning from Demonstration technique are the main contributions of applying such machine learning algorithms to social Human-Robot Interaction.
Affiliation(s)
- Alireza Taheri
- Social and Cognitive Robotics Lab, Sharif University of Technology, Tehran, Iran
- Minoo Alemi
- Social and Cognitive Robotics Lab, Sharif University of Technology, Tehran, Iran
- Faculty of Humanities, Islamic Azad University, West Tehran Branch, Tehran, Iran
- Ali Meghdari
- Social and Cognitive Robotics Lab, Sharif University of Technology, Tehran, Iran
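One-shot recognition of this kind typically embeds the query sign and compares it against a single stored exemplar per class. A minimal nearest-neighbour sketch, with an identity embedding standing in for the paper's CNN feature extractor (an assumption, not the actual model):

```python
import numpy as np

def classify_one_shot(query, support, embed=lambda x: x):
    """Nearest-neighbour one-shot classification: `support` maps each
    sign label to a single demonstrated exemplar; `embed` stands in
    for a learned feature extractor (identity here)."""
    q = embed(np.asarray(query, dtype=float))
    best, best_d = None, np.inf
    for label, exemplar in support.items():
        d = np.linalg.norm(q - embed(np.asarray(exemplar, dtype=float)))
        if d < best_d:
            best, best_d = label, d
    return best
```

Adding a new sign then only requires storing one demonstrated exemplar, which is what makes the vocabulary extensible.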
8. Shared control methodology based on head positioning and vector fields for people with quadriplegia. Robotica 2021. DOI: 10.1017/s0263574721000606.
Abstract
Mobile robotic systems are used in a wide range of applications. In the assistive field especially, they can enhance the mobility of elderly and disabled people. Modern robotic technologies have been implemented in wheelchairs to give them intelligence; by equipping wheelchairs with intelligent algorithms, controllers, and sensors, it is possible to share wheelchair control between the user and the autonomous system. The present research proposes a methodology for intelligent wheelchairs based on head movements and vector fields. In this work, the user indicates where to go, and the system performs obstacle avoidance and planning. The focus is on developing an assistive technology for people with quadriplegia who retain partial movements, such as of the shoulder and neck musculature. The developed system uses shared velocity control. It employs a depth camera to recognize obstacles in the environment and an inertial measurement unit (IMU) to recognize the desired movement pattern by measuring the user's head inclination. The proposed methodology computes a repulsive vector field and works to increase maneuverability and safety; thus, global localization and mapping are unnecessary. The results were evaluated through simulated models and practical tests using a Pioneer P3-DX differential robot to show the system's applicability.
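The repulsive-vector-field shared control described above can be sketched as blending the user's head-derived velocity command with repulsion from nearby obstacle points. Parameter names and the repulsion profile are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def shared_velocity(head_cmd, obstacles, robot_pos, gain=1.0, radius=1.0):
    """Blend the user's head-tilt velocity command with a repulsive
    vector field summed over obstacle points within `radius`."""
    v = np.asarray(head_cmd, dtype=float).copy()
    for obs in obstacles:
        d_vec = robot_pos - np.asarray(obs, dtype=float)
        d = np.linalg.norm(d_vec)
        if 1e-9 < d < radius:
            # repulsion grows as the obstacle gets closer
            v += gain * (1.0 / d - 1.0 / radius) * d_vec / d
    return v
```

Because the repulsion is computed from the depth camera's local obstacle points, no global map is needed, matching the abstract's claim.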
9. Wu Q, Chen Y. Development of an Intention-Based Adaptive Neural Cooperative Control Strategy for Upper-Limb Robotic Rehabilitation. IEEE Robot Autom Lett 2021. DOI: 10.1109/lra.2020.3043197.
10. Algorithm to Generate Trajectories in a Robotic Arm Using an LCD Touch Screen to Help Physically Disabled People. Electronics 2021. DOI: 10.3390/electronics10020104.
Abstract
In the last two decades, robotics has attracted considerable attention from the biomedical sector as a means of helping physically disabled people in their daily lives. Accordingly, research on applying robotics to the control of anthropomorphic robotic arms for assistance and rehabilitation has increased considerably. In this context, robotic control is one of the most important problems and is the main part of trajectory planning and motion control. The principal tool for robotic control is inverse kinematics, because it provides the angles of the robotic arm joints. However, the algorithms presented by several authors have disadvantages: the trajectory calculation needs an optimization process, which implies more computation to generate an optimized trajectory. Moreover, the solutions presented imply devices with which people remain dependent or require help from others to operate them. This article proposes an algorithm to calculate an accurate trajectory at any time of interest using an LCD touch screen: the touch point gives the end-point of the gripper, the inverse kinematics is calculated from it, and the trajectory is generated using a novel distribution function that provides a fast and simple way to obtain the trajectory plan. The obtained results show improvements in generating a safe and fast trajectory for an anthropomorphic robotic arm; using an LCD touch screen allowed short trajectories to be calculated with minimal finger movements.
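The paper's distribution function is not reproduced here, but the inverse-kinematics step it builds on can be illustrated with the standard closed-form solution for a planar 2-link arm (a simplification of the anthropomorphic arm):

```python
import math

def ik_2link(x, y, l1, l2):
    """Closed-form inverse kinematics of a planar 2-link arm:
    returns the shoulder/elbow angles (elbow-down) reaching (x, y)."""
    c2 = (x*x + y*y - l1*l1 - l2*l2) / (2*l1*l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    q2 = math.acos(c2)
    q1 = math.atan2(y, x) - math.atan2(l2*math.sin(q2), l1 + l2*math.cos(q2))
    return q1, q2

def fk_2link(q1, q2, l1, l2):
    """Forward kinematics, used to verify the IK solution."""
    x = l1*math.cos(q1) + l2*math.cos(q1 + q2)
    y = l1*math.sin(q1) + l2*math.sin(q1 + q2)
    return x, y
```

A touch point on the screen would supply `(x, y)`, and the joint angles follow directly with no iterative optimization.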
11. Abu-Dakka FJ, Valera A, Escalera JA, Abderrahim M, Page A, Mata V. Passive Exercise Adaptation for Ankle Rehabilitation Based on Learning Control Framework. Sensors 2020; 20:6215. PMID: 33142669; PMCID: PMC7662251; DOI: 10.3390/s20216215.
Abstract
Ankle injuries are among the most common injuries in sport and daily life, and for recovery it is important for patients to perform rehabilitation exercises. These exercises are usually done under a therapist's guidance to help strengthen the patient's ankle joint and restore its range of motion. To share the load with therapists so that they can assist more patients, and to provide an efficient and safe way for patients to perform ankle rehabilitation exercises, we propose a framework that integrates learning techniques with a 3-PRS parallel robot, acting together as an ankle rehabilitation device. In this paper, we propose passive rehabilitation exercises for dorsiflexion/plantar flexion and inversion/eversion ankle movements. The therapist is needed in the first stage to design the exercise with the patient by teaching the robot intuitively through learning from demonstration. We then propose a learning control scheme based on dynamic movement primitives and iterative learning control, which takes the designed exercise trajectory as a demonstration (an input), together with the recorded forces, in order to reproduce the exercise with the patient for a number of repetitions defined by the therapist. During execution, our approach monitors the sensed forces and adapts the trajectory by adding offsets that reduce its range without altering the shape of the original trajectory, thereby reducing the measured forces. After a predefined number of repetitions, the algorithm restores the range gradually, until the patient is able to perform the originally designed exercise. We validate the proposed framework with both real experiments and simulation, using a Simulink model of the rehabilitation parallel robot developed in our lab.
Affiliation(s)
- Fares J. Abu-Dakka
- Intelligent Robotics Group, Department of Electrical Engineering and Automation (EEA), Aalto University, 02150 Espoo, Finland
- Angel Valera
- Instituto Universitario de Automática e Informática Industrial (ai2), Universitat Politècnica de València, 46022 Valencia, Spain
- Juan A. Escalera
- Instituto Nacional de Técnica Aeroespacial (INTA), 28330 San Martín de la Vega, Spain
- Mohamed Abderrahim
- Department of Systems Engineering and Automation, Carlos III University of Madrid, 28911 Leganés, Spain
- Alvaro Page
- Instituto Universitario de Ingeniería Mecánica y Biomecánica, Universitat Politècnica de València, 46022 Valencia, Spain
- Vicente Mata
- Departamento de Ingeniería Mecánica y de Materiales, Universitat Politècnica de València, 46022 Valencia, Spain
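The force-triggered range reduction described above can be sketched as a single adaptation pass: samples where the measured force exceeds a threshold are pulled toward a neutral position, leaving the rest of the demonstrated trajectory intact. This is a simplified reading of the scheme, not the authors' implementation:

```python
import numpy as np

def adapt_exercise(trajectory, forces, f_max, step=0.1):
    """One adaptation pass: where the measured force exceeds f_max,
    shrink the excursion toward the neutral (mean) position by `step`,
    leaving the rest of the demonstration unchanged."""
    traj = np.asarray(trajectory, dtype=float)
    neutral = traj.mean()
    out = traj.copy()
    over = np.asarray(forces) > f_max
    out[over] = neutral + (1.0 - step) * (traj[over] - neutral)
    return out
```

Repeating the pass over successive repetitions shrinks the range while forces stay high; the restoration phase would run the same update with the sign of `step` reversed.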
12. Fong J, Ocampo R, Gross DP, Tavakoli M. Intelligent Robotics Incorporating Machine Learning Algorithms for Improving Functional Capacity Evaluation and Occupational Rehabilitation. J Occup Rehabil 2020; 30:362-370. PMID: 32253595; DOI: 10.1007/s10926-020-09888-w.
Abstract
Introduction: Occupational rehabilitation often involves functional capacity evaluations (FCE) that use simulated work tasks to assess work ability. Currently, no single, streamlined solution exists to simulate all, or even a large number of, standard work tasks. Such a system would improve FCE and functional rehabilitation by simulating reaching maneuvers and more dexterous functional tasks typical of workplace activities. This paper reviews efforts to develop robotic FCE solutions that incorporate machine learning algorithms.
Methods: We reviewed the literature on rehabilitation robotics, with an emphasis on novel techniques incorporating robotics and machine learning into FCE.
Results: Rehabilitation robotics aims to improve the assessment and rehabilitation of injured workers by providing methods for easily simulating workplace tasks using intelligent robotic systems. Machine learning-based approaches combine the benefits of robotic systems with the expertise and experience of human therapists. These innovations have the potential to improve the quantification of function and to learn the haptic interactions provided by therapists to assist patients during assessment and rehabilitation. This is done by allowing a robot to learn, from a therapist's motions ("demonstrations"), what the desired workplace activity ("task") is and how to recreate it for a worker with an injury ("patient"). Through telerehabilitation and internet connectivity, these robotic assessment techniques can be used over a distance to reach rural and remote locations.
Conclusions: While the research is in its early stages, robotics with integrated machine learning algorithms has great potential for improving traditional FCE practice.
Affiliation(s)
- Jason Fong
- Department of Electrical and Computer Engineering, University of Alberta, Edmonton, Canada
- Renz Ocampo
- Department of Electrical and Computer Engineering, University of Alberta, Edmonton, Canada
- Douglas P Gross
- Department of Physical Therapy, University of Alberta, 2-50 Corbett Hall, Edmonton, AB T6G 2G4, Canada
- Mahdi Tavakoli
- Department of Electrical and Computer Engineering, University of Alberta, Edmonton, Canada
13. Lauretti C, Cordella F, Tamantini C, Gentile C, Luzio FSD, Zollo L. A Surgeon-Robot Shared Control for Ergonomic Pedicle Screw Fixation. IEEE Robot Autom Lett 2020. DOI: 10.1109/lra.2020.2972892.
14. Pareek S, Kesavadas T. iART: Learning From Demonstration for Assisted Robotic Therapy Using LSTM. IEEE Robot Autom Lett 2020. DOI: 10.1109/lra.2019.2961845.
15. A Hybrid Joint/Cartesian DMP-Based Approach for Obstacle Avoidance of Anthropomorphic Assistive Robots. Int J Soc Robot 2019. DOI: 10.1007/s12369-019-00597-w.
16. Wang WS, Mendonca R, Kording K, Avery M, Johnson MJ. Towards Data-Driven Autonomous Robot-Assisted Physical Rehabilitation Therapy. IEEE Int Conf Rehabil Robot 2019; 2019:34-39. PMID: 31374603; DOI: 10.1109/icorr.2019.8779555.
Abstract
Task-oriented therapy consists of three stages: demonstration, observation and assistance. While demonstration using robots has been extensively studied, the other two stages rarely involve robots. This paper focuses on the transition between observation and assistance. More specifically, we tackle the robot's decision-making problem of whether or not to assist a patient based on the observation. The proposed method trains a discrete tunnel-shaped 3-D decision boundary from correct demonstrations to classify motions. Additional conditions such as slow progress, self-correction and overshot motions are taken into account in the decision making. Preliminary experiments were performed on a Baxter robot for a cup-reaching task. The Baxter robot is programmed to react according to the decision boundary: it assists the patient when the proposed algorithm determines the patient's hand position to be unacceptable. Multiple cases, including correct motion, continuous assistance, overshoot, misaim and slow progress, were tested. The results confirm the feasibility of the proposed method, which could help mitigate the current shortage of physical rehabilitation therapists.
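The tunnel-shaped decision boundary can be sketched as a distance test against the demonstrated path: the robot assists only when the hand leaves the tunnel. This is a simplified reading of the method, without the slow-progress or overshoot conditions:

```python
import numpy as np

def needs_assistance(hand_pos, reference_traj, tunnel_radius):
    """Tunnel-style decision rule: assist when the hand strays farther
    than `tunnel_radius` from every point of the demonstrated path."""
    ref = np.asarray(reference_traj, dtype=float)
    d = np.linalg.norm(ref - np.asarray(hand_pos, dtype=float), axis=1)
    return bool(d.min() > tunnel_radius)
```

In practice the radius could vary along the path, giving the "tunnel" a task-specific shape.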
17. A Therapist-Taught Robotic System for Assistance During Gait Therapy Targeting Foot Drop. IEEE Robot Autom Lett 2019. DOI: 10.1109/lra.2018.2890674.
18.
Abstract
Robotic platforms are taking their place in the operating room because they provide more stability and accuracy during surgery. Although most of these platforms are teleoperated, much research is currently being carried out to design collaborative platforms. The objective is to reduce the surgeon's workload through the automation of secondary or auxiliary tasks, which would benefit both surgeons and patients by facilitating the surgery and reducing the operation time. One of the most important secondary tasks is endoscopic camera guidance, whose automation would allow the surgeon to concentrate on handling the surgical instruments. This paper proposes a novel autonomous camera guidance approach for laparoscopic surgery based on learning from demonstration (LfD), which has demonstrated its ability to transfer knowledge from humans to robots by means of multiple expert showings. The proposed approach has been validated using an experimental surgical robotic platform to perform peg transfer, a typical task used to train human skills in laparoscopic surgery. The results show that camera guidance can be easily trained by a surgeon for a particular task and later reproduced autonomously in a manner similar to that of a human operator. Therefore, learning from demonstration is a suitable method for autonomous camera guidance in collaborative surgical robotic platforms.
19. Lauretti C, Cordella F, Ciancio AL, Trigili E, Catalan JM, Badesa FJ, Crea S, Pagliara SM, Sterzi S, Vitiello N, Garcia Aracil N, Zollo L. Learning by Demonstration for Motion Planning of Upper-Limb Exoskeletons. Front Neurorobot 2018; 12:5. PMID: 29527161; PMCID: PMC5829101; DOI: 10.3389/fnbot.2018.00005.
Abstract
The reference joint position of upper-limb exoskeletons is typically obtained by means of Cartesian motion planners and inverse kinematics algorithms based on the inverse Jacobian. This approach exploits the available Degrees of Freedom (DoFs) of the robot kinematic chain to achieve the desired end-effector pose; however, when used to operate non-redundant exoskeletons, it does not ensure that anthropomorphic criteria are satisfied across the whole human-robot workspace. This paper proposes a motion planning system, based on Learning by Demonstration, for upper-limb exoskeletons that allows successfully assisting patients during Activities of Daily Living (ADLs) in unstructured environments, while ensuring that anthropomorphic criteria are satisfied across the whole human-robot workspace. The motion planning system combines Learning by Demonstration with the computation of Dynamic Movement Primitives and machine learning techniques to construct task- and patient-specific joint trajectories based on the learnt trajectories. System validation was carried out in simulation and in a real setting with a 4-DoF upper-limb exoskeleton, a 5-DoF wrist-hand exoskeleton and four patients with Limb Girdle Muscular Dystrophy. Validation addressed (i) comparing the performance of the proposed motion planning with traditional methods, and (ii) assessing the generalization capabilities of the proposed method with respect to environment variability. Three ADLs were chosen to validate the system: drinking, pouring and lifting a light sphere. The achieved results showed a 100% success rate in task fulfillment, with a high level of generalization with respect to environment variability. Moreover, an anthropomorphic configuration of the exoskeleton is always ensured.
Affiliation(s)
- Clemente Lauretti
- Research Unit of Biomedical Robotics and Biomicrosystems, Università Campus Bio-Medico, Rome, Italy
- Francesca Cordella
- Research Unit of Biomedical Robotics and Biomicrosystems, Università Campus Bio-Medico, Rome, Italy
- Anna Lisa Ciancio
- Research Unit of Biomedical Robotics and Biomicrosystems, Università Campus Bio-Medico, Rome, Italy
- Emilio Trigili
- The BioRobotics Institute, Scuola Superiore Sant’Anna, Pisa, Italy
- Jose Maria Catalan
- Biomedical Neuroengineering Research Group, Miguel Hernandez University, Elche, Spain
- Francisco Javier Badesa
- Departamento de Ingeniería en Automática, Electrónica, Arquitectura y Redes de Computadores, Universidad de Cádiz, Cádiz, Spain
- Simona Crea
- The BioRobotics Institute, Scuola Superiore Sant’Anna, Pisa, Italy
- Silvia Sterzi
- Unit of Physical and Rehabilitation Medicine, Università Campus Bio-Medico, Rome, Italy
- Nicola Vitiello
- The BioRobotics Institute, Scuola Superiore Sant’Anna, Pisa, Italy
- Fondazione Don Carlo Gnocchi, Firenze, Italy
- Nicolas Garcia Aracil
- Biomedical Neuroengineering Research Group, Miguel Hernandez University, Elche, Spain
- Loredana Zollo
- Research Unit of Biomedical Robotics and Biomicrosystems, Università Campus Bio-Medico, Rome, Italy
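The Dynamic Movement Primitive component used above can be illustrated with a minimal 1-D discrete DMP: learn a forcing term from one demonstration, then reproduce the motion toward a new goal. The gains and Euler integration are illustrative choices, not the authors' implementation:

```python
import numpy as np

def dmp_reproduce(demo, new_goal, alpha=25.0, beta=6.25, dt=0.01):
    """Minimal discrete dynamic movement primitive: learn the forcing
    term from a 1-D demonstration, then reproduce toward a new goal."""
    demo = np.asarray(demo, dtype=float)
    n = len(demo)
    y0, g = demo[0], demo[-1]
    yd = np.gradient(demo, dt)
    ydd = np.gradient(yd, dt)
    # forcing term that makes the spring-damper system track the demo
    f = ydd - alpha * (beta * (g - demo) - yd)
    # reproduce with the forcing term rescaled to the new goal
    scale = (new_goal - y0) / (g - y0) if g != y0 else 1.0
    y, v = y0, 0.0
    out = [y]
    for k in range(1, n):
        a = alpha * (beta * (new_goal - y) - v) + scale * f[k]
        v += a * dt
        y += v * dt
        out.append(y)
    return np.array(out)
```

The same mechanism, run per joint, is how a learnt trajectory generalizes to a new object position while keeping the demonstrated shape.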
20.
21. Laureiti C, Cordella F, di Luzio FS, Saccucci S, Davalli A, Sacchetti R, Zollo L. Comparative performance analysis of M-IMU/EMG and voice user interfaces for assistive robots. IEEE Int Conf Rehabil Robot 2017; 2017:1001-1006. PMID: 28813952; DOI: 10.1109/icorr.2017.8009380.
Abstract
People with a high level of disability experience great difficulty in performing activities of daily living and resort to their residual motor functions to operate assistive devices. The commercially available interfaces used to control assistive manipulators are typically based on joysticks and can be used only by subjects with residual upper-limb mobility. Many other solutions in the literature rely on multiple sensory systems to detect human motion intention and state; some require a high cognitive workload for the user, while others are more intuitive and easy to use but have not been widely investigated in terms of usability and user acceptance. The objective of this work is to propose an intuitive and robust user interface for assistive robots that is unobtrusive for the user and easily adaptable for subjects with different levels of disability. The proposed user interface combines magneto-inertial measurement units (M-IMUs) and electromyography (EMG) for continuous control of an arm-hand robotic system. The system was experimentally validated and compared to a standard voice interface. Sixteen healthy subjects volunteered to participate in the study: 8 subjects used the combined M-IMU/EMG robot control, and 8 subjects used the voice control. The arm-hand robotic system, composed of the KUKA LWR 4+ and the IH2 Azzurra hand, was controlled to accomplish the daily living task of drinking. Performance indices and evaluation scales were adopted to assess the two interfaces.
22. Noccaro A, Cordella F, Zollo L, Di Pino G, Guglielmelli E, Formica D. A teleoperated control approach for anthropomorphic manipulator using magneto-inertial sensors. Proc IEEE Int Symp Robot Hum Interact Commun (RO-MAN) 2017; 2017:156-161. PMID: 30949293; DOI: 10.1109/roman.2017.8172295.
Abstract
In this paper we propose and validate a teleoperated control approach for an anthropomorphic redundant robotic manipulator using magneto-inertial sensors (IMUs). The proposed method maps the motion of the human arm (the master) onto the robot end-effector (the slave). We record arm movements using IMU sensors and compute the human forward kinematics to be mapped onto robot movements. To resolve the robot's kinematic redundancy, we implemented different inverse kinematics algorithms that allow imposing anthropomorphism criteria on robot movements. The main objective is to let the user control the robotic platform in an easy and intuitive manner, providing the control input by freely moving his/her own arm, while exploiting redundancy and anthropomorphism criteria to achieve human-like behaviour of the robot arm. Three inverse kinematics algorithms are implemented: Damped Least Squares (DLS), Elastic Potential (EP) and Augmented Jacobian (AJ). To evaluate the performance of the algorithms, four healthy subjects were asked to control the motion of an anthropomorphic robot arm (the KUKA Light Weight Robot 4+) through four magneto-inertial sensors (Xsens Wireless Motion Tracking sensors - MTw) positioned on their arm. Anthropomorphism indices and position and orientation errors between the human hand pose and the robot end-effector pose were evaluated to assess the performance of our approach.
Affiliation(s)
- A Noccaro
- Unit of Neurophysiology and Neuroengineering of Human-Technology Interaction, Department of Medicine, Università Campus Bio-Medico, via Alvaro del Portillo 21, 00128, Rome, Italy
- F Cordella
- Unit of Biomedical Robotics and Biomicrosystems, Department of Engineering, Università Campus Bio-Medico, via Alvaro del Portillo 21, 00128, Rome, Italy
- L Zollo
- Unit of Biomedical Robotics and Biomicrosystems, Department of Engineering, Università Campus Bio-Medico, via Alvaro del Portillo 21, 00128, Rome, Italy
- G Di Pino
- Unit of Neurophysiology and Neuroengineering of Human-Technology Interaction, Department of Medicine, Università Campus Bio-Medico, via Alvaro del Portillo 21, 00128, Rome, Italy
- E Guglielmelli
- Unit of Biomedical Robotics and Biomicrosystems, Department of Engineering, Università Campus Bio-Medico, via Alvaro del Portillo 21, 00128, Rome, Italy
- D Formica
- Unit of Biomedical Robotics and Biomicrosystems, Department of Engineering and the Unit of Neurophysiology and Neuroengineering of Human-Technology Interaction, Department of Medicine, Università Campus Bio-Medico di Roma, via Alvaro del Portillo 21, 00128, Rome, Italy
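The Damped Least Squares (DLS) algorithm named in this abstract can be illustrated with a single IK update on a planar 2-link arm (the real system is a redundant 7-DoF KUKA arm; this is a simplified sketch):

```python
import numpy as np

def dls_ik_step(q, target, fk, jacobian, damping=0.05):
    """One damped-least-squares IK update:
    q_new = q + J^T (J J^T + lambda^2 I)^{-1} e, with e the task error.
    The damping term keeps the update bounded near singularities."""
    e = np.asarray(target) - fk(q)
    J = jacobian(q)
    JJt = J @ J.T + damping**2 * np.eye(J.shape[0])
    return q + J.T @ np.linalg.solve(JJt, e)

def fk(q, l1=1.0, l2=1.0):
    """Forward kinematics of a planar 2-link arm."""
    return np.array([l1*np.cos(q[0]) + l2*np.cos(q[0] + q[1]),
                     l1*np.sin(q[0]) + l2*np.sin(q[0] + q[1])])

def jacobian(q, l1=1.0, l2=1.0):
    """Analytic Jacobian of the planar 2-link forward kinematics."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1*s1 - l2*s12, -l2*s12],
                     [ l1*c1 + l2*c12,  l2*c12]])
```

On a redundant arm the same update leaves a null space free, which is where the anthropomorphism criteria from the abstract can be imposed.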