1
Mielke E, Townsend E, Wingate D, Salmon JL, Killpack MD. Human-robot planar co-manipulation of extended objects: data-driven models and control from human-human dyads. Front Neurorobot 2024; 18:1291694. PMID: 38410142; PMCID: PMC10894988; DOI: 10.3389/fnbot.2024.1291694.
Abstract
Human teams are able to easily perform collaborative manipulation tasks. However, a human and a robot simultaneously manipulating a large extended object is a difficult task because the desired motion is inherently ambiguous. Our approach in this paper is to leverage data from human-human dyad experiments to determine motion intent for a physical human-robot co-manipulation task. We do this by showing that the human-human dyad data exhibit distinct torque triggers for a lateral movement. As an alternative intent-estimation method, we also develop a deep neural network, based on motion data from human-human trials, that predicts future trajectories from past object motion. We then show how force and motion data can be used to determine robot control in a human-robot dyad. Finally, we compare human-human dyad performance to that of two controllers we developed for human-robot co-manipulation. We evaluate these controllers in three-degree-of-freedom planar motion where it is ambiguous whether the task involves rotation or translation.
Affiliation(s)
- Erich Mielke: Robotics and Dynamics Laboratory, Brigham Young University, Mechanical Engineering, Provo, UT, United States
- Eric Townsend: Robotics and Dynamics Laboratory, Brigham Young University, Mechanical Engineering, Provo, UT, United States
- David Wingate: Robotics and Dynamics Laboratory, Brigham Young University, Mechanical Engineering, Provo, UT, United States
- John L Salmon: Robotics and Dynamics Laboratory, Brigham Young University, Mechanical Engineering, Provo, UT, United States
- Marc D Killpack: Robotics and Dynamics Laboratory, Brigham Young University, Mechanical Engineering, Provo, UT, United States
2
Kato Y, Tsuji T, Cikajlo I. Feedback Type May Change the EMG Pattern and Kinematics During Robot Supported Upper Limb Reaching Task. IEEE Open J Eng Med Biol 2024; 5:173-179. PMID: 38487092; PMCID: PMC10939324; DOI: 10.1109/ojemb.2024.3363137.
Abstract
Haptic interfaces and virtual reality (VR) technology have been increasingly introduced in rehabilitation, facilitating the provision of various feedback and task conditions. However, the correspondence between feedback/task conditions and movement strategy during reaching tasks remains an open question. To investigate movement strategy, we assessed velocity parameters and the peak latency of electromyography. Ten neuromuscularly intact volunteers participated in measurements using a haptic interface and VR. Concurrent visual feedback and various terminal feedback (e.g., visual, haptic, visual and haptic) were given. Additionally, the object size for the reaching task was varied. The results demonstrated that terminal haptic feedback had a significant impact on kinematic parameters, showing a [Formula: see text] s ([Formula: see text]) shorter movement time and a [Formula: see text] m/s ([Formula: see text]) higher mean velocity compared to no terminal feedback. Smaller peak latencies were also observed in different muscle regions depending on the object size.
Affiliation(s)
- Yasuhiro Kato: Graduate School of Science and Engineering, Saitama University, Sakura-ku, 338-8570, Japan
- Toshiaki Tsuji: Graduate School of Science and Engineering, Saitama University, Sakura-ku, 338-8570, Japan
- Imre Cikajlo: University Rehabilitation Institute, Republic of Slovenia, 1000 Ljubljana, Slovenia; School of Engineering and Management, University of Nova Gorica, 5271 Vipava, Slovenia
3
Chen F, Wang F, Dong Y, Yong Q, Yang X, Zheng L, Gao Y, Su H. Sensor Fusion-Based Anthropomorphic Control of a Robotic Arm. Bioengineering (Basel) 2023; 10:1243. PMID: 38002367; PMCID: PMC10669049; DOI: 10.3390/bioengineering10111243.
Abstract
The main goal of this research is to develop a highly advanced anthropomorphic control system utilizing multiple sensor technologies to achieve precise control of a robotic arm. Combining Kinect and IMU sensors with a data glove, we aim to create a multimodal sensor system for capturing rich information about human upper-body movements. Specifically, the four upper-limb joint angles are collected using the Kinect and IMU sensors. To improve the accuracy and stability of motion tracking, we use a Kalman filter to fuse the Kinect and IMU data. In addition, we introduce data glove technology to collect the angle information of the wrist and fingers in seven different directions. The integration and fusion of multiple sensors provides full control over the robotic arm, giving it 11 degrees of freedom. We successfully achieved a variety of anthropomorphic movements, including shoulder flexion, abduction, and rotation, elbow flexion, and fine movements of the wrist and fingers. Most importantly, our experimental results demonstrate that the anthropomorphic control system we developed is accurate, runs in real time, and is easy to operate. In summary, the contribution of this study lies in the creation of a multimodal sensor system capable of capturing and precisely controlling human upper-limb movements, which provides a solid foundation for the future development of anthropomorphic control technologies. This technology has a wide range of application prospects and can be used for rehabilitation in the medical field, robot collaboration in industrial automation, and immersive experiences in virtual reality environments.
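The Kinect-IMU fusion step described in this abstract can be sketched as a scalar Kalman filter that treats both sensors as noisy measurements of the same joint angle. This is an illustrative stand-in, not the authors' implementation: the noise variances, the random-walk motion model, and the sequential-update scheme are all assumptions.

```python
import numpy as np

def fuse_angle(kinect_meas, imu_meas, r_kinect=4.0, r_imu=1.0,
               q=0.01, x0=0.0, p0=1.0):
    """Scalar Kalman filter fusing two noisy joint-angle streams.

    kinect_meas, imu_meas : sequences of angle measurements (deg)
    r_kinect, r_imu       : assumed measurement-noise variances
    q                     : process-noise variance (random-walk model)
    Returns the fused angle estimate at each time step.
    """
    x, p = x0, p0
    fused = []
    for zk, zi in zip(kinect_meas, imu_meas):
        # Predict: random-walk motion model inflates the variance
        p = p + q
        # Update with each sensor in turn (sequential fusion)
        for z, r in ((zk, r_kinect), (zi, r_imu)):
            k = p / (p + r)          # Kalman gain
            x = x + k * (z - x)      # correct estimate toward measurement
            p = (1.0 - k) * p        # shrink estimate variance
        fused.append(x)
    return np.array(fused)
```

The fused estimate weights the IMU more heavily here because its assumed noise variance is lower, which matches the usual motivation for fusing IMUs with vision-based trackers.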
Affiliation(s)
- Furong Chen: Department of Mechanical Engineering, College of Mechanical and Electrical Engineering, Changchun University of Science and Technology, Changchun 130012, China; Key Laboratory of Bionic Engineering, Ministry of Education, Jilin University, Changchun 130022, China; Weihai Institute for Bionics, Jilin University, Weihai 264402, China
- Feilong Wang: Department of Mechanical Engineering, College of Mechanical and Electrical Engineering, Changchun University of Science and Technology, Changchun 130012, China; Key Laboratory of Bionic Engineering, Ministry of Education, Jilin University, Changchun 130022, China; Weihai Institute for Bionics, Jilin University, Weihai 264402, China
- Yanling Dong: School of Foreign Languages & Literature, Shandong University, Jinan 250000, China
- Qi Yong: ESIEE Paris, 2 Boulevard Blaise Pascal, 93160 Noisy-le-Grand, France
- Xiaolong Yang: Department of Mechanical Engineering, College of Mechanical and Electrical Engineering, Changchun University of Science and Technology, Changchun 130012, China; Key Laboratory of Bionic Engineering, Ministry of Education, Jilin University, Changchun 130022, China; Weihai Institute for Bionics, Jilin University, Weihai 264402, China
- Long Zheng: Key Laboratory of Bionic Engineering, Ministry of Education, Jilin University, Changchun 130022, China; Weihai Institute for Bionics, Jilin University, Weihai 264402, China
- Yi Gao: Department of Mechanical Engineering, College of Mechanical and Electrical Engineering, Changchun University of Science and Technology, Changchun 130012, China
- Hang Su: Weihai Institute for Bionics, Jilin University, Weihai 264402, China
4
Research Perspectives in Collaborative Assembly: A Review. Robotics 2023. DOI: 10.3390/robotics12020037.
Abstract
In recent years, the emergence of Industry 4.0 technologies has introduced manufacturing disruptions that necessitate the development of accompanying socio-technical solutions. There is growing interest among manufacturing enterprises in embracing the drivers of the Smart Industry paradigm. Among these drivers, human–robot physical co-manipulation of objects has gained significant interest in the literature on assembly operations. Motivated by the requirement for effective dyads between the human and the robot counterpart, this study investigates recent literature on implementation methods for human–robot collaborative assembly scenarios. Using a combination of search strings, the researchers performed a systematic review, sourcing 451 publications from various databases: Science Direct (253), IEEE Xplore (49), Emerald (32), PubMed (21) and SpringerLink (96). A coding assignment in Eppi-Reviewer helped screen the literature based on 'exclude' and 'include' criteria. The final number of full-text publications considered in this literature review is 118 peer-reviewed research articles published up until September 2022. The findings anticipate that research publications in the field of human–robot collaborative assembly will continue to grow. Understanding and modeling human interaction and behavior in robot co-assembly is crucial to the development of future sustainable smart factories. Machine vision and digital-twin modeling are emerging as promising interfaces for evaluating task-distribution strategies that mitigate human ergonomic and safety risks in the design of collaborative assembly solutions.
5
Lorenzini M, Lagomarsino M, Fortini L, Gholami S, Ajoudani A. Ergonomic human-robot collaboration in industry: A review. Front Robot AI 2023; 9:813907. PMID: 36743294; PMCID: PMC9893795; DOI: 10.3389/frobt.2022.813907.
Abstract
In the current industrial context, the importance of assessing and improving workers' health conditions is widely recognised. Both physical and psycho-social factors contribute to jeopardising the underlying comfort and well-being, boosting the occurrence of diseases and injuries, and affecting quality of life. Human-robot interaction and collaboration frameworks stand out among the possible solutions to prevent and mitigate workplace risk factors. The increasingly advanced control strategies and planning schemes featured by collaborative robots have the potential to foster fruitful and efficient coordination during the execution of hybrid tasks, by meeting their human counterparts' needs and limits. To this end, a thorough and comprehensive evaluation of an individual's ergonomics, i.e., the direct effect of workload on the human psycho-physical state, must be taken into account. In this review article, we provide an overview of existing ergonomics assessment tools as well as the available monitoring technologies to drive and adapt a collaborative robot's behaviour. Preliminary attempts at ergonomic human-robot collaboration frameworks are presented next, discussing state-of-the-art limitations and challenges. Future trends and promising themes are finally highlighted, with the aim of promoting safety, health, and equality in workplaces worldwide.
Affiliation(s)
- Marta Lorenzini: Human-Robot Interfaces and Physical Interaction Laboratory, Italian Institute of Technology, Genoa, Italy
- Marta Lagomarsino: Human-Robot Interfaces and Physical Interaction Laboratory, Italian Institute of Technology, Genoa, Italy; Neuroengineering and Medical Robotics Laboratory, Department of Electronics, Information and Bioengineering, Polytechnic University of Milan, Milan, Italy
- Luca Fortini: Human-Robot Interfaces and Physical Interaction Laboratory, Italian Institute of Technology, Genoa, Italy; Neuroengineering and Medical Robotics Laboratory, Department of Electronics, Information and Bioengineering, Polytechnic University of Milan, Milan, Italy
- Soheil Gholami: Human-Robot Interfaces and Physical Interaction Laboratory, Italian Institute of Technology, Genoa, Italy; Neuroengineering and Medical Robotics Laboratory, Department of Electronics, Information and Bioengineering, Polytechnic University of Milan, Milan, Italy
- Arash Ajoudani: Human-Robot Interfaces and Physical Interaction Laboratory, Italian Institute of Technology, Genoa, Italy
6
Dimova-Edeleva V, Ehrlich SK, Cheng G. Brain computer interface to distinguish between self and other related errors in human agent collaboration. Sci Rep 2022; 12:20764. PMID: 36456595; PMCID: PMC9715724; DOI: 10.1038/s41598-022-24899-8.
Abstract
When a human and machine collaborate on a shared task, ambiguous events might occur that could be perceived as an error by the human partner. In such events, spontaneous error-related potentials (ErrPs) are evoked in the human brain. Knowing whom the human perceived as responsible for the error would help a machine in co-adaptation and shared-control paradigms to better adapt to human preferences. Therefore, we ask whether self- and agent-related errors evoke different ErrPs. Eleven subjects participated in an electroencephalography human-agent collaboration experiment with a collaborative trajectory-following task on two collaboration levels, where movement errors occurred as trajectory deviations. Independently of the collaboration level, we observed a higher amplitude of the responses on the midline central Cz electrode for self-related errors compared to observed errors made by the agent. On average, Support Vector Machines classified self- and agent-related errors with 72.64% accuracy using subject-specific features. These results demonstrate that ErrPs can indicate whether a person attributes an error to themselves or to an external autonomous agent during collaboration. A collaborative machine can thus receive more informed feedback on error attribution, enabling appropriate error identification, correction, and avoidance in future actions.
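The SVM classification step in this abstract can be illustrated with a toy sketch: synthetic "epochs" in which self-related errors have a larger mean deflection than agent-related ones (mirroring the reported Cz amplitude difference), classified with scikit-learn's SVC under cross-validation. The feature dimensions, class means, and variances are invented for illustration and do not reproduce the paper's data or features.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(42)

# Synthetic stand-in for ErrP features (e.g., peak amplitudes in a few
# post-error time windows); self-related errors get a larger deflection.
n = 100
self_err = rng.normal(loc=6.0, scale=2.0, size=(n, 4))    # label 1
agent_err = rng.normal(loc=3.0, scale=2.0, size=(n, 4))   # label 0
X = np.vstack([self_err, agent_err])
y = np.array([1] * n + [0] * n)

# RBF-kernel SVM with 5-fold cross-validation, standing in for the
# paper's subject-specific classifiers
scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=5)
mean_acc = scores.mean()
```

With a real BCI pipeline the features would come from band-passed, epoch-locked EEG rather than Gaussian draws, but the classification and evaluation structure is the same.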
Affiliation(s)
- Viktorija Dimova-Edeleva: Munich Institute of Robotics and Machine Intelligence (MIRMI), Technical University of Munich, Munich, Germany
- Stefan K. Ehrlich: TUM School of Computation, Information and Technology, Department of Computer Engineering, Institute of Cognitive Systems, Technical University of Munich, Munich, Germany
- Gordon Cheng: TUM School of Computation, Information and Technology, Department of Computer Engineering, Institute of Cognitive Systems, Technical University of Munich, Munich, Germany
7
Q-Learning-based model predictive variable impedance control for physical human-robot collaboration. Artif Intell 2022. DOI: 10.1016/j.artint.2022.103771.
8
Yu X, Liu P, He W, Liu Y, Chen Q, Ding L. Human-Robot Variable Impedance Skills Transfer Learning Based on Dynamic Movement Primitives. IEEE Robot Autom Lett 2022. DOI: 10.1109/lra.2022.3154469.
Affiliation(s)
- Xinbo Yu: Institute of Artificial Intelligence, University of Science and Technology Beijing, Beijing, China
- Peisen Liu: Institute of Artificial Intelligence, School of Automation and Electrical Engineering, and Beijing Advanced Innovation Center for Materials Genome Engineering, University of Science and Technology Beijing, Beijing, China
- Wei He: Institute of Artificial Intelligence, School of Automation and Electrical Engineering, and Beijing Advanced Innovation Center for Materials Genome Engineering, University of Science and Technology Beijing, Beijing, China
- Yu Liu: School of Automation Science and Engineering, South China University of Technology, Guangzhou, China
- Qi Chen: Institute of Artificial Intelligence, School of Automation and Electrical Engineering, and Beijing Advanced Innovation Center for Materials Genome Engineering, University of Science and Technology Beijing, Beijing, China
- Liang Ding: State Key Laboratory of Robotics and Systems, Harbin Institute of Technology, Harbin, China
9
Tang Z, Zhang L, Chen X, Ying J, Wang X, Wang H. Wearable Supernumerary Robotic Limb System Using a Hybrid Control Approach Based on Motor Imagery and Object Detection. IEEE Trans Neural Syst Rehabil Eng 2022; 30:1298-1309. PMID: 35511846; DOI: 10.1109/tnsre.2022.3172974.
Abstract
Upper-limb motor disorders seriously affect the daily life of patients with hemiplegia after stroke. We developed a wearable supernumerary robotic limb (SRL) system using a hybrid control approach based on motor imagery (MI) and object detection for upper-limb motion assistance. The SRL system comprised an SRL hardware subsystem and a hybrid control software subsystem. The system obtained the patient's motion intention through an MI electroencephalogram (EEG) recognition method based on a graph convolutional network (GCN) and a gated recurrent unit (GRU) network to control the left and right movements of the SRL; object detection was used alongside it for quick grasping of target objects, compensating for the drawbacks of MI EEG alone, such as fewer control instructions and lower control efficiency. An offline training experiment was designed to obtain subjects' MI recognition models and evaluate the feasibility of the MI EEG recognition method; an online control experiment was designed to verify the effectiveness of our wearable SRL system. The results showed that the proposed MI EEG recognition method (GCN+GRU) could effectively improve MI classification accuracy (90.04% ± 2.36%) compared with traditional methods; all subjects were able to complete the target-object grasping tasks within 23 seconds by controlling the SRL, and the highest average grasping success rate reached 90.67% in the bag-grasping task. The SRL system can effectively assist people with upper-limb motor disorders to perform upper-limb tasks in daily life through natural human-robot interaction, improving their self-reliance and confidence in daily life.
10
Matsumoto S, Washburn A, Riek LD. A Framework to Explore Proximate Human-Robot Coordination. ACM Trans Hum-Robot Interact 2022. DOI: 10.1145/3526101.
Abstract
Proximate human-robot teaming (pxHRT) is a complex subspace within human-robot interaction. Studies in this space involve a range of equipment and methods, including the ability to sense people and robots precisely. Research in this area draws from a wide variety of other fields, from human-human interaction to control theory, making study design complex, particularly for those outside the field of HRI. In this paper, we introduce a framework that helps researchers consider tradeoffs across various task contexts, platforms, sensors, and analysis methods; metrics frequently used in the field; and common challenges researchers may face. We demonstrate the use of the framework via a case study which employs an autonomous mobile manipulator continuously engaging in shared workspace, handover, and co-manipulation tasks with people, and explores the effect of cognitive workload on pxHRT dynamics. We also demonstrate the utility of the framework in a case study with two groups of researchers new to pxHRT. With this framework, we hope to enable researchers, especially those outside HRI, to more thoroughly consider these complex components within their studies, more easily design experiments, and more fully explore research questions within the space of pxHRT.
11
Lin TC, Krishnan AU, Li Z. Intuitive, Efficient and Ergonomic Tele-Nursing Robot Interfaces: Design Evaluation and Evolution. ACM Trans Hum-Robot Interact 2022. DOI: 10.1145/3526108.
Abstract
Tele-nursing robots provide a safe approach for patient-caring in quarantine areas. For effective nurse-robot collaboration, ergonomic teleoperation and intuitive interfaces with low physical and cognitive workload must be developed. We propose a framework to evaluate the control interfaces to iteratively develop an intuitive, efficient, and ergonomic teleoperation interface. The framework is a hierarchical procedure that incorporates general to specific assessment and its role in design evolution. We first present pre-defined objective and subjective metrics used to evaluate three representative contemporary teleoperation interfaces. The results indicate that teleoperation via human motion mapping outperforms the gamepad and stylus interfaces. The trade-off with using motion mapping as a teleoperation interface is the non-trivial physical fatigue. To understand the impact of heavy physical demand during motion mapping teleoperation, we propose an objective assessment of physical workload in teleoperation using electromyography (EMG). We find that physical fatigue happens in the actions that involve precise manipulation and steady posture maintenance. We further implemented teleoperation assistance in the form of shared autonomy to eliminate the fatigue-causing component in robot teleoperation via motion mapping. The experimental results show that the autonomous feature effectively reduces the physical effort while improving the efficiency and accuracy of the teleoperation interface.
Affiliation(s)
- Tsung-Chi Lin: Worcester Polytechnic Institute, Robotics Engineering
- Zhi Li: Worcester Polytechnic Institute, Robotics Engineering
12
EMG-Based Variable Impedance Control With Passivity Guarantees for Collaborative Robotics. IEEE Robot Autom Lett 2022. DOI: 10.1109/lra.2022.3149575.
13
Tanaka Y, Shiraki S, Katayama K, Minamizawa K, Prattichizzo D. Bilaterally Shared Haptic Perception for Human-Robot Collaboration in Grasping Operation. J Robot Mechatron 2021. DOI: 10.20965/jrm.2021.p1104.
Abstract
Tactile sensations are crucial for achieving precise operations. A haptic connection between a human operator and a robot has the potential to promote smooth human-robot collaboration (HRC). In this study, we assemble a bilaterally shared haptic system for grasping operations, analogous to a human using both hands, and evaluate it with a bottle cap-opening task. A robot arm controls the grasping force according to tactile information from the human, who opens the cap with a finger-attached acceleration sensor. The grasping force of the robot arm is then fed back to the human using a wearable squeezing display. Three experiments are conducted: (1) measurement of the just-noticeable difference of the tactile display; (2) a collaborative task with different bottles, with and without tactile feedback, including questionnaire-based psychological evaluations; and (3) a collaborative task under an explicit strategy. The results showed that the tactile feedback gave operators confidence that the cooperative robot was adjusting its action, and that it improved task stability under the explicit strategy. The results indicate the effectiveness of the tactile feedback and the need for an explicit operator strategy, providing insight into the design of HRC with bilaterally shared haptic perception.
14
Zheng E, Zhang J, Wang Q, Qiao H. Continuous Multi-DoF Wrist Kinematics Estimation Based on a Human-Machine Interface With Electrical-Impedance-Tomography. Front Neurorobot 2021; 15:734525. PMID: 34658831; PMCID: PMC8515921; DOI: 10.3389/fnbot.2021.734525.
Abstract
This study proposed a multiple degree-of-freedom (DoF) continuous wrist angle estimation approach based on an electrical impedance tomography (EIT) interface. The interface can inspect the spatial information of deep muscles with a soft elastic fabric sensing band, extending the measurement scope of existing muscle-signal-based sensors. The designed estimation algorithm first extracted the mutual correlation of the EIT regions with a kernel function, and then used a regularization procedure to select the optimal coefficients. We evaluated the method with different features and regression models on 12 healthy subjects performing six basic wrist joint motions. The average root-mean-square error of the 3-DoF estimation task was 7.62°, and the average R² was 0.92. The results are comparable to state-of-the-art results obtained with sEMG signals in multi-DoF tasks. Future work will pursue this new direction further.
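The regression-with-regularization idea behind this abstract can be illustrated with plain ridge regression on synthetic "EIT region" signals. This is a minimal sketch, not the authors' pipeline: the kernel feature-extraction step is omitted, and the data dimensions, true weights, and noise level are invented. RMSE and R² are computed the same way the abstract reports them.

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X'X + lam*I)^(-1) X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

rng = np.random.default_rng(1)
n = 400
X = rng.normal(size=(n, 8))                   # 8 synthetic EIT region signals
w_true = np.array([2.0, -1.0, 0.5, 0.0, 0.0, 0.0, 0.0, 0.0])
angle = X @ w_true + rng.normal(0.0, 0.1, n)  # synthetic wrist angle

w = ridge_fit(X, angle, lam=0.5)
pred = X @ w

# The abstract's two evaluation metrics
rmse = np.sqrt(np.mean((pred - angle) ** 2))
r2 = 1.0 - np.sum((pred - angle) ** 2) / np.sum((angle - angle.mean()) ** 2)
```

The regularization term `lam` plays the role of the abstract's coefficient-selection procedure: it shrinks weights on uninformative regions and keeps the closed-form solution well-conditioned.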
Affiliation(s)
- Enhao Zheng: The State Key Laboratory for Management and Control of Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Jingzhi Zhang: The State Key Laboratory for Management and Control of Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of General Engineering, Beihang University, Beijing, China
- Qining Wang: Department of Advanced Manufacturing and Robotics, College of Engineering, Peking University, Beijing, China
- Hong Qiao: The State Key Laboratory for Management and Control of Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China
15
Hamad YM, Aydin Y, Basdogan C. Adaptive Human Force Scaling via Admittance Control for Physical Human-Robot Interaction. IEEE Trans Haptics 2021; 14:750-761. PMID: 33826517; DOI: 10.1109/toh.2021.3071626.
Abstract
The goal of this article is to design an admittance controller for a robot to adaptively change its contribution to a collaborative manipulation task executed with a human partner, in order to improve task performance. This is achieved by adaptively scaling the human force based on her/his movement intention while paying attention to the requirements of different task phases. In our approach, the human's movement intentions are estimated from the measured human force and the velocity of the manipulated object, and converted to a quantitative value using a fuzzy logic scheme. This value is then utilized as a variable gain in an admittance controller to adaptively adjust the robot's contribution to the task without changing the admittance time constant. We demonstrate the benefits of the proposed approach in a pHRI experiment utilizing Fitts' reaching movement task. The results of the experiment show that there is (a) an optimum admittance time constant maximizing human force amplification and (b) a desirable admittance gain profile that leads to more effective co-manipulation in terms of overall task performance.
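The variable-gain admittance idea can be sketched as follows: the admittance law m*dv/dt + b*v = alpha*f_h keeps the time constant m/b fixed while only the gain alpha on the human force adapts, as the abstract describes. The fuzzy intent estimator is replaced here by a crude force-velocity sign heuristic, and all parameter values are invented for illustration.

```python
def admittance_step(v, f_h, alpha, m=5.0, b=20.0, dt=0.01):
    """One Euler step of the admittance law  m*dv/dt + b*v = alpha*f_h.

    v     : current end-effector velocity (m/s)
    f_h   : measured human force (N)
    alpha : variable gain scaling the human force (intent estimate)
    The time constant m/b stays fixed; only alpha changes, mirroring
    the idea of adapting the gain rather than the dynamics.
    """
    dv = (alpha * f_h - b * v) / m
    return v + dv * dt

def intent_gain(f_h, v, lo=1.0, hi=3.0, thresh=0.1):
    """Toy stand-in for the fuzzy intent estimator: amplify the force
    more when force and velocity point the same way (acceleration
    phase), less otherwise."""
    return hi if f_h * v > thresh else lo

# Simulate a reach: constant 10 N human push for 1 s, then release
v, vs = 0.0, []
for k in range(200):
    f = 10.0 if k < 100 else 0.0
    v = admittance_step(v, f, intent_gain(f, v))
    vs.append(v)
```

During the push the gain switches to its high value and the velocity settles near alpha*f/b; after release the object coasts to rest with the unchanged time constant m/b.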
16
Lombardi M, Liuzza D, di Bernardo M. Dynamic Input Deep Learning Control of Artificial Avatars in a Multi-Agent Joint Motor Task. Front Robot AI 2021; 8:665301. PMID: 34434967; PMCID: PMC8381333; DOI: 10.3389/frobt.2021.665301.
Abstract
In many real-world scenarios, humans and robots are required to coordinate their movements in joint tasks to fulfil a common goal. While several examples of dyadic human-robot interaction exist in the current literature, multi-agent scenarios in which one or more artificial agents need to interact with many humans are still seldom investigated. In this paper we address the problem of synthesizing an autonomous artificial agent to perform a paradigmatic oscillatory joint task in human ensembles while exhibiting desired human kinematic features. We propose an architecture based on deep reinforcement learning which is flexible enough to let the artificial agent interact with human groups of different sizes. As a paradigmatic coordination task we consider a multi-agent version of the mirror game, an oscillatory motor task widely used in the literature to study human motor coordination.
Affiliation(s)
- Maria Lombardi: Department of Engineering Mathematics, University of Bristol, Bristol, United Kingdom; Department of Electrical Engineering and Information Technology, University of Naples Federico II, Naples, Italy
- Davide Liuzza: ENEA Fusion and Nuclear Safety Department, Frascati, Italy
- Mario di Bernardo: Department of Engineering Mathematics, University of Bristol, Bristol, United Kingdom; Scuola Superiore Meridionale, University of Naples Federico II, Naples, Italy
17
Doornebosch LM, Abbink DA, Peternel L. Analysis of Coupling Effect in Human-Commanded Stiffness During Bilateral Tele-Impedance. IEEE Trans Robot 2021. DOI: 10.1109/tro.2020.3047064.
18
Zhang G, Jing W, Tao H, Rahman MA, Salih SQ, Al-Saffar A, Zhang R. ADA-SR: Activity detection and analysis using security robots for reliable workplace safety. Work 2021; 68:935-943. PMID: 33612535; DOI: 10.3233/wor-203427.
Abstract
BACKGROUND Human-Robot Interaction (HRI) has become a prominent solution for improving the robustness of real-time service provisioning through assisted functions for day-to-day activities. Applying robotic systems to security services helps to improve the precision of event detection and environmental monitoring. OBJECTIVES This paper discusses activity detection and analysis (ADA) using security robots in workplaces. The application scenario of this method relies on processing image and sensor data for event and activity detection. Detected events are classified by abnormality, based on analysis of the sensor and image data using a convolutional neural network. The method aims to improve detection accuracy by mitigating the deviations classified at the different levels of the convolution process. RESULTS The differences are identified based on independent data correlation and information processing. The performance of the proposed method is verified for three human activities (standing, walking, and running) detected using the image and sensor datasets. CONCLUSION The results are compared with the existing method on the metrics of accuracy, classification time, and recall.
Affiliation(s)
- Guangnan Zhang
- School of Computer Science, Baoji University of Arts and Sciences, Baoji, China
| | - Wang Jing
- School of Computer Science, Baoji University of Arts and Sciences, Baoji, China
| | - Hai Tao
- School of Computer Science, Baoji University of Arts and Sciences, Baoji, China; Institute for Big Data Analytics and Artificial Intelligence (IBDAAI), Universiti Teknologi MARA, Shah Alam, Malaysia
| | - Md Arafatur Rahman
- Faculty of Computing, IBM CoE, and Earth Resources and Sustainability Center, Universiti Malaysia Pahang, Pahang, Malaysia
| | - Sinan Q Salih
- Institute of Research and Development, Duy Tan University, Da Nang, Vietnam
| | - Ahmed Al-Saffar
- Faculty of Computing, IBM CoE, and Earth Resources and Sustainability Center, Universiti Malaysia Pahang, Pahang, Malaysia
| | - Renrui Zhang
- School of Electronics Engineering and Computer Science, Peking University, Beijing, China
| |
|
19
|
Jing W, Tao H, Rahman MA, Kabir MN, Yafeng L, Zhang R, Salih SQ, Zain JM. RERS-CC: Robotic facial recognition system for improving the accuracy of human face identification using HRI. Work 2021; 68:923-934. [PMID: 33612534 DOI: 10.3233/wor-203426] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022] Open
Abstract
BACKGROUND Human-Computer Interaction (HCI) is incorporated into a variety of applications for input processing and response actions. Facial recognition systems in workplaces and security systems help to improve the detection and classification of humans based on the vision input to the system. OBJECTIVES In this manuscript, the Robotic Facial Recognition System using a Compound Classifier (RERS-CC) is introduced to improve the recognition rate of human faces. The process is divided into classification, detection, and recognition phases that employ principal component analysis based learning. In this learning process, the errors in image processing, based on the different extracted features, are used for error classification and accuracy improvement. RESULTS The performance of the proposed RERS-CC is validated experimentally on an input image dataset in MATLAB. The results show that the proposed method improves detection and recognition accuracy with fewer errors and less processing time. CONCLUSION The input image is processed with knowledge of the features and errors observed at different orientations and time instances. With the help of matching-dataset and similarity-index verification, the proposed method identifies human faces precisely, with augmented true positives and recognition rate.
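The principal-component-analysis step at the core of such a recognition phase can be sketched with a minimal eigen-decomposition plus nearest-neighbour matching. The data, feature dimension, and matching rule below are illustrative assumptions, not the RERS-CC implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic "face" vectors: two identities with small within-class noise (illustrative)
id_a = rng.standard_normal(64)
id_b = rng.standard_normal(64)
train = np.array([id_a + 0.1 * rng.standard_normal(64) for _ in range(10)] +
                 [id_b + 0.1 * rng.standard_normal(64) for _ in range(10)])
labels = [0] * 10 + [1] * 10

mean = train.mean(axis=0)
U, S, Vt = np.linalg.svd(train - mean, full_matrices=False)
components = Vt[:5]                      # top principal components
proj = (train - mean) @ components.T     # training-set projections

def recognize(sample):
    """Nearest neighbour in the PCA subspace (similarity-index style matching)."""
    p = (sample - mean) @ components.T
    return labels[int(np.argmin(np.linalg.norm(proj - p, axis=1)))]
```

A probe vector close to either identity is then assigned that identity's label by the subspace distance.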
Affiliation(s)
- Wang Jing
- School of Computer Science, Baoji University of Arts and Sciences, Baoji, China; Faculty of Computing, IBM CoE, and Earth Resources and Sustainability Center, Universiti Malaysia Pahang, Pahang, Malaysia
| | - Hai Tao
- School of Computer Science, Baoji University of Arts and Sciences, Baoji, China
| | - Md Arafatur Rahman
- Faculty of Computing, IBM CoE, and Earth Resources and Sustainability Center, Universiti Malaysia Pahang, Pahang, Malaysia
| | - Muhammad Nomani Kabir
- Faculty of Computing, IBM CoE, and Earth Resources and Sustainability Center, Universiti Malaysia Pahang, Pahang, Malaysia
| | - Li Yafeng
- School of Computer Science, Baoji University of Arts and Sciences, Baoji, China
| | - Renrui Zhang
- School of Electronics Engineering and Computer Science, Peking University, Beijing, China
| | - Sinan Q Salih
- Institute of Research and Development, Duy Tan University, Da Nang, Vietnam
| | - Jasni Mohamad Zain
- Faculty of Computer and Mathematical Sciences, University Technology MARA, Shah Alam, Malaysia
| |
|
20
|
An J, Zhao Y, Lee J. Cooperative Control of Manipulator and Human Operator for Direct Teaching. INT J HUM ROBOT 2021. [DOI: 10.1142/s0219843621500079] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
A cooperative control scheme for a manipulator and a human operator is proposed in this research for efficient direct teaching. The main goal is to keep the operator comfortable and relaxed while operating the manipulator for direct teaching. The proposed control strategy has two layers: in the first layer, a human motion estimator (HME) is designed to estimate human intention. The recursive least squares method is used in the HME to simultaneously estimate the interaction force and the human arm admittance model. In the second layer, a human motion reactor is designed to track the human motion intention precisely through proportional-derivative control and gravity compensation in real time. Experiments with a 3-degree-of-freedom robotic manipulator guided by a human operator were conducted to draw a diamond shape on a panel. The experimental results demonstrate the effectiveness of the proposed cooperative control strategy.
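The recursive least squares estimation behind such an HME can be sketched on a simulated 1-D interaction. The admittance parameters (mass 2.0, damping 8.0), the signals, and the forgetting factor are assumptions for illustration, not the paper's values.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.99):
    """One recursive least squares step with forgetting factor lam."""
    K = P @ phi / (lam + phi @ P @ phi)      # gain vector
    theta = theta + K * (y - phi @ theta)    # parameter update
    P = (P - np.outer(K, phi) @ P) / lam     # covariance update
    return theta, P

# simulated human-arm admittance: f = m*a + d*v (illustrative values m=2, d=8)
rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 500)
v = np.sin(2 * np.pi * 0.5 * t)              # hand velocity
a = np.gradient(v, t)                        # hand acceleration
f = 2.0 * a + 8.0 * v + 0.01 * rng.standard_normal(t.size)

theta = np.zeros(2)                          # estimates of [m, d]
P = 1e3 * np.eye(2)
for k in range(t.size):
    theta, P = rls_update(theta, P, np.array([a[k], v[k]]), f[k])

print(theta)  # converges toward the true parameters [2.0, 8.0]
```

Because the sinusoidal motion excites both regressors, the estimate settles close to the true mass and damping within a few seconds of data.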
Affiliation(s)
- Jongwoo An
- Electronics Department, Pusan National University, Busan 46241, South Korea
| | - Youdong Zhao
- Electronics Department, Pusan National University, Busan 46241, South Korea
| | - Jangmyung Lee
- Electronics Department, Pusan National University, Busan 46241, South Korea
| |
|
21
|
Atashzar SF, Carriere J, Tavakoli M. Review: How Can Intelligent Robots and Smart Mechatronic Modules Facilitate Remote Assessment, Assistance, and Rehabilitation for Isolated Adults With Neuro-Musculoskeletal Conditions? Front Robot AI 2021; 8:610529. [PMID: 33912593 PMCID: PMC8072151 DOI: 10.3389/frobt.2021.610529] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2020] [Accepted: 02/08/2021] [Indexed: 12/12/2022] Open
Abstract
Worldwide, at the time this article was written, there were over 127 million cases of patients with a confirmed link to COVID-19 and about 2.78 million deaths reported. With limited access to vaccines or strong antiviral treatments for the novel coronavirus, actions in terms of prevention and containment of the virus transmission rely mostly on social distancing among susceptible and high-risk populations. Aside from the direct challenges posed by the novel coronavirus pandemic, there are serious and growing secondary consequences caused by the physical distancing and isolation guidelines, among vulnerable populations. Moreover, the healthcare system's resources and capacity have been focused on addressing the COVID-19 pandemic, causing less urgent care, such as physical neurorehabilitation and assessment, to be paused, canceled, or delayed. Overall, this has left elderly adults, in particular those with neuromusculoskeletal (NMSK) conditions, without the required service support. However, in many cases, such as stroke, the available time window of recovery through rehabilitation is limited since neural plasticity decays quickly with time. Given that future waves of the outbreak are expected in the coming months worldwide, it is important to discuss the possibility of using available technologies to address this issue, as societies have a duty to protect the most vulnerable populations. In this perspective review article, we argue that intelligent robotics and wearable technologies can help with remote delivery of assessment, assistance, and rehabilitation services while physical distancing and isolation measures are in place to curtail the spread of the virus. By supporting patients and medical professionals during this pandemic, robots and smart digital mechatronic systems can reduce the non-COVID-19 burden on healthcare systems.
Digital health and cloud telehealth solutions that can complement remote delivery of assessment and physical rehabilitation services will be the subject of discussion in this article due to their potential in enabling more effective and safer NMSK rehabilitation, assistance, and assessment service delivery. This article will hopefully lead to an interdisciplinary dialogue between the medical and engineering sectors, stakeholders, and policy makers for a better delivery of care for those with NMSK conditions during a global health crisis, including future pandemics.
Affiliation(s)
- S. Farokh Atashzar
- Department of Electrical and Computer Engineering, Department of Mechanical and Aerospace Engineering, New York University, New York, NY, United States
| | - Jay Carriere
- Department of Electrical and Computer Engineering, University of Alberta, Edmonton, AB, Canada
| | - Mahdi Tavakoli
- Department of Electrical and Computer Engineering, University of Alberta, Edmonton, AB, Canada
| |
|
22
|
Yamakawa Y, Matsui Y, Ishikawa M. Development of a Real-Time Human-Robot Collaborative System Based on 1 kHz Visual Feedback Control and Its Application to a Peg-in-Hole Task. SENSORS 2021; 21:s21020663. [PMID: 33478053 PMCID: PMC7835757 DOI: 10.3390/s21020663] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/22/2020] [Revised: 01/15/2021] [Accepted: 01/16/2021] [Indexed: 11/22/2022]
Abstract
In this research, we focused on human-robot collaboration with two goals: (1) to develop and evaluate a real-time human-robot collaborative system, and (2) to achieve concrete tasks such as collaborative peg-in-hole using the developed system. We proposed an algorithm for visual sensing and robot hand control to perform collaborative motion, and we analyzed the stability of the collaborative system and a so-called collaborative error caused by image processing and latency. We achieved collaborative motion using this developed system and evaluated the collaborative error on the basis of the analysis results. Moreover, we aimed to realize a collaborative peg-in-hole task that required a system with high speed and high accuracy. To achieve this goal, we analyzed the conditions required for performing the collaborative peg-in-hole task from the viewpoints of geometric, force, and posture conditions. Finally, we show the experimental results and data of the collaborative peg-in-hole task, and we examine the effectiveness of our collaborative system.
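Why 1 kHz feedback matters can be seen from a first-order model in which the tracking error scales with target speed times total delay (sampling period plus processing latency). This back-of-the-envelope model and its numbers are assumptions for illustration, not the paper's analysis.

```python
def tracking_lag_error(target_speed, loop_period, processing_latency):
    """First-order bound: position error ~ speed * (sampling period + latency)."""
    return target_speed * (loop_period + processing_latency)

# a hand moving at 0.5 m/s with 2 ms of image-processing latency (illustrative)
err_1khz = tracking_lag_error(0.5, 1e-3, 2e-3)    # 1 kHz visual feedback
err_30hz = tracking_lag_error(0.5, 1 / 30, 2e-3)  # conventional 30 Hz camera
print(err_1khz, err_30hz)  # ~1.5 mm vs ~17.7 mm
```

Under this model, raising the visual feedback rate from 30 Hz to 1 kHz shrinks the lag-induced error by more than an order of magnitude, which is what makes a tight-tolerance task like peg-in-hole feasible.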
Affiliation(s)
- Yuji Yamakawa
- Interfaculty Initiative in Information Studies, The University of Tokyo, Tokyo 153-8505, Japan
- Correspondence: ; Tel.: +81-3-5452-6178
| | - Yutaro Matsui
- Graduate School of Information Science and Technology, The University of Tokyo, Tokyo 113-8656, Japan;
| | - Masatoshi Ishikawa
- Information Technology Center, The University of Tokyo, Tokyo 113-8656, Japan;
| |
|
23
|
Abu-Dakka FJ, Saveriano M. Variable Impedance Control and Learning-A Review. Front Robot AI 2020; 7:590681. [PMID: 33501348 PMCID: PMC7805898 DOI: 10.3389/frobt.2020.590681] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2020] [Accepted: 10/22/2020] [Indexed: 11/13/2022] Open
Abstract
Robots that physically interact with their surroundings, in order to accomplish tasks or assist humans in their activities, must exploit contact forces in a safe and proficient manner. Impedance control is considered a prominent approach in robotics for avoiding large impact forces while operating in unstructured environments. In such environments, the conditions under which the interaction occurs may vary significantly during task execution. This demands that robots be endowed with online adaptation capabilities to cope with sudden and unexpected changes in the environment. In this context, variable impedance control arises as a powerful tool to modulate the robot's behavior in response to variations in its surroundings. In this survey, we present the state of the art of approaches to variable impedance control from control and learning perspectives (separately and jointly). Moreover, we propose a new taxonomy for mechanical impedance based on variability, learning, and control. The objective of this survey is to bring together the concepts and efforts made so far in this field, and to describe the advantages and disadvantages of each approach. The survey concludes with open issues in the field and an envisioned framework that may potentially solve them.
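What modulating the impedance buys can be seen in a minimal sketch: under the same external force, a second-order impedance model deflects far less when its stiffness is raised. The model and gain values are illustrative assumptions, not taken from any surveyed method.

```python
import numpy as np

def simulate_impedance(K, f_ext=5.0, M=1.0, dt=1e-3, T=5.0):
    """Semi-implicit Euler simulation of M*ddx + D*dx + K*x = f_ext."""
    D = 2.0 * np.sqrt(K * M)        # critically damped for the chosen stiffness
    x, dx = 0.0, 0.0
    for _ in range(int(T / dt)):
        ddx = (f_ext - D * dx - K * x) / M
        dx += ddx * dt
        x += dx * dt
    return x                         # steady-state deflection ~ f_ext / K

soft = simulate_impedance(K=50.0)    # compliant phase: large deflection
stiff = simulate_impedance(K=500.0)  # stiff phase: small deflection
print(soft, stiff)
```

A variable impedance controller schedules K (and D) online, e.g. high stiffness for precise positioning and low stiffness when contact with a human or an uncertain environment is expected.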
Affiliation(s)
- Fares J. Abu-Dakka
- Intelligent Robotics Group, Department of Electrical Engineering and Automation (EEA), Aalto University, Espoo, Finland
| | - Matteo Saveriano
- Intelligent and Interactive Systems, Department of Computer Science and Digital Science Center (DiSC), University of Innsbruck, Innsbruck, Austria
| |
|
24
|
Stouraitis T, Chatzinikolaidis I, Gienger M, Vijayakumar S. Online Hybrid Motion Planning for Dyadic Collaborative Manipulation via Bilevel Optimization. IEEE T ROBOT 2020. [DOI: 10.1109/tro.2020.2992987] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
25
|
Plug-and-play supervisory control using muscle and brain signals for real-time gesture and error detection. Auton Robots 2020. [DOI: 10.1007/s10514-020-09916-x] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
Effective human supervision of robots can be key for ensuring correct robot operation in a variety of potentially safety-critical scenarios. This paper takes a step towards fast and reliable human intervention in supervisory control tasks by combining two streams of human biosignals: muscle and brain activity acquired via EMG and EEG, respectively. It presents continuous classification of left and right hand-gestures using muscle signals, time-locked classification of error-related potentials using brain signals (unconsciously produced when observing an error), and a framework that combines these pipelines to detect and correct robot mistakes during multiple-choice tasks. The resulting hybrid system is evaluated in a “plug-and-play” fashion with 7 untrained subjects supervising an autonomous robot performing a target selection task. Offline analysis further explores the EMG classification performance, and investigates methods to select subsets of training data that may facilitate generalizable plug-and-play classifiers.
|
26
|
Luo J, He W, Yang C. Combined perception, control, and learning for teleoperation: key technologies, applications, and challenges. COGNITIVE COMPUTATION AND SYSTEMS 2020. [DOI: 10.1049/ccs.2020.0005] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/15/2023] Open
Affiliation(s)
- Jing Luo
- Key Laboratory of Autonomous Systems and Networked Control, School of Automation Science and Engineering, South China University of Technology, Guangzhou 510640, People's Republic of China
| | - Wei He
- School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, People's Republic of China
| | - Chenguang Yang
- Key Laboratory of Autonomous Systems and Networked Control, School of Automation Science and Engineering, South China University of Technology, Guangzhou 510640, People's Republic of China
| |
|
27
|
Shared Haptic Perception for Human-Robot Collaboration. HAPTICS: SCIENCE, TECHNOLOGY, APPLICATIONS 2020. [DOI: 10.1007/978-3-030-58147-3_59] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/23/2022]
|
28
|
Bioinspired Implementation and Assessment of a Remote-Controlled Robot. Appl Bionics Biomech 2019; 2019:8575607. [PMID: 31611928 PMCID: PMC6755284 DOI: 10.1155/2019/8575607] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/26/2019] [Revised: 05/09/2019] [Accepted: 08/21/2019] [Indexed: 11/17/2022] Open
Abstract
Daily activities are characterized by increasing interaction with smart machines that present a certain level of autonomy. However, the intelligence of such electronic devices is not always transparent to the end user. This study aims to assess the quality of the remote control of a mobile robot depending on whether or not the artefact exhibits human-like behavior. The bioinspired behavior implemented in the robot is the well-described two-thirds power law. The performance of participants who teleoperate the semiautonomous vehicle implementing the biological law is compared to a manual and a nonbiological mode of control. The results show that the time required to complete the path and the number of collisions with obstacles are significantly lower in the biological condition than in the two other conditions. Also, the highest percentage of curvilinear or smooth trajectories is obtained when the steering is assisted by an integration of the power law in the robot's way of working. This advanced analysis of the performance, based on the naturalness of the movement kinematics, provides a refined evaluation of the quality of the Human-Machine Interaction (HMI). This finding is consistent with the hypothesis of a relationship between the power law and jerk minimization. In addition, the outcome of this study supports the theory of a CNS origin of the power law. The discussion addresses the implications of the anthropocentric approach to enhancing the HMI.
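The two-thirds power law relates tangential speed to path curvature as v = g * kappa^(-1/3) (equivalently, angular speed scales with curvature to the 2/3 power). A minimal sketch on an elliptical path, with an assumed unit gain:

```python
import numpy as np

def power_law_speed(curvature, gain=1.0, beta=1.0 / 3.0):
    """Two-thirds power law: tangential speed v = gain * curvature**(-beta)."""
    return gain * np.power(curvature, -beta)

# curvature of an ellipse x = a*cos(s), y = b*sin(s), with a=2, b=1
a, b = 2.0, 1.0
s = np.linspace(0.0, 2.0 * np.pi, 400)
kappa = a * b / (a**2 * np.sin(s)**2 + b**2 * np.cos(s)**2) ** 1.5

v = power_law_speed(kappa)
# speed is lowest at the tightly curved ends of the major axis (s = 0, pi)
print(v.min(), v.max())
```

Embedding this speed profile in the robot's steering is what gives the assisted mode its human-like, smooth kinematics.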
|
29
|
Abstract
Telerobotic systems have attracted growing attention because of their superiority in dangerous or unknown interaction tasks. It is very challenging to exploit such systems to implement complex tasks autonomously. In this paper, we propose a task learning framework to represent the manipulation skill demonstrated by a remotely controlled robot. A Gaussian mixture model is utilized to encode and parametrize the smooth task trajectory according to observations from the demonstrations. After encoding the demonstrated trajectory, a new task trajectory is generated based on the variability information of the learned model. Experimental results have demonstrated the feasibility of the proposed method.
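A common way to realize this encode-then-regenerate pipeline is Gaussian mixture regression over a joint (time, position) GMM; whether the authors used exactly this conditioning is an assumption here, and the demonstration data below are synthetic.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# synthetic demonstrations: five noisy repetitions of a smooth 1-D trajectory
rng = np.random.default_rng(1)
t = np.tile(np.linspace(0.0, 1.0, 100), 5)
x = np.sin(2 * np.pi * t) + 0.05 * rng.standard_normal(t.size)
data = np.column_stack([t, x])

gmm = GaussianMixture(n_components=5, covariance_type="full",
                      random_state=0).fit(data)

def gmr(t_query):
    """Gaussian mixture regression: E[x | t] under the fitted joint GMM."""
    t_query = np.atleast_1d(t_query)
    mu, cov, w = gmm.means_, gmm.covariances_, gmm.weights_
    out = np.zeros(t_query.size)
    for i, tq in enumerate(t_query):
        # responsibility of each component for the input t
        # (the common 1/sqrt(2*pi) factor cancels in the normalization)
        pdf = np.array([w[k] * np.exp(-0.5 * (tq - mu[k, 0])**2 / cov[k, 0, 0])
                        / np.sqrt(cov[k, 0, 0]) for k in range(len(w))])
        h = pdf / pdf.sum()
        # conditional mean of x given t for each component
        cond = mu[:, 1] + cov[:, 1, 0] / cov[:, 0, 0] * (t_query[i] - mu[:, 0])
        out[i] = h @ cond
    return out
```

Querying `gmr` over a time grid regenerates a smoothed mean trajectory; the component covariances additionally carry the variability information mentioned in the abstract.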
Affiliation(s)
- Jing Luo
- Key Laboratory of Autonomous Systems and Networked Control, College of Automation Science and Engineering, South China University of Technology, Guangzhou 510640, P. R. China
| | - Chenguang Yang
- Bristol Robotics Laboratory, University of the West of England, Bristol, UK
| | - Qiang Li
- Neuroinformatics Group, CITEC, Bielefeld University, Bielefeld, Germany
| | - Min Wang
- Key Laboratory of Autonomous Systems and Networked Control, College of Automation Science and Engineering, South China University of Technology, Guangzhou 510640, P. R. China
| |
|
30
|
Wu R, Zhang H, Peng T, Fu L, Zhao J. Variable impedance interaction and demonstration interface design based on measurement of arm muscle co-activation for demonstration learning. Biomed Signal Process Control 2019. [DOI: 10.1016/j.bspc.2019.02.008] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
|
31
|
Senoo T, Murakami K, Ishikawa M. Deformation Control of a Manipulator Based on the Zener Model. JOURNAL OF ROBOTICS AND MECHATRONICS 2019. [DOI: 10.20965/jrm.2019.p0263] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
In this study, passive dynamic control of a manipulator is designed and realized. According to the control strategy, the shift in the position and orientation of an end effector attributable to an external force is regarded as deformation of the robot. The Zener model, known as a standard linear solid model, is used to generate the deformable behavior, which describes the combination of plastic and elastic deformation. Based on the relation analysis between the Zener model and two other deformable models, two types of control methods are proposed in terms of the model’s expression. Physical simulations with a robotic arm are executed to validate the proposed control laws.
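The Zener (standard linear solid) element combines a spring E1 in parallel with a Maxwell branch (spring E2 in series with a damper), so its apparent stiffness is E1+E2 on impact and relaxes to E1 over time. A minimal creep simulation under a step force, with assumed parameters rather than the paper's control laws:

```python
# Zener / standard linear solid: spring E1 in parallel with (E2 in series with damper)
E1, E2, ETA = 1.0, 4.0, 2.0          # illustrative stiffnesses [N/m], damping [Ns/m]
F0, DT, T = 1.0, 1e-3, 20.0          # step force, time step, horizon

x2 = 0.0                              # damper elongation (internal state)
xs = []
for _ in range(int(T / DT)):
    # force balance: F0 = E1*x + E2*(x - x2)  ->  solve for the deformation x
    x = (F0 + E2 * x2) / (E1 + E2)
    x2 += (E2 * (x - x2) / ETA) * DT  # damper rate set by the Maxwell-branch force
    xs.append(x)

print(xs[0], xs[-1])  # instantaneous F0/(E1+E2) = 0.2, creeping toward F0/E1 = 1.0
```

This is the plastic-plus-elastic behavior the abstract describes: an immediate elastic deflection followed by a slow, damper-mediated drift of the end effector under a sustained external force.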
|
32
|
Su H, Enayati N, Vantadori L, Spinoglio A, Ferrigno G, De Momi E. Online human-like redundancy optimization for tele-operated anthropomorphic manipulators. INT J ADV ROBOT SYST 2018. [DOI: 10.1177/1729881418814695] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
Robot human-like behavior can enhance the performance of human-robot cooperation through markedly more natural interaction. This also holds for redundant robots with anthropomorphic kinematics. In this article, we translated the human ability to manage redundancy into control of a seven-degrees-of-freedom anthropomorphic robot arm (LWR4+, KUKA, Germany) during tele-operated tasks. We implemented a neural-network-based nonlinear regression between the human arm elbow swivel angle and the hand target pose to achieve an anthropomorphic arm posture during tele-operation tasks. The method was assessed in simulation, and experiments were performed with virtual reality tracking tasks in a lab environment. The results showed that the robot achieves a human-like arm posture during tele-operation, and that subjects prefer to work with the biologically inspired robot. The proposed method can be applied to the control of anthropomorphic robot manipulators for tele-operated collaborative tasks, such as in factories or in operating rooms.
Affiliation(s)
- Hang Su
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
| | - Nima Enayati
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
| | - Luca Vantadori
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
| | - Andrea Spinoglio
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
| | - Giancarlo Ferrigno
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
| | - Elena De Momi
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
| |
|
33
|
Peternel L, Tsagarakis N, Caldwell D, Ajoudani A. Robot adaptation to human physical fatigue in human–robot co-manipulation. Auton Robots 2017. [DOI: 10.1007/s10514-017-9678-1] [Citation(s) in RCA: 65] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
|