1
Konecny J, Beremlijski P, Bailova M, Machacek Z, Koziorek J, Prauzek M. Industrial camera model positioned on an effector for automated tool center point calibration. Sci Rep 2024; 14:323. PMID: 38172245; PMCID: PMC10764955; DOI: 10.1038/s41598-023-51011-5.
Abstract
The study presents a novel, full model of an industrial camera suitable for robotic manipulator tool center point (TCP) calibration. The authors propose a new solution which employs a full camera model positioned on the effector of an industrial robotic arm. The proposed full camera model simulates the capture of a calibration pattern for use in automated TCP calibration. The study describes an experimental test robot stand for producing a reference data set, a full camera model, the parameters of the generally known camera obscura model, and a comparison of the proposed solution with the camera obscura model. The results are discussed in the context of an innovative approach which features a full camera model to assist the TCP calibration process. The results showed that the full camera model produced greater accuracy, a significant benefit not provided by other state-of-the-art methods. In several cases, the absolute error produced was up to seven times lower than with the state-of-the-art camera obscura model. The error for small rotations (max. 5°) and small translations (max. 20 mm) was 3.65 pixels. The results also highlighted the applicability of the proposed solution in real-life industrial processes.
Affiliation(s)
- Jaromir Konecny
- VSB - Technical University of Ostrava, 17. listopadu 2172/15, Ostrava, 708 00, Czech Republic.
- Petr Beremlijski
- VSB - Technical University of Ostrava, 17. listopadu 2172/15, Ostrava, 708 00, Czech Republic
- Michaela Bailova
- VSB - Technical University of Ostrava, 17. listopadu 2172/15, Ostrava, 708 00, Czech Republic
- Zdenek Machacek
- VSB - Technical University of Ostrava, 17. listopadu 2172/15, Ostrava, 708 00, Czech Republic
- Jiri Koziorek
- VSB - Technical University of Ostrava, 17. listopadu 2172/15, Ostrava, 708 00, Czech Republic
- Michal Prauzek
- VSB - Technical University of Ostrava, 17. listopadu 2172/15, Ostrava, 708 00, Czech Republic
2
Franco-López A, Maya M, González A, Cardenas A, Piovesan D. Depth-Dependent Control in Vision-Sensor Space for Reconfigurable Parallel Manipulators. Sensors (Basel) 2023; 23:7039. PMID: 37631577; PMCID: PMC10459706; DOI: 10.3390/s23167039.
Abstract
In this paper, a control approach for reconfigurable parallel robots is designed. Based on it, controls in the vision-sensor, 3D, and joint spaces are designed and implemented for target tracking tasks on a novel reconfigurable delta-type parallel robot. No a priori information about the target trajectory is required. Robot reconfiguration can be used to overcome some of the limitations of parallel robots, such as a small relative workspace or multiple singularities, at the cost of increasing the complexity of the manipulator, which makes its control design even more challenging. No general control methodology exists for reconfigurable parallel robots. Tracking objects with unknown trajectories is a challenging task required in many applications. Sensor-based robot control has been actively used for this type of task; however, it cannot be straightforwardly extended to reconfigurable parallel manipulators. The developed vision-sensor space control is inspired by, and can be seen as an extension of, the Velocity Linear Camera Model-Camera Space Manipulation (VLCM-CSM) methodology. Several experiments were carried out on a reconfigurable delta-type parallel robot. An average positioning error of 0.6 mm was obtained for static targets. Tracking errors of 2.5 mm, 3.9 mm, and 11.5 mm were obtained for targets moving along a linear trajectory at speeds of 6.5, 9.3, and 12.7 cm/s, respectively. The control cycle time was 16 ms. These results validate the proposed approach and improve upon previous work on non-reconfigurable robots.
Affiliation(s)
- Arturo Franco-López
- Facultad de Ingenieria, Universidad Autonoma de San Luis Potosi, San Luis Potosi 78290, Mexico
- Mauro Maya
- Facultad de Ingenieria, Universidad Autonoma de San Luis Potosi, San Luis Potosi 78290, Mexico
- Alejandro González
- Escuela de Ingeniería y Ciencias, Tecnologico de Monterrey, Querétaro 76130, Mexico
- Antonio Cardenas
- Facultad de Ingenieria, Universidad Autonoma de San Luis Potosi, San Luis Potosi 78290, Mexico
- Davide Piovesan
- Biomedical, Industrial and Systems Engineering Department, Gannon University, Erie, PA 16541, USA
3
Loredo A, Maya M, González A, Cardenas A, Gonzalez-Galvan E, Piovesan D. A Novel Velocity-Based Control in a Sensor Space for Parallel Manipulators. Sensors (Basel) 2022; 22:7323. PMID: 36236421; PMCID: PMC9571703; DOI: 10.3390/s22197323.
Abstract
Tracking objects that move along an unknown trajectory is a challenging task. Conventional model-based controllers require detailed knowledge of a robot's kinematics and of the target's trajectory, and tracking precision relies heavily on that kinematic knowledge to infer the trajectory. Control implementation in parallel robots is especially difficult due to their complex kinematics. Vision-based controllers are robust to uncertainties in a robot's kinematic model since they can correct end-point trajectories as error estimates become available. Robustness is guaranteed by taking the vision sensor's model into account when designing the control law. All camera space manipulation (CSM) models in the literature are position-based: they establish a mapping between the end effector position in Cartesian space and in sensor space. Such models are not appropriate for tracking moving targets because the relationship between the target and the end effector is a fixed point. The present work builds upon the literature by presenting a novel CSM velocity-based control that establishes a relationship between a movable trajectory and the end effector position. Its efficacy is shown on a Delta-type parallel robot. Three types of experiments were performed: (a) static tracking (average error of 1.09 mm); (b) constant-speed linear trajectory tracking at 7, 9.5, and 12 cm/s (tracking errors of 8.89, 11.76, and 18.65 mm, respectively); and (c) freehand trajectory tracking (max tracking error of 11.79 mm during motion and max static positioning error of 1.44 mm once the object stopped). The resulting control cycle time was 48 ms. The results show a reduction in tracking errors for this robot with respect to previously published control strategies.
Affiliation(s)
- Antonio Loredo
- Facultad de Ingeniería, Universidad Autónoma de San Luis Potosí, San Luis Potosi 78290, Mexico
- Mauro Maya
- Facultad de Ingeniería, Universidad Autónoma de San Luis Potosí, San Luis Potosi 78290, Mexico
- Alejandro González
- Tecnologico de Monterrey, Escuela de Ingenieria y Ciencias, Queretaro 76130, Mexico
- Antonio Cardenas
- Facultad de Ingeniería, Universidad Autónoma de San Luis Potosí, San Luis Potosi 78290, Mexico
- Emilio Gonzalez-Galvan
- Facultad de Ingeniería, Universidad Autónoma de San Luis Potosí, San Luis Potosi 78290, Mexico
- Davide Piovesan
- Biomedical Engineering, Gannon University, Erie, PA 16541, USA
4
An Embedded Quaternion-Based Extended Kalman Filter Pose Estimation for Six Degrees of Freedom Systems. J Intell Robot Syst 2021. DOI: 10.1007/s10846-021-01377-3.
5
González A, Gonzalez-Galvan EJ, Maya M, Cardenas A, Piovesan D. Estimation of camera-space manipulation parameters by means of an extended Kalman filter: Applications to parallel robots. Int J Adv Robot Syst 2019. DOI: 10.1177/1729881419842987.
Abstract
Parallel robots have a growing range of applications due to their appealing characteristics (high speed and acceleration, increased rigidity, etc.). However, several open problems make them difficult to model and control. Low computational-cost algorithms are needed for high-speed tasks where high accelerations are required. This article develops the nonlinear camera-space manipulation method and makes use of an extended Kalman filter (EKF) to estimate the camera-space manipulation parameters. This is presented as an alternative to the traditional method, which can be slow to converge. The proposed camera-space manipulation parameter identification was performed in positioning tasks for a parallel manipulator, and the experimental results are reported. The results show that it is possible to estimate the set of camera-space manipulation parameters by means of an extended Kalman filter. Using the proposed Kalman filter method, we observed a significant reduction of the computational effort when estimating the camera-space manipulation parameters; however, there was no significant reduction of the robot's positioning error. The proposed extended Kalman filter implementation requires only 2 ms to update the camera-space manipulation parameters, compared to the 85 ms required by the traditional camera-space manipulation algorithm. Such a time reduction is beneficial for a wide range of high-speed and industrial applications. This article presents a novel use of an extended Kalman filter for the real-time estimation of the camera-space manipulation parameters and shows that it can be used to increase the positioning accuracy of a parallel robot.
Affiliation(s)
- Alejandro González
- Faculty of Engineering, CONACyT-Universidad Autónoma de San Luis Potosi, San Luis Potosi, Mexico
- Mauro Maya
- Faculty of Engineering, Universidad Autónoma de San Luis Potosi, San Luis Potosi, Mexico
- Antonio Cardenas
- Faculty of Engineering, Universidad Autónoma de San Luis Potosi, San Luis Potosi, Mexico
- Davide Piovesan
- Biomedical, Industrial and Systems Engineering Department, Gannon University, PA, USA
6
Coronado E, Maya M, Cardenas A, Guarneros O, Piovesan D. Vision-based Control of a Delta Parallel Robot via Linear Camera-Space Manipulation. J Intell Robot Syst 2016. DOI: 10.1007/s10846-016-0413-5.
7
Chavez-Romero R, Cardenas A, Rendon-Mancha JM, Vernaza KM, Piovesan D. Inexpensive Vision-Based System for the Direct Measurement of Ankle Stiffness During Quiet Standing. J Med Device 2015. DOI: 10.1115/1.4031060.
Abstract
We created a sensor-fusion suite for the acquisition of biometric information that can be used to estimate human control strategy in a variety of everyday tasks. This work focuses on the experimental validation of the integrated motion capture subsystem based on raster images. Understanding the human control strategies utilized in everyday activity requires the measurement of several variables that can be grouped as kinematic, dynamic, and biological-feedback variables. Hence, there is a strong need for the acquisition, analysis, and synchronization of the information measured by a variety of transducers. Our system was able to capture the complex dynamics of a flexible robot by means of two inexpensive web cameras without compromising accuracy. After validating the vision system by means of the robotic device, a direct measurement of the center of gravity (COG) position during recovery from a fall was performed on two groups of human subjects separated by age. The instrumental setup was used to estimate how ankle operational stiffness changes as a function of age. The results indicate a statistically significant increase in stiffness for the older group.
Affiliation(s)
- Raul Chavez-Romero
- Unidad Académica de Ingeniería I, Programa de Ingeniería Mecánica, Universidad Autónoma de Zacatecas, Jardín Juárez #147, Zacatecas 98000, México
- Antonio Cardenas
- Facultad de Ingeniería, Centro de Investigación y Estudios de Posgrado, Universidad Autónoma de San Luis Potosí, Avenue Dr. Manuel Nava #9, San Luis Potosí 78290, México
- Juan Manuel Rendon-Mancha
- Departamento de Computación, Universidad Autónoma del Estado de Morelos, Avenue Universidad #1001, Cuernavaca, Morelos 62209, México
- Karinna M. Vernaza
- Department of Mechanical Engineering, Gannon University, 109 University Square, Erie, PA 16541-0001
- Davide Piovesan
- Biomedical Engineering Program, Department of Mechanical Engineering, Gannon University, 109 University Square, PMB 3251, Erie, PA 16541-0001
8
Martinez-Gomez J, Fernandez-Caballero A, Garcia-Varea I, Rodriguez L, Romero-Gonzalez C. A Taxonomy of Vision Systems for Ground Mobile Robots. Int J Adv Robot Syst 2014. DOI: 10.5772/58900.
Abstract
This paper introduces a taxonomy of vision systems for ground mobile robots. In the last five years, a significant number of relevant papers have contributed to this subject. First, a thorough review of these papers is conducted to discuss and classify both past and current approaches in the field. As a result, a global picture of the state of the art of the last five years is obtained. Moreover, the study of the articles is used to put forward a comprehensive taxonomy based on the most up-to-date research in ground mobile robotics. In this sense, the paper aims to be especially helpful to both budding and experienced researchers in the areas of vision systems and mobile ground robots. The taxonomy is devised from a novel perspective, namely to respond to the main questions posed when designing robotic vision systems: why?, what for?, what with?, how?, and where? The answers are derived from the most relevant techniques described in the recent literature, leading in a natural way to a series of classifications that are discussed and contextualized. The article offers a global picture of the state of the art in the area and identifies some promising research lines.
Affiliation(s)
- Jesus Martinez-Gomez
- Universidad de Castilla-La Mancha, Departamento de Sistemas Informaticos, Albacete, Spain
- Ismael Garcia-Varea
- Universidad de Castilla-La Mancha, Departamento de Sistemas Informaticos, Albacete, Spain
- Luis Rodriguez
- Universidad de Castilla-La Mancha, Departamento de Sistemas Informaticos, Albacete, Spain