1. Functionalities, Benchmarking System and Performance Evaluation for a Domestic Service Robot: People Perception, People Following, and Pick and Placing. APPLIED SCIENCES-BASEL 2022. DOI: 10.3390/app12104819
Abstract
This paper describes the development of three main functionalities for a domestic mobile service robot and an automatic benchmarking system used for the systematic performance evaluation of the robot’s functionalities. Three main robot functionalities are addressed: (1) People Perception, (2) People Following and (3) Pick and Placing. The hardware and software systems developed for each functionality are described and demonstrated on an actual mobile service robot, with the goal of providing assistance to an elderly person inside the house. Furthermore, a set of innovative benchmarks and an automatic performance evaluation system are proposed and used to evaluate the performance of the developed functionalities. These benchmarks have been made publicly available and are now part of the European Robotics League (ERL)-Consumer, where they are used to systematically evaluate the performance of service robot solutions at different testbeds around Europe.
2. Kamtikar S, Marri S, Walt B, Uppalapati NK, Krishnan G, Chowdhary G. Visual Servoing for Pose Control of Soft Continuum Arm in a Structured Environment. IEEE Robot Autom Lett 2022. DOI: 10.1109/lra.2022.3155821
3. Classical and Deep Learning based Visual Servoing Systems: a Survey on State of the Art. J INTELL ROBOT SYST 2021. DOI: 10.1007/s10846-021-01540-w
4. Zhou P, Zhu J, Huo S, Navarro-Alarcon D. LaSeSOM: A Latent and Semantic Representation Framework for Soft Object Manipulation. IEEE Robot Autom Lett 2021. DOI: 10.1109/lra.2021.3074872
5. Caron G. Defocus-Based Direct Visual Servoing. IEEE Robot Autom Lett 2021. DOI: 10.1109/lra.2021.3067845
6. Image Guided Visual Tracking Control System for Unmanned Multirotor Aerial Vehicle with Uncertainty. ROBOTICS 2020. DOI: 10.3390/robotics9040103
Abstract
This paper presents a wavelet-based image-guided tracking control system for an unmanned multirotor aerial vehicle in the presence of uncertainty. The visual signals for the visual tracking process are constructed from wavelet coefficients. The design uses a multiresolution interaction matrix, built from half-resolution and detail images, to relate the time variation of the wavelet coefficients to the velocity of the aerial vehicle for use in the controller. The proposed design is evaluated on a virtual quadrotor aerial vehicle to demonstrate the effectiveness of the wavelet-based visual tracking system in the presence of uncertainty. In contrast to classical visual tracking techniques, the wavelet-based method does not require a separate image-processing stage.
7. Sahu UK, Patra D, Subudhi B. Vision-based tip position tracking control of two-link flexible manipulator. IET CYBER-SYSTEMS AND ROBOTICS 2020. DOI: 10.1049/iet-csr.2019.0035
Affiliation(s)
- Umesh Kumar Sahu
  - Department of Electrical Engineering, National Institute of Technology Rourkela, 769008, India
  - Department of Electronics and Telecommunication, G H Raisoni College of Engineering, Nagpur, Maharashtra 440016, India
- Dipti Patra
  - Department of Electrical Engineering, National Institute of Technology Rourkela, 769008, India
- Bidyadhar Subudhi
  - Department of Electrical Engineering, National Institute of Technology Rourkela, 769008, India
  - School of Electrical Sciences, Indian Institute of Technology Goa, Ponda, Goa 401403, India
9.

Abstract
This paper proposes a trifocal tensor-based approach for six-degree-of-freedom visual servoing. The trifocal tensor model among the current, desired, and reference views is constructed to describe the geometric relationship of the system. More precisely, to ensure computational consistency of the trifocal tensor, a virtual reference view is introduced by exploiting the transfer relationships between the initial and desired images. Instead of resorting to explicit estimation of the camera pose, a set of visual features with satisfactory decoupling properties is constructed from the tensor elements. Based on the selected features, a visual controller is developed to regulate the camera to a desired pose, and an adaptive update law is used to compensate for the unknown distance scale factor. Furthermore, the system stability is analyzed via Lyapunov-based techniques, showing that the proposed controller can achieve almost global asymptotic stability. Both simulation and experimental results are provided to demonstrate the effectiveness and robustness of our approach under different conditions and case studies.
Affiliation(s)
- Kaixiang Zhang
  - State Key Laboratory of Industrial Control Technology, College of Control Science and Engineering, Zhejiang University, Hangzhou, China
- Jian Chen
  - State Key Laboratory of Industrial Control Technology, College of Control Science and Engineering, Zhejiang University, Hangzhou, China
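Controllers of the kind this abstract describes build on the classic feature-regulation law v = -λ L⁺ (s - s*). The snippet below is a minimal numerical sketch of that generic loop only, with an assumed linear feature model standing in for the paper's tensor-derived features and adaptive scale compensation; the interaction matrix, pose offset, and gains are all arbitrary illustrative values.

```python
import numpy as np

def servo_step(s, s_star, L, lam=1.0):
    """Classic visual-servoing law: camera velocity v = -lambda * L^+ (s - s*)."""
    return -lam * np.linalg.pinv(L) @ (s - s_star)

# Assumed toy model: features depend linearly on the camera pose offset p.
L = np.array([[1.0, 0.2, 0.0],
              [0.0, 1.0, 0.1],
              [0.3, 0.0, 1.0],
              [0.1, 0.1, 0.5]])   # interaction matrix (4 features, 3 DOF)
p = np.array([0.4, -0.3, 0.2])   # initial pose offset from the goal
s_star = np.zeros(4)             # desired feature values at the goal pose
for _ in range(100):
    v = servo_step(L @ p, s_star, L)
    p = p + 0.1 * v              # integrate the commanded velocity (dt = 0.1)
# The feature error ||L p - s*|| decays geometrically toward zero.
```

Because L here has full column rank, L⁺L is the identity and the pose offset contracts by a constant factor each step, which is the discrete-time analogue of the exponential decrease the Lyapunov analysis guarantees.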
10. Li B, Zhang X, Fang Y, Shi W. Visual Servoing of Wheeled Mobile Robots Without Desired Images. IEEE TRANSACTIONS ON CYBERNETICS 2019; 49:2835-2844. PMID: 29994554. DOI: 10.1109/tcyb.2018.2828333
Abstract
This paper proposes a novel monocular visual servoing strategy that can drive a wheeled mobile robot to the desired pose without a prerecorded desired image. Compared with existing methods that adopt a teaching pattern for visual regulation, this scheme still works well when the desired image has not been previously acquired, which makes it more convenient for mobile robots to execute visual servoing tasks. Specifically, to deal with the nonexistence of the desired image, the reference frame is carefully defined by taking advantage of visual targets and the planar motion constraint, and a pose estimation algorithm is designed for the mobile robot with respect to the reference frame. Then, an adaptive visual regulation controller is developed to drive the mobile robot to the intermediate frame, where the parameter updating law for the unknown feature height is constructed based on the concurrent learning framework. Stability analysis shows that the regulation errors and the height identification error converge simultaneously. Afterwards, the mobile robot is driven to the metric desired pose using the identified feature height. Both simulation and experimental results are provided to validate the performance of this strategy.
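The regulation half of such a scheme is easiest to see in polar coordinates. Below is a sketch of the standard unicycle pose-regulation controller, used here as a generic stand-in that assumes the vision pipeline already supplies the robot's pose estimate; the gains and simulated start pose are arbitrary choices, not the paper's adaptive law.

```python
import numpy as np

def polar_regulation(x, y, theta, k_rho=1.0, k_alpha=3.0, k_beta=-0.5):
    """Standard polar-coordinate regulation of a unicycle to the pose (0, 0, 0).
    Returns forward velocity v and angular velocity w."""
    rho = np.hypot(x, y)                              # distance to the goal
    alpha = np.arctan2(-y, -x) - theta                # heading error toward the goal
    alpha = np.arctan2(np.sin(alpha), np.cos(alpha))  # wrap to (-pi, pi]
    beta = -theta - alpha                             # final-orientation error
    return k_rho * rho, k_alpha * alpha + k_beta * beta

# Closed-loop simulation from an assumed start pose (illustrative values).
x, y, theta, dt = -1.0, -0.5, 0.0, 0.01
for _ in range(2000):
    v, w = polar_regulation(x, y, theta)
    x += v * np.cos(theta) * dt
    y += v * np.sin(theta) * dt
    theta += w * dt
# The robot converges to the goal position.
```

The stability condition for this classic controller (k_rho > 0, k_beta < 0, k_alpha - k_rho > 0) plays the role that the Lyapunov analysis plays in the paper's adaptive design.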
12. Crombez N, Mouaddib EM, Caron G, Chaumette F. Visual Servoing With Photometric Gaussian Mixtures as Dense Features. IEEE T ROBOT 2019. DOI: 10.1109/tro.2018.2876765
13. Bakthavatchalam M, Tahri O, Chaumette F. A Direct Dense Visual Servoing Approach Using Photometric Moments. IEEE T ROBOT 2018. DOI: 10.1109/tro.2018.2830379
14. Kudryavtsev AV, Chikhaoui MT, Liadov A, Rougeot P, Spindler F, Rabenorosoa K, Burgner-Kahrs J, Tamadazte B, Andreff N. Eye-in-Hand Visual Servoing of Concentric Tube Robots. IEEE Robot Autom Lett 2018. DOI: 10.1109/lra.2018.2807592
15. Duflot LA, Reisenhofer R, Tamadazte B, Andreff N, Krupa A. Wavelet and shearlet-based image representations for visual servoing. Int J Rob Res 2018. DOI: 10.1177/0278364918769739
Abstract
A visual servoing scheme is a closed-loop control approach that uses visual information as feedback to control the motion of a robotic system. Probably the most popular visual servoing method is image-based visual servoing (IBVS), which uses geometric visual features extracted from the image to design the control law. However, extracting, matching, and tracking geometric visual features over time significantly limits the versatility of visual servoing controllers in various industrial and medical applications, in particular for “low-structured” medical images, e.g. ultrasound and optical coherence tomography modalities. To overcome the limits of conventional IBVS, one can consider novel visual servoing paradigms known as “direct” or “featureless” approaches. This paper deals with the development of a new generation of direct visual servoing methods in which the control signal inputs are the coefficients of a multiscale image representation. In particular, we consider multiscale image representations based on discrete wavelet and shearlet transforms. Up to now, one of the main obstacles to investigating multiscale image representations for visual servoing schemes was obtaining an analytical formulation of the interaction matrix that links the variation of wavelet and shearlet coefficients to the spatial velocity of the camera and the robot. In this paper, we derive four direct visual servoing controllers: two based on subsampled and non-subsampled wavelet coefficients, respectively, and two based on the coefficients of subsampled and non-subsampled discrete shearlet transforms. All proposed controllers were tested in both simulation and experimental scenarios (using a six-degree-of-freedom Cartesian robot in an eye-in-hand configuration). The objective of this paper is to provide an analysis of the respective strengths and weaknesses of wavelet- and shearlet-based visual servoing controllers.
Affiliation(s)
- Lesley-Ann Duflot
  - Université Rennes, Inria, CNRS, IRISA, Rennes, France
  - FEMTO-ST, AS2M, Université Bourgogne Franche-Comté, Besançon, France
- Brahim Tamadazte
  - FEMTO-ST, AS2M, Université Bourgogne Franche-Comté, Besançon, France
- Nicolas Andreff
  - FEMTO-ST, AS2M, Université Bourgogne Franche-Comté, Besançon, France
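The core idea above (multiscale coefficients as the servo signal, with an interaction matrix relating their variation to camera motion) can be sketched in one dimension. The toy below is an illustrative sketch, not the paper's analytical interaction matrix: it servos a single translational degree of freedom using Haar wavelet coefficients of a synthetic signal as features, with the interaction vector estimated by finite differences; the signal shape and gains are assumptions.

```python
import numpy as np

def haar_coeffs(signal):
    """One level of the orthonormal Haar transform: approximation and detail parts."""
    a = (signal[0::2] + signal[1::2]) / np.sqrt(2)
    d = (signal[0::2] - signal[1::2]) / np.sqrt(2)
    return np.concatenate([a, d])

def render(shift, n=64):
    """Toy 1-D 'image': a Gaussian bump whose position depends on the camera shift."""
    x = np.linspace(0.0, 1.0, n)
    return np.exp(-((x - 0.5 - shift) ** 2) / 0.02)

def wavelet_servo(shift, lam=0.2, steps=200, eps=1e-4):
    """Drive the shift to 0 by minimizing the Haar-coefficient error (1 DOF)."""
    s_star = haar_coeffs(render(0.0))            # desired coefficients at shift = 0
    for _ in range(steps):
        s = haar_coeffs(render(shift))
        # Interaction 'matrix' (a vector here) estimated by finite differences.
        Ls = (haar_coeffs(render(shift + eps)) - s) / eps
        # Damped Gauss-Newton step on the coefficient error.
        shift += -lam * np.dot(Ls, s - s_star) / np.dot(Ls, Ls)
    return shift

final_shift = wavelet_servo(0.1)
# final_shift is driven close to 0: the coefficients alone steer the camera.
```

No feature extraction, matching, or tracking appears anywhere in the loop, which is precisely the "featureless" property the abstract emphasizes; the paper's contribution is replacing the finite-difference interaction estimate with an analytical one.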
16. Castelli F, Michieletto S, Ghidoni S, Pagello E. A machine learning-based visual servoing approach for fast robot control in industrial setting. INT J ADV ROBOT SYST 2017. DOI: 10.1177/1729881417738884
Affiliation(s)
- Francesco Castelli
  - Department of Information Engineering (DEI), Intelligent Autonomous Systems Lab (IAS-Lab), University of Padova, Padua, Italy
- Stefano Michieletto
  - Department of Information Engineering (DEI), Intelligent Autonomous Systems Lab (IAS-Lab), University of Padova, Padua, Italy
- Stefano Ghidoni
  - Department of Information Engineering (DEI), Intelligent Autonomous Systems Lab (IAS-Lab), University of Padova, Padua, Italy
- Enrico Pagello
  - Department of Information Engineering (DEI), Intelligent Autonomous Systems Lab (IAS-Lab), University of Padova, Padua, Italy
17. Chen J, Jia B, Zhang K. Trifocal Tensor-Based Adaptive Visual Trajectory Tracking Control of Mobile Robots. IEEE TRANSACTIONS ON CYBERNETICS 2017; 47:3784-3798. PMID: 27390199. DOI: 10.1109/tcyb.2016.2582210
Abstract
In this paper, a trifocal tensor-based approach is proposed for the visual trajectory tracking task of a nonholonomic mobile robot equipped with a roughly installed monocular camera. The desired trajectory is expressed by a set of prerecorded images, and the robot is regulated to track it using visual feedback. The trifocal tensor is exploited to obtain the orientation and scaled position information used in the control system, and it works for general scenes owing to the generality of the trifocal tensor. In previous works, the start, current, and final images were required to share enough visual information to estimate the trifocal tensor; however, this requirement is easily violated for perspective cameras with a limited field of view. In this paper, a key-frame strategy is proposed to loosen this requirement, extending the workspace of the visual servo system. Considering the unknown depth and extrinsic parameters (the installation pose of the camera), an adaptive controller is developed based on Lyapunov methods. The proposed control strategy works in almost all practical circumstances, covering both trajectory tracking and pose regulation tasks. Simulations based on the virtual experimentation platform (V-REP) evaluate the effectiveness of the proposed approach.
19. Máthé K, Buşoniu L. Vision and Control for UAVs: A Survey of General Methods and of Inexpensive Platforms for Infrastructure Inspection. SENSORS 2015; 15:14887-916. PMID: 26121608. PMCID: PMC4541813. DOI: 10.3390/s150714887
Abstract
Unmanned aerial vehicles (UAVs) have gained significant attention in recent years. Low-cost platforms using inexpensive sensor payloads have been shown to provide satisfactory flight and navigation capabilities. In this report, we survey vision and control methods that can be applied to low-cost UAVs, and we list some popular inexpensive platforms and application fields where they are useful. We also highlight the sensor suites used where this information is available. We overview, among others, feature detection and tracking, optical flow and visual servoing, low-level stabilization and high-level planning methods. We then list popular low-cost UAVs, selecting mainly quadrotors. We discuss applications, restricting our focus to the field of infrastructure inspection. Finally, as an example, we formulate two use-cases for railway inspection, a less explored application field, and illustrate the usage of the vision and control techniques reviewed by selecting appropriate ones to tackle these use-cases. To select vision methods, we run a thorough set of experimental evaluations.
Affiliation(s)
- Koppány Máthé
  - Automation Department, Technical University of Cluj-Napoca, Memorandumului Street no. 28, 400114 Cluj-Napoca, Romania
- Lucian Buşoniu
  - Automation Department, Technical University of Cluj-Napoca, Memorandumului Street no. 28, 400114 Cluj-Napoca, Romania
22. Zille P, Corpetti T, Shao L, Chen X. Observation model based on scale interactions for optical flow estimation. IEEE TRANSACTIONS ON IMAGE PROCESSING 2014; 23:3281-3293. PMID: 24968405. DOI: 10.1109/tip.2014.2328893
Abstract
In this paper, an original observation model for multiresolution optical flow estimation is introduced. Multiresolution frameworks, often based on coarse-to-fine warping strategies, are widely used by state-of-the-art optical flow methods. They allow the recovery of large motions through successive estimations of the flow field at several resolution levels. Although such approaches perform very efficiently and usually lead to faster minimizations, they generally treat each resolution level as an independent problem and do not exploit the existing interactions between scales (especially the influence of fine scales on larger ones). In this paper, we tackle this issue by proposing a flexible framework, inspired by fluid mechanics, able to partly overcome these limitations. For each resolution level, our process filters the equations of interest and decomposes the key variables into resolved (i.e., at a given resolution) and unresolved (i.e., at finer resolutions) components. This enables us to derive a new data term that takes into account, at coarse resolutions, the influence of the unresolved parts. From this new term, we propose two different estimation strategies, depending on whether or not we explicitly know the type of relations between the different scales (as for physical processes). To test the efficiency of this new observation model, we embedded it in a simple multiresolution Lucas-Kanade estimator. Comparing the usual optical flow constraint equation with this new term in the same motion estimation procedure, it clearly appears that the proposed term leads to more consistent estimates and prevents the appearance of error propagation during the estimation. In all situations (synthetic or real images, physical processes or not), our new term greatly improves the results compared with the usual conservation constraints.
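The single-scale building block mentioned above, a Lucas-Kanade least-squares solve of the optical flow constraint Ix·u + Iy·v + It = 0, can be sketched compactly; a multiresolution version would repeat this coarse-to-fine with warping between levels. The synthetic image pair and blob parameters below are illustrative assumptions, and a single global translation is estimated rather than a dense field.

```python
import numpy as np

def lucas_kanade_global(I0, I1):
    """Estimate one global translation (u, v) between two frames by solving
    the optical flow constraint Ix*u + Iy*v + It = 0 in the least-squares sense."""
    Iy, Ix = np.gradient(I0)                       # spatial gradients (rows = y, cols = x)
    It = I1 - I0                                   # temporal derivative
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    flow, *_ = np.linalg.lstsq(A, b, rcond=None)
    return flow                                    # (u, v) in pixels

# Synthetic pair: a smooth blob shifted by one pixel horizontally (assumed data).
n = 64
yy, xx = np.mgrid[0:n, 0:n]
blob = lambda cx, cy: np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / 50.0)
I0 = blob(30.0, 32.0)
I1 = blob(31.0, 32.0)
u, v = lucas_kanade_global(I0, I1)
# u is close to 1 pixel and v close to 0.
```

Because the constraint comes from a first-order linearization, this single-scale solve only recovers small displacements reliably, which is exactly why the coarse-to-fine pyramids discussed in the abstract are needed for large motions.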
23. Hoermann S, Borges PVK. Vehicle Localization and Classification Using Off-Board Vision and 3-D Models. IEEE T ROBOT 2014. DOI: 10.1109/tro.2013.2291613
24. Song B, Zhao J, Xi N, Chen H, Lai KWC, Yang R, Chen L. Compressive Feedback-Based Motion Control for Nanomanipulation—Theory and Applications. IEEE T ROBOT 2014. DOI: 10.1109/tro.2013.2291619
25. Nadeau C, Krupa A. Intensity-Based Ultrasound Visual Servoing: Modeling and Validation With 2-D and 3-D Probes. IEEE T ROBOT 2013. DOI: 10.1109/tro.2013.2256690
27.

Abstract
In minimally invasive surgery or needle insertion procedures, ultrasound imaging can easily and safely be used to visualize the target to reach. However, the manual stabilization of the view of this target, which undergoes the physiological motions of the patient, can be a challenge for the surgeon. In this paper, we propose to perform this stabilization with a robotic arm equipped with a 2D ultrasound probe. The six degrees of freedom of the probe are controlled by an image-based approach in which the image intensity is chosen as the visual feedback. The accuracy of the control law is ensured by accounting for the periodicity of the physiological motions in a predictive controller. Tracking tasks performed on a realistic abdominal phantom validate the proposed approach, and its robustness to deformation is assessed on a deformable gelatin phantom.
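Why exploiting periodicity helps, as the predictive controller above does, can be illustrated with a toy predictor: for a breathing-like disturbance, repeating the sample from one period ago is a far better feed-forward estimate than repeating the last sample. The signal shape and period below are assumptions for illustration, not measured physiological data.

```python
import numpy as np

# Breathing-like disturbance: periodic with a known period (assumed here).
period = 50
t = np.arange(500)
sig = 0.5 * np.sin(2 * np.pi * t / period) + 0.1 * np.sin(4 * np.pi * t / period)

# Predict each sample from one period earlier (periodic model) versus from the
# previous sample (naive model), and compare mean absolute prediction errors.
pred_periodic = sig[:-period]      # sig[k - period] as a prediction of sig[k]
pred_naive = sig[period - 1:-1]    # sig[k - 1]      as a prediction of sig[k]
err_periodic = np.abs(sig[period:] - pred_periodic).mean()
err_naive = np.abs(sig[period:] - pred_naive).mean()
# For a truly periodic disturbance the periodic predictor is (numerically) exact,
# while the last-sample predictor always lags by one step.
```

This is the intuition behind repetitive and predictive control of physiological motion: once the period is known, the bulk of the disturbance can be anticipated and cancelled in feed-forward, leaving only the aperiodic residual to the feedback loop.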