1
Guan B, Zhao J, Li Z, Sun F, Fraundorfer F. Relative Pose Estimation With a Single Affine Correspondence. IEEE Transactions on Cybernetics 2022; 52:10111-10122. [PMID: 33909576] [DOI: 10.1109/tcyb.2021.3069806]
Abstract
In this article, we present four cases of minimal solutions for two-view relative pose estimation by exploiting the affine transformation between feature points, and we demonstrate efficient solvers for these cases. It is shown that under the planar motion assumption, or with knowledge of a vertical direction, a single affine correspondence is sufficient to recover the relative camera pose. The four cases are: two-view planar relative motion for calibrated cameras, solved both in closed form and as a least-squares problem; a closed-form solution for unknown focal length; and the case of a known vertical direction. These algorithms can be used efficiently for outlier detection within a RANSAC loop and for initial motion estimation. All the methods are evaluated on both synthetic data and real-world datasets. The experimental results demonstrate that our methods outperform comparable state-of-the-art methods in accuracy while requiring fewer RANSAC iterations. The source code is released at https://github.com/jizhaox/relative_pose_from_affine.
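The planar-motion model behind the first two solvers can be illustrated with a small sketch. This is not the authors' solver, only the geometric constraint it exploits: for a camera moving on a ground plane, the rotation is about the vertical axis and the translation lies in the ground plane, so the essential matrix has just two degrees of freedom and every correct correspondence satisfies the epipolar constraint. All variable names here are illustrative.

```python
import numpy as np

def rot_y(theta):
    """Rotation about the vertical (y) axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def skew(t):
    """Cross-product matrix [t]_x."""
    return np.array([[0.0, -t[2], t[1]], [t[2], 0.0, -t[0]], [-t[1], t[0], 0.0]])

def planar_essential(theta, phi):
    """Essential matrix for planar motion: yaw theta, translation
    direction at angle phi inside the ground (x-z) plane."""
    t = np.array([np.sin(phi), 0.0, np.cos(phi)])
    return skew(t) @ rot_y(theta)

# Synthetic check: project one 3-D point into both cameras and verify
# the epipolar constraint x2^T E x1 = 0 (camera 1 at the origin,
# camera 2 given by X2 = R X + t).
rng = np.random.default_rng(0)
theta, phi = 0.3, -0.7
R = rot_y(theta)
t = np.array([np.sin(phi), 0.0, np.cos(phi)])
E = planar_essential(theta, phi)

X = rng.uniform(-1.0, 1.0, 3) + np.array([0.0, 0.0, 5.0])  # in front of camera 1
x1 = X / X[2]                 # normalized image coordinates, camera 1
X2 = R @ X + t
x2 = X2 / X2[2]               # normalized image coordinates, camera 2
residual = x2 @ E @ x1        # vanishes for a correct correspondence
```

Since a single affine correspondence gives several independent constraints of this kind, the two unknowns (theta, phi) can be recovered from one correspondence, which is what makes 1-point RANSAC loops possible.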
2
Li P, Li C, Bore JC, Si Y, Li F, Cao Z, Zhang Y, Wang G, Zhang Z, Yao D, Xu P. L1-norm based time-varying brain neural network and its application to dynamic analysis for motor imagery. J Neural Eng 2022; 19. [PMID: 35234668] [DOI: 10.1088/1741-2552/ac59a4] [Received: 09/15/2021] [Accepted: 03/01/2022]
Abstract
OBJECTIVE: EEG-based motor imagery (MI) brain-computer interfaces offer a promising way to improve the efficiency of motor rehabilitation and motor skill learning. In recent years, the power of dynamic network analysis for MI classification has been demonstrated, and its usability depends mainly on accurate estimation of brain connectivity. However, traditional dynamic network estimation strategies such as the adaptive directed transfer function (ADTF) are formulated in the L2-norm, so they tend to estimate spurious connections caused by outliers, which yields biased features and limits online application. Accurately inferring dynamic causal relationships under outlier influence is therefore an urgent problem. APPROACH: In this work, we propose a novel ADTF that solves the dynamic system in the L1-norm space (L1-ADTF), so as to restrict outlier influence. To enhance convergence, we design an iteration strategy based on the alternating direction method of multipliers (ADMM) for solving the dynamic state-space model restricted to the L1-norm space. Furthermore, we compare the L1-ADTF to the traditional ADTF and its dual extension in both simulation and real EEG experiments. MAIN RESULTS: A quantitative comparison between the L1-ADTF and other ADTFs in simulation studies demonstrates that the L1-ADTF captures fewer bias errors and more plausible dynamic state-transition patterns. Application to real MI EEG datasets heavily contaminated by ocular artifacts also demonstrates the effectiveness of the proposed L1-ADTF in extracting time-varying brain network patterns, even when more complex noise is involved. SIGNIFICANCE: The L1-ADTF may not only track time-varying brain network state drifts robustly but may also be useful for a wide range of dynamic systems, such as trajectory-tracking problems and dynamic neural networks.
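The core robustness idea — solving the fitting step in the L1-norm via ADMM rather than in the L2-norm — can be sketched on a static regression problem. This is a simplification of the paper's dynamic state-space setting, and all names (`lad_admm`, the toy line-fit data) are illustrative:

```python
import numpy as np

def soft_threshold(v, k):
    """Proximal operator of the L1-norm (element-wise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def lad_admm(A, b, rho=1.0, iters=300):
    """ADMM for min_x ||A x - b||_1 (least absolute deviations)."""
    m, n = A.shape
    x, z, u = np.zeros(n), np.zeros(m), np.zeros(m)
    AtA = A.T @ A
    for _ in range(iters):
        x = np.linalg.solve(AtA, A.T @ (b + z - u))   # x-update: least squares
        z = soft_threshold(A @ x - b + u, 1.0 / rho)  # z-update: shrinkage
        u += A @ x - b - z                            # scaled dual update
    return x

# Line fit with gross outliers (as from ocular artifacts): the L1 fit
# stays close to the true slope/intercept where an L2 fit would be biased.
t = np.linspace(0.0, 1.0, 50)
b = 2.0 * t + 1.0
b[::10] += 10.0                       # 10% gross outliers
A = np.column_stack([t, np.ones_like(t)])
x_l1 = lad_admm(A, b)                 # approximately [2, 1]
```

The same shrinkage/least-squares alternation is what an ADMM solver for an L1-restricted state-space model iterates, with the state dynamics entering the quadratic subproblem.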
Affiliation(s)
- Peiyang Li
- School of Bioinformatics, Chongqing University of Posts and Telecommunications, No. 2 Chongwen Road, Nan'an District, Chongqing 400065, China
- Cunbo Li
- University of Electronic Science and Technology of China, No. 2006 Xiyuan Avenue, West Hi-Tech Zone, Chengdu 611731, China
- Joyce Chelangat Bore
- University of Electronic Science and Technology of China, No. 2006 Xiyuan Avenue, West Hi-Tech Zone, Chengdu 611731, China
- Yajing Si
- Department of Psychology, Xinxiang Medical University, No. 601 Jinsui Avenue, Hongqi District, Xinxiang, Henan 453003, China
- Fali Li
- Key Laboratory for NeuroInformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 610054, China
- Zehong Cao
- University of South Australia, Adelaide, SA 5095, Australia
- Yangsong Zhang
- Southwest University of Science and Technology, 59 Qinglong Road, Mianyang 621010, Sichuan, China
- Gang Wang
- University of Electronic Science and Technology of China, No. 2006 Xiyuan Avenue, West Hi-Tech Zone, Chengdu 610054, China
- Zhijun Zhang
- South China University of Technology, 777 Xingye Avenue East, Panyu District, Guangzhou 510640, China
- Dezhong Yao
- University of Electronic Science and Technology of China, No. 2006 Xiyuan Avenue, West Hi-Tech Zone, Chengdu 611731, China
- Peng Xu
- University of Electronic Science and Technology of China, No. 2006 Xiyuan Avenue, West Hi-Tech Zone, Chengdu 611731, China
3
Huang C, Mei P, Wang J. Event-triggering robust fusion estimation for a class of multi-rate systems subject to censored observations. ISA Transactions 2021; 110:28-38. [PMID: 33268109] [DOI: 10.1016/j.isatra.2020.10.038] [Received: 05/28/2020] [Revised: 10/07/2020] [Accepted: 10/10/2020]
Abstract
This work is concerned with the event-triggering robust fusion estimation problem for multi-rate systems (MRSs) subject to stochastic nonlinearities (SNs) and censored observations (COs). The considered multi-rate system comprises several sensor nodes, each with a different sampling rate. To reflect the dead-zone-like censoring phenomenon, a Tobit type-1 regression model with a prescribed left-censoring threshold is introduced, and stochastic nonlinearities characterized by their statistical means are considered in the MRSs. To save limited communication resources, an event-triggering mechanism (ETM) is introduced to determine whether a given sensor node should transmit its information to the corresponding local filter. For the addressed MRSs, we first design a local Tobit Kalman filtering (TKF) algorithm for each sensor node such that an upper bound on each local filtering error covariance is minimized; this minimized upper bound is obtained by properly designing the filter gain at each iteration. The fusion centre then combines the local estimates via the covariance intersection (CI) scheme. Moreover, we discuss the consistency of the proposed multi-rate fusion estimation (MRFE) approach. Finally, simulations demonstrate the validity of the designed MRFE algorithm.
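The covariance intersection (CI) fusion step mentioned in the abstract can be sketched generically. This is a textbook CI implementation, not the paper's specific MRFE design; the two local estimates below are made-up examples:

```python
import numpy as np

def ci_fuse(x1, P1, x2, P2, n_grid=101):
    """Covariance intersection: consistent fusion of two estimates with
    unknown cross-correlation. The weight omega is chosen by a grid
    search to minimize the trace of the fused covariance."""
    P1i, P2i = np.linalg.inv(P1), np.linalg.inv(P2)
    best_trace, best = np.inf, None
    for w in np.linspace(0.0, 1.0, n_grid):
        Pi = w * P1i + (1.0 - w) * P2i          # fused information matrix
        P = np.linalg.inv(Pi)
        if np.trace(P) < best_trace:
            best_trace = np.trace(P)
            best = (P @ (w * P1i @ x1 + (1.0 - w) * P2i @ x2), P)
    return best

# Two local filters confident in different directions of the state space.
x1, P1 = np.array([0.1, 0.0]), np.diag([1.0, 4.0])
x2, P2 = np.array([0.0, 0.2]), np.diag([4.0, 1.0])
xf, Pf = ci_fuse(x1, P1, x2, P2)
```

Because CI never assumes independence of the local errors, the fused covariance is guaranteed consistent even when the local TKF estimates share process noise, at the cost of some conservatism.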
Affiliation(s)
- Cong Huang
- School of Information Science and Technology, Donghua University, Shanghai 201620, China; Department of Mechanical Engineering, Politecnico di Milano, Milan 20156, Italy
- Peng Mei
- School of Transportation Science and Engineering, Beihang University, Beijing 100191, China; Department of Mechanical Engineering, Politecnico di Milano, Milan 20156, Italy
- Jun Wang
- School of Information Science and Technology, Nanjing Forestry University, Nanjing 210037, China
4
A Novel Camera Fusion Method Based on Switching Scheme and Occlusion-Aware Object Detection for Real-Time Robotic Grasping. J Intell Robot Syst 2020. [DOI: 10.1007/s10846-020-01236-7]
5
Hodges J, Attia T, Arukgoda J, Kang C, Cowden M, Doan L, Ranasinghe R, Abdelatty K, Dissanayake G, Furukawa T. Multistage Bayesian autonomy for high-precision operation in a large field. J Field Robot 2018. [DOI: 10.1002/rob.21829]
Affiliation(s)
- Jonathan Hodges
- Department of Mechanical Engineering, Virginia Tech, Blacksburg, Virginia
- Tamer Attia
- Department of Mechanical Engineering, Virginia Tech, Blacksburg, Virginia
- Janindu Arukgoda
- Centre for Autonomous Systems, University of Technology Sydney, NSW, Australia
- Changkoo Kang
- Department of Aerospace and Ocean Engineering, Virginia Tech, Blacksburg, Virginia
- Mickey Cowden
- Department of Electrical and Computer Engineering, Virginia Tech, Blacksburg, Virginia
- Luan Doan
- Department of Mechanical Engineering, Virginia Tech, Blacksburg, Virginia
- Ravindra Ranasinghe
- Centre for Autonomous Systems, University of Technology Sydney, NSW, Australia
- Karim Abdelatty
- Department of Mechanical Engineering, Virginia Tech, Blacksburg, Virginia
- Gamini Dissanayake
- Centre for Autonomous Systems, University of Technology Sydney, NSW, Australia
- Tomonari Furukawa
- Department of Mechanical Engineering, Virginia Tech, Blacksburg, Virginia
6
Huo J, Zhang G, Yang M. Algorithm for pose estimation based on objective function with uncertainty-weighted measuring error of feature point cling to the curved surface. Applied Optics 2018; 57:3306-3315. [PMID: 29714321] [DOI: 10.1364/ao.57.003306] [Received: 01/18/2018] [Accepted: 03/22/2018]
Abstract
This paper addresses the anisotropic and non-identical gray-level distribution of feature points lying on a curved surface, and proposes a high-precision, uncertainty-resistant pose estimation algorithm. The weighted contribution of uncertainty to the objective function of feature-point measurement error is analyzed. A novel error objective function based on the spatial collinearity error is then constructed by transforming the uncertainty into a covariance weighting matrix, which suits practical applications. Further, an optimized generalized orthogonal iterative (GOI) algorithm is used for the iterative solution, avoiding poor convergence and significantly suppressing the effect of uncertainty. The optimized GOI algorithm thus extends field-of-view applications and improves the accuracy and robustness of the measurement results through redundant information. Finally, simulations and practical experiments show that the maximum re-projection error of the target's image coordinates is less than 0.110 pixels. Within a 3000 mm × 3000 mm × 4000 mm space, the maximum static and dynamic estimation errors for rocket-nozzle motion are better than 0.065° and 0.128°, respectively. The results verify the high accuracy and uncertainty-attenuation performance of the proposed approach, which should therefore have potential for engineering applications.
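The covariance-weighting idea — down-weighting feature points with large or anisotropic measurement uncertainty — can be sketched for a linear residual model. This is illustrative only (the paper applies the weighting inside the nonlinear GOI iteration); `covariance_weighted_lsq` and the synthetic data are assumptions of this sketch:

```python
import numpy as np

def covariance_weighted_lsq(As, bs, Sigmas):
    """Solve min_x sum_i (A_i x - b_i)^T Sigma_i^{-1} (A_i x - b_i):
    each residual block is weighted by its inverse covariance, so
    uncertain measurements contribute less to the estimate."""
    n = As[0].shape[1]
    H = np.zeros((n, n))
    g = np.zeros(n)
    for A, b, S in zip(As, bs, Sigmas):
        W = np.linalg.inv(S)        # information (weight) matrix
        H += A.T @ W @ A
        g += A.T @ W @ b
    return np.linalg.solve(H, g)

# Noise-free demo: anisotropic weights do not bias a consistent system,
# so the true parameter vector is recovered exactly.
rng = np.random.default_rng(1)
x_true = np.array([1.0, -2.0, 0.5])
As = [rng.standard_normal((2, 3)) for _ in range(5)]
bs = [A @ x_true for A in As]
Sigmas = [np.diag(rng.uniform(0.1, 5.0, 2)) for _ in range(5)]
x_hat = covariance_weighted_lsq(As, bs, Sigmas)
```

With noisy data, this weighting is what makes the estimator resist the anisotropic per-point uncertainty the abstract describes: a point with a wide covariance ellipse pulls the solution far less than a well-localized one.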
7
Li L, Liu YH, Jiang T, Wang K, Fang M. Adaptive Trajectory Tracking of Nonholonomic Mobile Robots Using Vision-Based Position and Velocity Estimation. IEEE Transactions on Cybernetics 2018; 48:571-582. [PMID: 28092594] [DOI: 10.1109/tcyb.2016.2646719]
Abstract
Despite tremendous efforts over the years, trajectory tracking control (TC) of a nonholonomic mobile robot (NMR) without a global positioning system remains an open problem, chiefly because of the difficulty of localizing the robot using only its onboard sensors. In this paper, a newly designed adaptive trajectory TC method is proposed for an NMR without position, orientation, or velocity measurements. The controller is built on a novel algorithm that estimates the robot's position and velocity online from the visual feedback of an omnidirectional camera. It is theoretically proved that the proposed algorithm makes the TC errors asymptotically converge to zero. Real-world experiments are conducted on a wheeled NMR to validate the feasibility of the control system.
8
Ma X, Feng J, Li Y, Tan J. Active 6-D position-pose estimation of a spatial circle using monocular eye-in-hand system. Int J Adv Robot Syst 2018. [DOI: 10.1177/1729881417753692]
Abstract
Nuts and bolts are common components in assembly lines. Their position and pose estimation is a vital step for automatic assembling. Although many approaches using a monocular camera have been proposed, few works consider a monocular camera’s active movements for improving estimation accuracy. This article presents an active movement strategy for a monocular eye-in-hand camera for high position and pose estimation accuracy of a spatial circle. Extensive experiments are conducted to validate the effectiveness of the proposed method for position and pose estimation of circles printed on paper, real circular flat washers, and nuts.
Affiliation(s)
- Xin Ma
- School of Control Science and Engineering, Shandong University, Shandong, China
- Junbing Feng
- School of Control Science and Engineering, Shandong University, Shandong, China
- Yibin Li
- School of Control Science and Engineering, Shandong University, Shandong, China
- Jindong Tan
- Department of Mechanical, Aerospace, and Biomedical Engineering, University of Tennessee, Knoxville, USA
9
Chen J, Jia B, Zhang K. Trifocal Tensor-Based Adaptive Visual Trajectory Tracking Control of Mobile Robots. IEEE Transactions on Cybernetics 2017; 47:3784-3798. [PMID: 27390199] [DOI: 10.1109/tcyb.2016.2582210]
Abstract
In this paper, a trifocal tensor-based approach is proposed for the visual trajectory tracking task of a nonholonomic mobile robot equipped with a roughly installed monocular camera. The desired trajectory is expressed by a set of prerecorded images, and the robot is regulated to track it using visual feedback. The trifocal tensor is exploited to obtain the orientation and scaled position information used in the control system, and it works for general scenes owing to the generality of the trifocal tensor. In previous works, the start, current, and final images were required to share enough visual information to estimate the trifocal tensor; this requirement is easily violated for perspective cameras with a limited field of view. In this paper, a key-frame strategy is proposed to relax this requirement, extending the workspace of the visual servo system. Considering the unknown depth and extrinsic parameters (the installation position of the camera), an adaptive controller is developed based on Lyapunov methods. The proposed control strategy works in almost all practical circumstances, covering both trajectory tracking and pose regulation tasks. Simulations on the virtual robot experimentation platform (V-REP) evaluate the effectiveness of the proposed approach.
10
Jiang P, Cheng Y, Wang X, Feng Z. Unfalsified Visual Servoing for Simultaneous Object Recognition and Pose Tracking. IEEE Transactions on Cybernetics 2016; 46:3032-3046. [PMID: 27723610] [DOI: 10.1109/tcyb.2015.2495157]
Abstract
In a complex environment, simultaneous object recognition and tracking has been one of the challenging topics in computer vision and robotics. Current approaches are usually fragile due to spurious feature matching and local convergence for pose determination. Once a failure happens, these approaches lack a mechanism to recover automatically. In this paper, data-driven unfalsified control is proposed for solving this problem in visual servoing. It recognizes a target through matching image features with a 3-D model and then tracks them through dynamic visual servoing. The features can be falsified or unfalsified by a supervisory mechanism according to their tracking performance. Supervisory visual servoing is repeated until a consensus between the model and the selected features is reached, so that model recognition and object tracking are accomplished. Experiments show the effectiveness and robustness of the proposed algorithm to deal with matching and tracking failures caused by various disturbances, such as fast motion, occlusions, and illumination variation.
11
A Three-Dimensional Shape-Based Force and Stiffness-Sensing Platform for Tendon-Driven Catheters. Sensors 2016; 16:990. [PMID: 27367685] [PMCID: PMC4970041] [DOI: 10.3390/s16070990] [Received: 02/22/2016] [Revised: 04/29/2016] [Accepted: 05/24/2016]
Abstract
This paper presents an efficient shape-based three-axial force and stiffness estimator for active catheters commonly used in cardiac ablation. The force-sensing capability provides important feedback for catheterization procedures, including real-time control and catheter steering in autonomous navigation systems. The proposed platform is based on an accurate and computationally efficient Cosserat rod model for tendon-driven catheters. The proposed nonlinear Kalman filter formulation for contact force estimation, together with the developed catheter model, provides a real-time force observer robust to nonlinearities and noise-covariance uncertainties. Furthermore, the platform enables stiffness estimation in addition to tip contact force sensing under different operational circumstances. The approach incorporates pose measurements, which can be obtained with currently available pose-sensing systems or imaging techniques, and is compatible with the range of forces applied in clinical applications. Simulation and experimental results verify the viability of the introduced force- and stiffness-sensing technique.
12
Liang M, Min H, Luo R, Zhu J. Simultaneous Recognition and Modeling for Learning 3-D Object Models From Everyday Scenes. IEEE Transactions on Cybernetics 2015; 45:2237-2248. [PMID: 25423666] [DOI: 10.1109/tcyb.2014.2368127]
Abstract
Object recognition and modeling have classically been studied separately, but in practice they are closely correlated. In this paper, by exploring their interrelations, we propose a framework that addresses both problems at the same time, which we call simultaneous recognition and modeling. Unlike the traditional recognition pipeline, which consists of off-line object model learning followed by on-line recognition, our method is entirely online. Starting with an empty object database, we incrementally build up object models while using these models to identify newly observed object views. In the proposed framework, objects are modeled as view graphs and a probabilistic observation model is presented. Both the appearance and the spatial structure of the object are examined, and a formulation based on maximum likelihood estimation is developed. Joint object recognition and modeling are achieved by solving the resulting optimization problem. To evaluate the framework, we developed a method for simultaneously learning multiple 3-D object models directly from cluttered indoor environments and tested it on several everyday scenes. Experimental results demonstrate that the framework handles the joint recognition and modeling problem effectively.
13
Li L, Liu YH, Wang K, Fang M. Estimating Position of Mobile Robots From Omnidirectional Vision Using an Adaptive Algorithm. IEEE Transactions on Cybernetics 2015; 45:1633-1646. [PMID: 25265622] [DOI: 10.1109/tcyb.2014.2357797]
Abstract
This paper presents a novel and simple adaptive algorithm for estimating the position of a mobile robot with high accuracy in an unknown and unstructured environment by fusing images of an omnidirectional vision system with measurements of odometry and inertial sensors. Based on a new derivation where the omnidirectional projection can be linearly parameterized by the positions of the robot and natural feature points, we propose a novel adaptive algorithm, which is similar to the Slotine-Li algorithm in model-based adaptive control, to estimate the robot's position by using the tracked feature points in image sequence, the robot's velocity, and orientation angles measured by odometry and inertial sensors. It is proved that the adaptive algorithm leads to global exponential convergence of the position estimation errors to zero. Simulations and real-world experiments are performed to demonstrate the performance of the proposed algorithm.
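The estimator family this paper builds on (Slotine-Li-style adaptation for a measurement model that is linear in the unknown parameters, y = φ(t)ᵀθ) can be sketched with a scalar-output gradient update. The regressor φ, parameters θ, and gain here are generic placeholders, not the paper's omnidirectional projection model:

```python
import numpy as np

theta_true = np.array([2.0, -1.0])   # unknown parameters (e.g., positions)
theta_hat = np.zeros(2)              # online estimate
gamma, dt = 5.0, 0.001               # adaptation gain, integration step

for k in range(40000):               # 40 s of simulated time
    t = k * dt
    phi = np.array([np.sin(t), np.cos(t)])  # persistently exciting regressor
    y = phi @ theta_true                    # measurement
    e = phi @ theta_hat - y                 # prediction error
    theta_hat -= gamma * phi * e * dt       # gradient adaptation law

err = np.linalg.norm(theta_hat - theta_true)
```

With a persistently exciting regressor, the error dynamics ė = −γ φφᵀ e contract exponentially, which mirrors the global exponential convergence the abstract proves for the position estimation errors.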
14
Ko NY, Kuc TY. Fusing range measurements from ultrasonic beacons and a laser range finder for localization of a mobile robot. Sensors 2015; 15:11050-11075. [PMID: 25970259] [PMCID: PMC4481944] [DOI: 10.3390/s150511050] [Received: 01/21/2015] [Revised: 04/26/2015] [Accepted: 05/05/2015]
Abstract
This paper proposes a method for mobile robot localization in a partially unknown indoor environment. The method fuses two types of range measurements: the range from the robot to beacons, measured by ultrasonic sensors, and the range from the robot to the surrounding walls, measured by a laser range finder (LRF). The unscented Kalman filter (UKF) is utilized for the fusion; because deriving the Jacobian matrix is not feasible for the LRF range measurement model, the UKF has an advantage here over the extended Kalman filter (EKF). The locations of the beacons and the range data from the beacons are available, whereas the correspondence between range data and beacons is not given, so the proposed method also handles the data-association problem of determining which beacon corresponds to a given range measurement. The proposed approach is evaluated with different sets of design-parameter values and compared with methods that use only an LRF or only ultrasonic beacons. The comparative analysis shows that even though the ultrasonic beacons are sparsely placed, have large errors, and update slowly, they improve localization performance when fused with the LRF measurements. In addition, proper adjustment of the UKF design parameters is crucial for fully exploiting the UKF for sensor fusion. This study contributes a UKF-based design methodology for fusing two complementary exteroceptive measurements for localization.
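The reason the UKF suits the LRF model — no Jacobian required — comes from the unscented transform, which propagates a small set of sigma points through the nonlinearity instead of linearizing it. A minimal sketch with standard scaled sigma-point weights, not tied to the paper's beacon/LRF models:

```python
import numpy as np

def unscented_transform(mean, cov, f, alpha=1.0, beta=2.0, kappa=0.0):
    """Propagate (mean, cov) through a nonlinear map f using 2n+1
    sigma points; no derivative of f is needed."""
    n = len(mean)
    lam = alpha**2 * (n + kappa) - n
    L = np.linalg.cholesky((n + lam) * cov)          # matrix square root
    pts = [mean] + [mean + L[:, i] for i in range(n)] \
                 + [mean - L[:, i] for i in range(n)]
    wm = np.full(2 * n + 1, 0.5 / (n + lam))
    wm[0] = lam / (n + lam)
    wc = wm.copy()
    wc[0] += 1.0 - alpha**2 + beta                   # covariance weight tweak
    ys = np.array([f(p) for p in pts])
    y_mean = wm @ ys
    y_cov = sum(w * np.outer(y - y_mean, y - y_mean) for w, y in zip(wc, ys))
    return y_mean, y_cov

# Sanity check: the transform is exact for a linear map (A x),
# recovering A m and A P A^T.
A = np.array([[1.0, 2.0], [0.0, 1.0]])
m = np.array([1.0, -1.0])
P = np.array([[2.0, 0.3], [0.3, 1.0]])
ym, yc = unscented_transform(m, P, lambda x: A @ x)
```

In a full UKF, this same transform is applied once to the motion model and once to each measurement model (ultrasonic range and LRF range), which is what lets the filter fuse both without any analytic Jacobians.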
Affiliation(s)
- Nak Yong Ko
- Department of Electronics Engineering, Chosun University, 375 Seosuk-dong, Dong-gu, Gwangju 501-759, Korea
- Tae-Yong Kuc
- College of Information and Communication Engineering, Sungkyunkwan University, 300 Cheoncheon-dong, Jangan-gu, Suwon, Gyeonggi-do 440-746, Korea