1. Pearl O, Shin S, Godura A, Bergbreiter S, Halilaj E. Fusion of video and inertial sensing data via dynamic optimization of a biomechanical model. J Biomech 2023; 155:111617. PMID: 37220709; DOI: 10.1016/j.jbiomech.2023.111617
Abstract
Inertial sensing and computer vision are promising alternatives to traditional optical motion tracking, but until now these data sources have been explored either in isolation or fused via unconstrained optimization, which may not take full advantage of their complementary strengths. By adding physiological plausibility and dynamical robustness to a proposed solution, biomechanical modeling may enable better fusion than unconstrained optimization. To test this hypothesis, we fused video and inertial sensing data via dynamic optimization with a nine degree-of-freedom model and investigated when this approach outperforms video-only, inertial-sensing-only, and unconstrained-fusion methods. We used both experimental and synthetic data that mimicked different ranges of video and inertial measurement unit (IMU) data noise. Fusion with a dynamically constrained model significantly improved estimation of lower-extremity kinematics over the video-only approach and estimation of joint centers over the IMU-only approach. It consistently outperformed single-modality approaches across different noise profiles. When the quality of video data was high and that of inertial data was low, dynamically constrained fusion improved estimation of joint kinematics and joint centers over unconstrained fusion, while unconstrained fusion was advantageous in the opposite scenario. These findings indicate that complementary modalities and techniques can improve motion tracking by clinically meaningful margins and that data quality and computational complexity must be considered when selecting the most appropriate method for a particular application.
Affiliation(s)
- Owen Pearl
- Department of Mechanical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Soyong Shin
- Department of Mechanical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Ashwin Godura
- Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Sarah Bergbreiter
- Department of Mechanical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA; Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Eni Halilaj
- Department of Mechanical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA; Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, USA
2. Li F, Chen J, Ye G, Dong S, Gao Z, Zhou Y. Soft Robotic Glove with Sensing and Force Feedback for Rehabilitation in Virtual Reality. Biomimetics (Basel) 2023; 8:83. PMID: 36810414; PMCID: PMC9944851; DOI: 10.3390/biomimetics8010083
Abstract
Many diseases, such as stroke, arthritis, and spinal cord injury, can cause severe hand impairment. Treatment options for these patients are limited by expensive hand rehabilitation devices and dull treatment procedures. In this study, we present an inexpensive soft robotic glove for hand rehabilitation in virtual reality (VR). Fifteen inertial measurement units are placed on the glove for finger motion tracking, and a motor-tendon actuation system is mounted onto the arm and exerts forces on fingertips via finger-anchoring points, providing force feedback to fingers so that the users can feel the force of a virtual object. A static threshold correction and complementary filter are used to calculate the finger attitude angles, hence computing the postures of five fingers simultaneously. Both static and dynamic tests are performed to validate the accuracy of the finger-motion-tracking algorithm. A field-oriented-control-based angular closed-loop torque control algorithm is adopted to control the force applied to the fingers. It is found that each motor can provide a maximum force of 3.14 N within the tested current limit. Finally, we present an application of the haptic glove in a Unity-based VR interface to provide the operator with haptic feedback while squeezing a soft virtual ball.
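The abstract above mentions computing finger attitude angles with a static threshold correction and a complementary filter, but gives no formulation. The following is a generic single-axis complementary-filter sketch, not the paper's implementation; the function names, the 0.98 blend factor, and the sample readings are illustrative assumptions:

```python
import math

def accel_tilt(ax, az):
    """Tilt angle (rad) of a finger segment from the gravity direction,
    as measured by the accelerometer (drift-free but noisy during motion)."""
    return math.atan2(ax, az)

def complementary_filter(angle_prev, gyro_rate, accel_angle, dt, alpha=0.98):
    """Blend the integrated gyroscope rate (smooth but drifting) with the
    accelerometer tilt angle (noisy but drift-free)."""
    return alpha * (angle_prev + gyro_rate * dt) + (1.0 - alpha) * accel_angle

# One 100 Hz update step with made-up readings:
angle = complementary_filter(angle_prev=0.0, gyro_rate=0.5,
                             accel_angle=accel_tilt(0.0998, 0.995),
                             dt=0.01)
```

Running one filter per joint axis (fifteen IMUs in the glove) would yield the simultaneous five-finger posture estimate the abstract describes.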
3. Lee J, Hanley D, Bretl T. Extrinsic Calibration of Multiple Inertial Sensors From Arbitrary Trajectories. IEEE Robot Autom Lett 2022. DOI: 10.1109/lra.2022.3143290
4. Adaptive transfer alignment method based on the observability analysis for airborne pod strapdown inertial navigation system. Sci Rep 2022; 12:946. PMID: 35042924; PMCID: PMC8766501; DOI: 10.1038/s41598-021-04732-4
Abstract
For an airborne pod strapdown inertial navigation system, transfer alignment against the host aircraft's inertial navigation system must be performed as quickly and accurately as possible during flight. This paper proposes an adaptive transfer alignment method based on observability analysis for the strapdown inertial navigation system, designed to meet the practical need of maintaining the navigation accuracy of the airborne pod. Observability analysis of the system state variables yields the observability of each state variable. Weighted by these observabilities, a transfer alignment filter with an adaptive adjustment factor is constructed to reduce the influence of weakly observable state variables on the whole filter, which improves the estimation accuracy of transfer alignment. Simulations and experimental tests with the airborne pod and the master strapdown inertial navigation system show that the proposed method overcomes the shortcomings of the weakly observable state variables, improving alignment and navigation performance in practical applications and thus the adaptability of the airborne pod.
5. Sung C, Jeon S, Lim H, Myung H. What if there was no revisit? Large-scale graph-based SLAM with traffic sign detection in an HD map using LiDAR inertial odometry. Intell Serv Robot 2021. DOI: 10.1007/s11370-021-00395-2
6. Hansen LH, Fleck P, Stranner M, Schmalstieg D, Arth C. Augmented Reality for Subsurface Utility Engineering, Revisited. IEEE Trans Vis Comput Graph 2021; 27:4119-4128. PMID: 34449372; DOI: 10.1109/tvcg.2021.3106479
Abstract
Civil engineering is a primary domain for new augmented reality technologies. In this work, the area of subsurface utility engineering is revisited, and new methods tackling well-known, yet unsolved problems are presented. We describe our solution to the outdoor localization problem, which is deemed one of the most critical issues in outdoor augmented reality, proposing a novel, lightweight hardware platform to generate highly accurate position and orientation estimates in a global context. Furthermore, we present new approaches to drastically improve realism of outdoor data visualizations. First, a novel method to replace physical spray markings by indistinguishable virtual counterparts is described. Second, the visualization of 3D reconstructions of real excavations is presented, fusing seamlessly with the view onto the real environment. We demonstrate the power of these new methods on a set of different outdoor scenarios.
7. Eckenhoff K, Geneva P, Huang G. MIMC-VINS: A Versatile and Resilient Multi-IMU Multi-Camera Visual-Inertial Navigation System. IEEE Trans Robot 2021. DOI: 10.1109/tro.2021.3049445
8. An Embedded Quaternion-Based Extended Kalman Filter Pose Estimation for Six Degrees of Freedom Systems. J Intell Robot Syst 2021. DOI: 10.1007/s10846-021-01377-3
9. Huang W, Wan W, Liu H. Optimization-Based Online Initialization and Calibration of Monocular Visual-Inertial Odometry Considering Spatial-Temporal Constraints. Sensors (Basel) 2021; 21:2673. PMID: 33920218; PMCID: PMC8070556; DOI: 10.3390/s21082673
Abstract
Online system state initialization and simultaneous spatial-temporal calibration are critical for monocular Visual-Inertial Odometry (VIO), since these parameters are often poorly known or entirely unknown. Although impressive performance has been achieved, most existing methods are designed for filter-based VIO; for optimization-based VIO, few online spatial-temporal calibration methods exist in the literature. In this paper, we propose an optimization-based online initialization and spatial-temporal calibration method for VIO that requires no prior knowledge of the spatial or temporal configuration. It estimates the initial metric scale, velocity, gravity, and Inertial Measurement Unit (IMU) biases, and calibrates the coordinate transformation and time offset between the camera and the IMU. The method proceeds as follows. First, a time-offset model and two short-term motion interpolation algorithms align and interpolate the camera and IMU measurements. The aligned, interpolated results are then fed to an incremental estimator that estimates the initial states and the spatial-temporal parameters. Finally, a bundle adjustment further improves the accuracy of the estimates. Experiments on both synthetic and public datasets show that both the initial states and the spatial-temporal parameters are well estimated, and that the method outperforms the contemporary methods used for comparison.
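The time-offset model and short-term interpolation step described in the abstract can be illustrated with a minimal sketch. It assumes a constant camera-IMU time offset td and plain linear interpolation; the names and numbers are illustrative, not the paper's implementation:

```python
def align_camera_time(t_cam, td):
    """Constant time-offset model: a camera timestamp t_cam corresponds
    to t_cam + td on the IMU clock."""
    return t_cam + td

def interpolate_imu(imu_samples, t_query):
    """Linearly interpolate an IMU reading at an arbitrary query time.
    imu_samples: list of (timestamp, value) pairs sorted by timestamp."""
    for (t0, v0), (t1, v1) in zip(imu_samples, imu_samples[1:]):
        if t0 <= t_query <= t1:
            w = (t_query - t0) / (t1 - t0)
            return (1 - w) * v0 + w * v1
    raise ValueError("query time outside the IMU sample range")

# Gyro rate sampled at 100 Hz; camera frame at t = 0.004 s, offset td = 0.001 s.
imu = [(0.00, 0.0), (0.01, 2.0), (0.02, 4.0)]
rate_at_frame = interpolate_imu(imu, align_camera_time(0.004, td=0.001))
```

In the paper, both td and the spatial extrinsics are themselves estimated online rather than fixed as here.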
Affiliation(s)
- Weibo Huang
- Key Laboratory of Machine Perception, Peking University Shenzhen Graduate School, Shenzhen 518055, China
- Weiwei Wan
- School of Engineering Science, Osaka University, Osaka 5608531, Japan
- Correspondence: (W.W.); (H.L.)
- Hong Liu
- Key Laboratory of Machine Perception, Peking University Shenzhen Graduate School, Shenzhen 518055, China
- Correspondence: (W.W.); (H.L.)
10. A Novel IMU Extrinsic Calibration Method for Mass Production Land Vehicles. Sensors (Basel) 2020; 21:7. PMID: 33374942; PMCID: PMC7792609; DOI: 10.3390/s21010007
Abstract
Multi-modal sensor fusion has become ubiquitous in the field of vehicle motion estimation. Achieving a consistent sensor fusion in such a set-up demands the precise knowledge of the misalignments between the coordinate systems in which the different information sources are expressed. In ego-motion estimation, even sub-degree misalignment errors lead to serious performance degradation. The present work addresses the extrinsic calibration of a land vehicle equipped with standard production car sensors and an automotive-grade inertial measurement unit (IMU). Specifically, the article presents a method for the estimation of the misalignment between the IMU and vehicle coordinate systems, while considering the IMU biases. The estimation problem is treated as a joint state and parameter estimation problem, and solved using an adaptive estimator that relies on the IMU measurements, a dynamic single-track model as well as the suspension and odometry systems. Additionally, we show that the validity of the misalignment estimates can be assessed by identifying the misalignment between a high-precision INS/GNSS and the IMU and vehicle coordinate systems. The effectiveness of the proposed calibration procedure is demonstrated using real sensor data. The results show that estimation accuracies below 0.1 degrees can be achieved in spite of moderate variations in the manoeuvre execution.
11. Monocular Visual SLAM Based on a Cooperative UAV-Target System. Sensors (Basel) 2020; 20:3531. PMID: 32580347; PMCID: PMC7378774; DOI: 10.3390/s20123531
Abstract
Autonomy in applications involving Unmanned Aerial Vehicles (UAVs) fundamentally requires self-localization and perception of the operational environment. GPS is the typical solution for determining the position of a UAV operating in outdoor, open environments, but it is unreliable in other settings, such as cluttered or indoor environments. In those scenarios, monocular SLAM (Simultaneous Localization and Mapping) methods are a good alternative: a monocular SLAM system allows a UAV to operate in an a priori unknown environment, using an onboard camera to build a map of its surroundings while simultaneously localizing itself with respect to that map. Given the problem of an aerial robot that must follow a free-moving cooperative target in a GPS-denied environment, this work presents a monocular SLAM approach for cooperative UAV-target systems that addresses the estimation of (i) the UAV position and velocity, (ii) the target position and velocity, and (iii) the landmark positions (the map). The proposed monocular SLAM system incorporates altitude measurements from an altimeter, and an observability analysis shows that these measurements improve the observability properties of the system. Furthermore, a novel technique that takes advantage of the cooperative target is proposed to estimate the approximate depth of new visual landmarks. Additionally, a control system is proposed for maintaining a stable flight formation of the UAV with respect to the target, with the stability of the control laws proved using Lyapunov theory. Experimental results obtained from real data, as well as computer simulations, show that the proposed scheme performs well.
12. Yan X, Guo H, Yu M, Xu Y, Cheng L, Jiang P. Light detection and ranging/inertial measurement unit-integrated navigation positioning for indoor mobile robots. Int J Adv Robot Syst 2020. DOI: 10.1177/1729881420919940
Abstract
To overcome the low accuracy and large accumulated errors of indoor mobile navigation and positioning, a method integrating light detection and ranging (LiDAR) and inertial measurement unit (IMU) measurements is proposed. First, a voxel scale-invariant feature transform feature extraction algorithm for LiDAR data is studied. Then, LiDAR measurement errors caused by changes in the scan plane are compensated using aiding information from the IMU. The relative position parameters and the differences between LiDAR measurements at adjacent times are used to estimate the IMU sensor errors with a Kalman filter. Several experiments carried out in an indoor corridor demonstrate that the LiDAR/IMU-integrated localization of the indoor mobile robot is more precise than that of a single LiDAR sensor.
Affiliation(s)
- Xiaoyi Yan
- Institute of Space Science and Technology, Nanchang University, Nanchang, China
- Hang Guo
- Institute of Space Science and Technology, Nanchang University, Nanchang, China
- Min Yu
- College of Computer Information Engineering, Jiangxi Normal University, Nanchang, China
- Yuan Xu
- School of Electrical Engineering, University of Jinan, Jinan, China
- Liang Cheng
- Institute of Space Science and Technology, Nanchang University, Nanchang, China
- Ping Jiang
- Institute of Space Science and Technology, Nanchang University, Nanchang, China
13. Stančin S, Tomažič S. Computationally Efficient 3D Orientation Tracking Using Gyroscope Measurements. Sensors (Basel) 2020; 20:2240. PMID: 32326632; PMCID: PMC7218895; DOI: 10.3390/s20082240
Abstract
Computationally efficient 3D orientation (3DO) tracking using gyroscope angular velocity measurements enables a short execution time and low energy consumption for the computing device. These are essential requirements in today's wearable device environments, which are characterized by limited resources and demands for high energy autonomy. We show that the computational efficiency of 3DO tracking is significantly improved by correctly interpreting each triplet of gyroscope measurements as simultaneous (using the rotation vector called the Simultaneous Orthogonal Rotation Angle, or SORA) rather than as sequential (using Euler angles) rotation. For an example rotation of 90°, depending on the change in the rotation axis, using Euler angles requires 35 to 78 times more measurement steps for comparable levels of accuracy, implying a higher sampling frequency and computational complexity. In general, the higher the demanded 3DO accuracy, the higher the computational advantage of using the SORA. Furthermore, we demonstrate that 12 to 14 times faster execution is achieved by adapting the SORA-based 3DO tracking to the architecture of the executing low-power ARM Cortex® M0+ microcontroller using only integer arithmetic, lookup tables, and the small-angle approximation. Finally, we show that the computational efficiency is further improved by choosing the appropriate 3DO computational method. Using rotation matrices is 1.85 times faster than using rotation quaternions when 3DO calculations are performed for each measurement step. On the other hand, using rotation quaternions is 1.75 times faster when only the final 3DO result of several consecutive rotations is needed. We conclude that by adopting the presented practices, the clock frequency of a processor computing the 3DO can be significantly reduced. This substantially prolongs the energy autonomy of the device and enhances its usability in day-to-day measurement scenarios.
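The contrast the abstract draws between interpreting a gyroscope triplet as one simultaneous rotation (SORA) versus sequential Euler-style rotations can be illustrated with a small sketch. This uses the generic Rodrigues rotation formula, not the paper's optimized integer-arithmetic implementation, and the sample rates are made up:

```python
import math

def rodrigues(wx, wy, wz, dt):
    """Rotation matrix for one gyroscope sample (wx, wy, wz) over dt,
    interpreted as a single simultaneous rotation: axis w/|w|, angle |w|*dt."""
    angle = math.sqrt(wx*wx + wy*wy + wz*wz) * dt
    if angle < 1e-12:
        return [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
    kx, ky, kz = wx*dt/angle, wy*dt/angle, wz*dt/angle
    c, s, v = math.cos(angle), math.sin(angle), 1 - math.cos(angle)
    return [
        [c + kx*kx*v,    kx*ky*v - kz*s, kx*kz*v + ky*s],
        [ky*kx*v + kz*s, c + ky*ky*v,    ky*kz*v - kx*s],
        [kz*kx*v - ky*s, kz*ky*v + kx*s, c + kz*kz*v   ],
    ]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# The same measurement (wx = wy = 1 rad/s over 1 s), interpreted two ways:
simultaneous = rodrigues(1, 1, 0, 1)                               # one SORA rotation
sequential = matmul(rodrigues(0, 1, 0, 1), rodrigues(1, 0, 0, 1))  # x-rotation, then y
# The results differ because finite rotations do not commute, which is why the
# sequential interpretation needs far smaller steps for the same accuracy.
```

Shrinking dt (i.e., raising the sampling frequency) makes the two interpretations converge, matching the abstract's observation that the Euler-angle reading requires many more measurement steps.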
14. Joukov V, Cesic J, Westermann K, Markovic I, Petrovic I, Kulic D. Estimation and Observability Analysis of Human Motion on Lie Groups. IEEE Trans Cybern 2020; 50:1321-1332. PMID: 31567105; DOI: 10.1109/tcyb.2019.2933390
Abstract
This article proposes a framework for human-pose estimation from wearable sensors that relies on a Lie group representation to model the geometry of human movement. Human body joints are modeled by matrix Lie groups, using the special orthogonal groups SO(2) and SO(3) for joint pose and the special Euclidean group SE(3) for base-link pose representation. To estimate the human joint pose, velocity, and acceleration, we develop the equations for employing the extended Kalman filter on Lie groups (LG-EKF) to explicitly account for the non-Euclidean geometry of the state space. We present the observability analysis of an arbitrarily long kinematic chain of SO(3) elements based on a differential geometric approach, representing a generalization of the kinematic chains of a human body. The observability is investigated for the system using marker position measurements. The proposed algorithm is compared with two competing approaches, 1) the extended Kalman filter (EKF) and 2) the unscented KF (UKF) based on the Euler angle parametrization, in both simulations and extensive real-world experiments. The results show that the proposed approach achieves significant improvements over the Euler angle-based filters: it provides more accurate pose estimates, is not sensitive to gimbal lock, and estimates the covariances more consistently.
15. Park HS, Shi J. Force from Motion: Decoding Control Force of Activity in a First-Person Video. IEEE Trans Pattern Anal Mach Intell 2020; 42:622-635. PMID: 30489262; DOI: 10.1109/tpami.2018.2883327
Abstract
A first-person video delivers what the camera wearer (actor) experiences through physical interactions with the surroundings. In this paper, we focus on the problem of Force from Motion: estimating, from a first-person video, the active force and torque exerted by the actor to drive her/his activity. We use two physical cues inherent in the first-person video. (1) Ego-motion: the camera motion is generated by a resultant of force interactions, which allows us to understand the effect of the active force using Newtonian mechanics. (2) Visual semantics: the first-person visual scene is deployed to afford the actor's activity, which is indicative of the physical context of the activity. We estimate the active force and torque using a dynamical system that describes the transition (dynamics) of the actor's physical state (position, orientation, and linear/angular momentum), where the latent physical state is indirectly observed by the first-person video. We approximate the physical state with the 3D camera trajectory, which is reconstructed up to scale and orientation. The absolute scale factor and gravitational field are learned from the ego-motion and visual semantics of the first-person video. Inspired by optimal control theory, we solve the dynamical system by minimizing reprojection error. Our method shows reconstruction quantitatively equivalent to IMU measurements in terms of gravity and scale recovery, and outperforms methods based on 2D optical flow on an active action recognition task. We apply our method to first-person videos of mountain biking, urban bike racing, skiing, speedflying with a parachute, and wingsuit flying, where inertial measurements are not accessible.
16. Moving Object Detection from Moving Camera Image Sequences Using an Inertial Measurement Unit Sensor. Appl Sci (Basel) 2019. DOI: 10.3390/app10010268
Abstract
This paper describes a new method for detecting moving objects from moving-camera image sequences using an inertial measurement unit (IMU) sensor. Motion detection systems with vision sensors have recently become a global research subject. However, detecting moving objects from a moving camera is a difficult task because of ego-motion. In the proposed method, interest points are extracted by a Harris detector, and the background and foreground are classified by epipolar geometry; in this procedure, an IMU sensor is used to calculate the initial fundamental matrix. After the feature-point classification, a transformation matrix is obtained from matching background feature points. Image registration is then applied to the consecutive images, and a difference map is extracted to find the foreground region. Finally, a minimum bounding box is applied to mark the detected moving object. The proposed method is implemented and tested on numerous real-world driving videos, and it outperforms previous work.
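The background/foreground split by epipolar geometry described in the abstract can be sketched as follows: a matched point that lies off the epipolar line induced by the camera's ego-motion is flagged as independently moving. The fundamental matrix here is a hand-picked pure-translation example and the 1-pixel threshold is an illustrative assumption (the paper derives the initial fundamental matrix from the IMU):

```python
import math

def epipolar_distance(F, p1, p2):
    """Distance (pixels) of point p2 from the epipolar line F @ p1.
    F: 3x3 fundamental matrix; p1, p2: homogeneous points (x, y, 1)."""
    line = [sum(F[i][k] * p1[k] for k in range(3)) for i in range(3)]
    num = abs(sum(line[i] * p2[i] for i in range(3)))
    return num / math.hypot(line[0], line[1])

def classify(F, matches, thresh=1.0):
    """Background points satisfy the epipolar constraint of the ego-motion;
    independently moving points violate it."""
    return [("background" if epipolar_distance(F, p1, p2) < thresh else "foreground")
            for p1, p2 in matches]

# Pure x-translation: F = [t]_x for t = (1, 0, 0), so epipolar lines are y = const.
F_translation = [[0, 0, 0], [0, 0, -1], [0, 1, 0]]
labels = classify(F_translation,
                  [((10, 5, 1), (20, 5, 1)),    # moved along x only
                   ((10, 5, 1), (12, 9, 1))])   # moved off its epipolar line
# labels == ["background", "foreground"]
```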
17. Feng Z, Li J, Zhang L, Chen C. Online Spatial and Temporal Calibration for Monocular Direct Visual-Inertial Odometry. Sensors (Basel) 2019; 19:2273. PMID: 31100933; PMCID: PMC6567321; DOI: 10.3390/s19102273
Abstract
Owing to the nonlinearity of visual-inertial state estimation, sufficiently accurate initial states, especially the spatial and temporal parameters between the IMU (Inertial Measurement Unit) and camera, must be provided to avoid divergence. Moreover, these parameters must be calibrated online, since they are likely to vary once the mechanical configuration changes slightly. Recently, direct approaches have gained popularity for their better performance than feature-based approaches in low-texture or low-illumination environments, taking advantage of tracking pixels directly. Based on these considerations, we develop a direct version of monocular VIO (Visual-Inertial Odometry) and propose a novel approach to initialize the spatial-temporal parameters and estimate them jointly with all other variables of interest (IMU pose, point inverse depth, etc.). Our approach performs robust and accurate initialization and online calibration of the spatial and temporal parameters without any prior information, and achieves high-precision estimates even when a large temporal offset occurs. The performance of the proposed approach was verified on a public UAV (Unmanned Aerial Vehicle) dataset.
Affiliation(s)
- Zheyu Feng
- Information Engineering University, Zhengzhou 450001, China
- Jianwen Li
- Information Engineering University, Zhengzhou 450001, China
- Lundong Zhang
- Information Engineering University, Zhengzhou 450001, China
- Chen Chen
- Information Engineering University, Zhengzhou 450001, China
18. Yang Y, Geneva P, Eckenhoff K, Huang G. Degenerate Motion Analysis for Aided INS With Online Spatial and Temporal Sensor Calibration. IEEE Robot Autom Lett 2019. DOI: 10.1109/lra.2019.2893803
19. Bai H, Taylor CN. Control-enabled Observability and Sensitivity Functions in Visual-Inertial Odometry. J Intell Robot Syst 2019. DOI: 10.1007/s10846-018-0808-6
20. Wang D, Chen H, Yang H, Xue S. Research on intelligent parking system algorithm based on camera calibration model. J Intell Fuzzy Syst 2018. DOI: 10.3233/jifs-169629
Affiliation(s)
- D. Wang
- School of Highway, Chang'an University, Shaanxi, China
- H. Chen
- School of Highway, Chang'an University, Shaanxi, China
- H. Yang
- Shaanxi Institute of Urban and Rural Planning and Design, China
- S. Xue
- School of Highway, Chang'an University, Shaanxi, China
21. Lee CR, Yoon JH, Yoon KJ. Calibration and Noise Identification of a Rolling Shutter Camera and a Low-Cost Inertial Measurement Unit. Sensors (Basel) 2018; 18:2345. PMID: 30029509; PMCID: PMC6069048; DOI: 10.3390/s18072345
Abstract
A low-cost inertial measurement unit (IMU) and a rolling shutter camera form a conventional device configuration for localization of a mobile platform, owing to their complementary properties and low cost. This paper proposes a new calibration method that jointly estimates the calibration and noise parameters of a low-cost IMU and a rolling shutter camera for effective sensor fusion, in which accurate sensor calibration is critical. Based on gray-box system identification, the proposed method estimates the unknown noise density so that the calibration error and its covariance can be minimized using an unscented Kalman filter. We then refine the estimated calibration parameters with the estimated noise density in a batch manner. Experimental results on synthetic and real data demonstrate the accuracy and stability of the proposed method and show that it provides consistent results even when the noise density of the IMU is unknown. Furthermore, a real experiment using a commercial smartphone validates the performance of the proposed calibration method on off-the-shelf devices.
Affiliation(s)
- Chang-Ryeol Lee
- School of Electrical Engineering and Computer Science, Gwangju Institute of Science and Technology (GIST), Gwangju 61005, Korea
- Ju Hong Yoon
- Korea Electronics Technology Institute (KETI), Seongnam-si 13509, Korea
- Kuk-Jin Yoon
- Department of Mechanical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon 34141, Korea
22. Tang H, Liu Y, Wang H. Constraint Gaussian Filter With Virtual Measurement for On-Line Camera-Odometry Calibration. IEEE Trans Robot 2018. DOI: 10.1109/tro.2018.2805312
23
Deilamsalehy H, Havens TC. Fuzzy adaptive extended Kalman filter for robot 3D pose estimation. INTERNATIONAL JOURNAL OF INTELLIGENT UNMANNED SYSTEMS 2018. [DOI: 10.1108/ijius-12-2017-0014] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Purpose
Estimating the pose – position and orientation – of a moving object such as a robot is a necessary task for many applications, e.g., robot navigation control, environment mapping, and medical applications such as robotic surgery. The purpose of this paper is to introduce a novel method to fuse the information from several available sensors in order to improve the estimated pose from any individual sensor and calculate a more accurate pose for the moving platform.
Design/methodology/approach
Pose estimation is usually done by collecting the data obtained from several sensors mounted on the object/platform and fusing the acquired information. Assuming that the robot is moving in a three-dimensional (3D) world, its pose is completely defined by six degrees of freedom (6DOF): three angles and three position coordinates. Some 3D sensors, such as IMUs and cameras, have been widely used for 3D localization. Other sensors, like 2D Light Detection And Ranging (LiDAR), can give a very precise estimate in a 2D plane, but are not employed for 3D estimation since they cannot observe the full 6DOF. However, in some applications the robot moves almost on a plane during the interval between two sensor readings, e.g., a ground vehicle moving on a flat surface or a drone flying at a nearly constant altitude to collect visual data. In this paper a novel method using a “fuzzy inference system” is proposed that employs a 2D LiDAR in a 3D localization algorithm in order to improve pose estimation accuracy.
Findings
The method determines the trajectory of the robot and the sensor reliability between two readings and based on this information defines the weight of the 2D sensor in the final fused pose by adjusting “extended Kalman filter” parameters. Simulation and real world experiments show that the pose estimation error can be significantly decreased using the proposed method.
Originality/value
To the best of the authors’ knowledge, this is the first time that a 2D LiDAR has been employed to improve 3D pose estimation in an unknown environment without any prior knowledge.
24
State-of-the-Art Mobile Intelligence: Enabling Robots to Move Like Humans by Estimating Mobility with Artificial Intelligence. APPLIED SCIENCES-BASEL 2018. [DOI: 10.3390/app8030379] [Citation(s) in RCA: 30] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
25
Monocular SLAM System for MAVs Aided with Altitude and Range Measurements: a GPS-free Approach. J INTELL ROBOT SYST 2018. [DOI: 10.1007/s10846-018-0775-y] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
26
Deng W, Papavasileiou I, Qiao Z, Zhang W, Lam KY, Han S. Advances in Automation Technologies for Lower Extremity Neurorehabilitation: A Review and Future Challenges. IEEE Rev Biomed Eng 2018; 11:289-305. [DOI: 10.1109/rbme.2018.2830805] [Citation(s) in RCA: 34] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
27

28
Yang D, Sun D, Liu Y, Liao S. Sensor to sensor calibration of the integrated INS/vision navigation system. INT J ADV ROBOT SYST 2017. [DOI: 10.1177/1729881417707322] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022] Open
Affiliation(s)
- Dongfang Yang
- Control Engineering department, Xi’an High-Tech Research Institution, Xi’an, People’s Republic of China
- Dawei Sun
- Control Engineering department, Xi’an High-Tech Research Institution, Xi’an, People’s Republic of China
- Yang Liu
- Control Engineering department, Xi’an High-Tech Research Institution, Xi’an, People’s Republic of China
- Shouyi Liao
- Control Engineering department, Xi’an High-Tech Research Institution, Xi’an, People’s Republic of China
29
Park Y, Choi Y, Seo Y. Globally optimal camera-and-rotation-sensor calibration with a branch-and-bound algorithm. APPLIED OPTICS 2017; 56:3462-3469. [PMID: 28430214 DOI: 10.1364/ao.56.003462] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
This paper introduces a globally optimal algorithm for obtaining the rotational displacement between the coordinate frames of a rotation sensor and a camera that are rigidly attached. Our method minimizes the geometrically meaningful error using a branch-and-bound algorithm to find the global solution. For this, we derive a bounding inequality and corresponding feasibility problem for a top-down efficient search over the rotation space to minimize the L1-, L2-, or L∞-norm error function. Experiments are performed with synthetic and real data sets to show the efficacy of the algorithm.
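For readers skimming this entry, the calibration problem it targets can be illustrated with the standard local least-squares baseline that such globally optimal methods improve on: aligning corresponding direction vectors (e.g., angular-velocity vectors) reported by the two rigidly attached sensors via the Kabsch/SVD method. This is a minimal sketch, not the paper's branch-and-bound algorithm; the function name and setup are hypothetical.

```python
import numpy as np

def align_rotation(omega_cam, omega_imu):
    """Least-squares estimate of the fixed rotation R such that
    omega_cam[i] ~ R @ omega_imu[i], via the Kabsch/SVD method.

    NOT the paper's branch-and-bound algorithm: this closed-form L2
    solution is the local baseline globally optimal methods improve on.
    omega_cam, omega_imu: (N, 3) arrays of corresponding vectors.
    """
    B = omega_cam.T @ omega_imu            # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(B)
    d = np.sign(np.linalg.det(U @ Vt))     # enforce a proper rotation (det = +1)
    return U @ np.diag([1.0, 1.0, d]) @ Vt
```

Unlike a branch-and-bound search over the rotation space, this closed form minimizes only the L2 error and offers no global-optimality certificate under outliers, which is the motivation for the approach above.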
30
Wang Z, Jin B, Geng W. Estimation of Antenna Pose in the Earth Frame Using Camera and IMU Data from Mobile Phones. SENSORS 2017; 17:s17040806. [PMID: 28397765 PMCID: PMC5422167 DOI: 10.3390/s17040806] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/20/2017] [Revised: 03/13/2017] [Accepted: 04/02/2017] [Indexed: 12/03/2022]
Abstract
The poses of base station antennas play an important role in cellular network optimization. Existing methods of pose estimation are based on physical measurements performed either by tower climbers or using additional sensors attached to antennas. In this paper, we present a novel non-contact method of antenna pose measurement based on multi-view images of the antenna and inertial measurement unit (IMU) data captured by a mobile phone. Given a known 3D model of the antenna, we first estimate the antenna pose relative to the phone camera from the multi-view images and then employ the corresponding IMU data to transform the pose from the camera coordinate frame into the Earth coordinate frame. To enhance the resulting accuracy, we improve existing camera-IMU calibration models by introducing additional degrees of freedom between the IMU sensors and defining a new error metric based on both the downtilt and azimuth angles, instead of a unified rotational error metric, to refine the calibration. In comparison with existing camera-IMU calibration methods, our method achieves an improvement in azimuth accuracy of approximately 1.0 degree on average while maintaining the same level of downtilt accuracy. For the pose estimation in the camera coordinate frame, we propose an automatic method of initializing the optimization solver and generating bounding constraints on the resulting pose to achieve better accuracy. With this initialization, state-of-the-art visual pose estimation methods yield satisfactory results in more than 75% of cases when plugged into our pipeline, and our solution, which takes advantage of the constraints, achieves even lower estimation errors on the downtilt and azimuth angles, both on average (0.13 and 0.3 degrees lower, respectively) and in the worst case (0.15 and 7.3 degrees lower, respectively), according to an evaluation conducted on a dataset consisting of 65 groups of data. 
We show that both of our enhancements contribute to the performance improvement offered by the proposed estimation pipeline, which achieves downtilt and azimuth accuracies of 0.47 and 5.6 degrees on average, respectively, and 1.38 and 12.0 degrees in the worst case, thereby satisfying the accuracy requirements for network optimization in the telecommunication industry.
Affiliation(s)
- Zhen Wang
- College of Computer Science and Technology, Zhejiang University, Zhejiang 310000, China.
- Bingwen Jin
- College of Computer Science and Technology, Zhejiang University, Zhejiang 310000, China.
- Weidong Geng
- College of Computer Science and Technology, Zhejiang University, Zhejiang 310000, China.
31
Munguia R, Urzua S, Grau A. Delayed Monocular SLAM Approach Applied to Unmanned Aerial Vehicles. PLoS One 2016; 11:e0167197. [PMID: 28033385 PMCID: PMC5198979 DOI: 10.1371/journal.pone.0167197] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2016] [Accepted: 11/10/2016] [Indexed: 11/18/2022] Open
Abstract
In recent years, many researchers have addressed the issue of making Unmanned Aerial Vehicles (UAVs) more autonomous. In this context, state estimation of the vehicle position is a fundamental necessity for any application involving autonomy. However, position estimation cannot be solved in some scenarios, even when a GPS signal is available, for instance, in applications requiring precision manoeuvres in a complex environment. Therefore, additional sensory information should be integrated into the system in order to improve accuracy and robustness. In this work, a novel vision-based simultaneous localization and mapping (SLAM) method with application to unmanned aerial vehicles is proposed. One contribution of this work is the design and development of a novel technique for estimating feature depth based on stochastic triangulation. In the proposed method, the camera is mounted on a servo-controlled gimbal that counteracts changes in the attitude of the quadcopter. This assumption simplifies the overall problem and focuses it on the position estimation of the aerial vehicle. The tracking of visual features is also made easier by the stabilized video. Another contribution of this work is to demonstrate that integrating very noisy GPS measurements into the system for an initial short period of time is enough to initialize the metric scale. The performance of the proposed method is validated through experiments with real data carried out in unstructured outdoor environments. A comparative study shows that, when compared with related methods, the proposed approach performs better in terms of accuracy and computational time.
Affiliation(s)
- Rodrigo Munguia
- Department of Computer Science, CUCEI, University of Guadalajara, Guadalajara, México
- * E-mail: (RM); (AG)
- Sarquis Urzua
- Department of Computer Science, CUCEI, University of Guadalajara, Guadalajara, México
- Antoni Grau
- Automatic Control Dept, Technical University of Catalonia, 08034 Barcelona, Spain
- * E-mail: (RM); (AG)
32
Xian Z, Lian J, Shan M, Zhang L, He X, Hu X. A square root unscented Kalman filter for multiple view geometry based stereo cameras/inertial navigation. INT J ADV ROBOT SYST 2016. [DOI: 10.1177/1729881416664850] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
Abstract
Exact motion estimation is one of the major tasks in autonomous navigation. Conventional Global Positioning System-aided inertial navigation systems are able to provide accurate locations; however, they are limited in a Global Positioning System-denied environment. In this paper, we present a square root unscented Kalman filter-based approach for navigation using only stereo cameras and an inertial sensor. The main contribution of this work is the development of a novel measurement model created by applying multiple view geometry constraints to the stereo cameras/inertial system. The measurement model does not require the three-dimensional feature positions in the state vector of the filter, which substantially reduces the size of the state vector and the computational burden. To incorporate this nonlinear and complex measurement model, a variant of the square root unscented Kalman filter-based algorithm is also proposed. The square root of the state covariance is propagated and updated directly in the filter, thereby avoiding decomposition of the state covariance and improving the stability of our algorithm. Experimental results based on a real outdoor dataset are presented to demonstrate the feasibility and accuracy of the proposed approach.
Affiliation(s)
- Zhiwen Xian
- College of Mechatronics and Automation, National University of Defense Technology, China
- Junxiang Lian
- College of Mechatronics and Automation, National University of Defense Technology, China
- Mao Shan
- Australian Centre for Field Robotics, The University of Sydney, Australia
- Lilian Zhang
- College of Mechatronics and Automation, National University of Defense Technology, China
- Xiaofeng He
- College of Mechatronics and Automation, National University of Defense Technology, China
- Xiaoping Hu
- College of Mechatronics and Automation, National University of Defense Technology, China
33
Abstract
SUMMARY: We propose a novel stereo visual IMU-assisted (Inertial Measurement Unit) technique that extends the use of the KLT tracker (Kanade–Lucas–Tomasi) to large inter-frame motion. The constrained and coherent inter-frame motion acquired from the IMU is applied to detected features through a homogeneous transform using 3D geometry and stereoscopy properties. This efficiently predicts the projection of the optical flow in subsequent images. Accurate adaptive tracking windows limit tracking areas, resulting in a minimum of lost features, and also prevent tracking of dynamic objects. This new feature-tracking approach is adopted as part of a fast and robust visual odometry algorithm based on the double dogleg trust-region method. Comparisons with gyro-aided KLT and variant approaches show that our technique is able to maintain minimal feature loss and low computational cost even on image sequences presenting significant scale change. A visual odometry solution based on this IMU-assisted KLT gives more accurate results than an INS/GPS solution for trajectory generation in certain contexts.
34
Gao S, Liu Y, Wang J, Deng W, Oh H. The Joint Adaptive Kalman Filter (JAKF) for Vehicle Motion State Estimation. SENSORS 2016; 16:s16071103. [PMID: 27438835 PMCID: PMC4970148 DOI: 10.3390/s16071103] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/06/2016] [Revised: 07/06/2016] [Accepted: 07/11/2016] [Indexed: 11/16/2022]
Abstract
This paper proposes a multi-sensory Joint Adaptive Kalman Filter (JAKF) that extends innovation-based adaptive estimation (IAE) to estimate the motion state of moving vehicles ahead. JAKF treats Lidar and Radar data as the sources of the local filters, which adaptively adjust the measurement noise variance-covariance (V-C) matrix ‘R’ and the system noise V-C matrix ‘Q’. The global filter then uses R to calculate the information allocation factor ‘β’ for data fusion. Finally, the global filter completes the optimal data fusion and feeds back to the local filters to improve their measurement accuracy. Extensive simulation and experimental results show that the JAKF has better adaptive ability and fault tolerance. JAKF bridges the accuracy gap between the various sensors to improve the overall filtering effectiveness. If any sensor breaks down, the filtered results of JAKF can still maintain a stable convergence rate. Moreover, the JAKF outperforms the conventional Kalman filter (CKF) and the innovation-based adaptive Kalman filter (IAKF) with respect to the accuracy of displacement, velocity, and acceleration, respectively.
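The innovation-based adaptive estimation (IAE) idea that JAKF extends can be sketched in a few lines: re-estimate the measurement-noise covariance R from a sliding window of filter innovations, then run the usual Kalman update. This is an illustrative simplification under assumed linear-Gaussian models, not the authors' JAKF; the function and variable names are hypothetical.

```python
import numpy as np

def iae_kalman_step(x, P, z, F, H, Q, R, innovations, window=10):
    """One Kalman step with innovation-based adaptive estimation of R.

    Hypothetical sketch of the local-filter idea: R is re-estimated from
    a sliding window of innovations v via C_v = mean(v v^T) and
    R ~ C_v - H P- H^T, falling back to the prior R when the estimate
    is not positive definite.
    """
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Innovation and its windowed sample covariance
    v = z - H @ x
    innovations.append(np.outer(v, v))
    if len(innovations) > window:
        innovations.pop(0)
    C_v = np.mean(innovations, axis=0)
    R_adapt = C_v - H @ P @ H.T
    if np.any(np.linalg.eigvalsh(R_adapt) <= 0):
        R_adapt = R                       # guard: keep R positive definite
    # Update
    S = H @ P @ H.T + R_adapt
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ v
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P, R_adapt
```

In JAKF this adaptation runs per local filter (one per sensor), and a global filter fuses the local estimates weighted by the adapted R matrices.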
Affiliation(s)
- Siwei Gao
- College of Computer Science and Technology, Jilin University, Changchun 130012, China.
- Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130012, China.
- Yanheng Liu
- College of Computer Science and Technology, Jilin University, Changchun 130012, China.
- Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130012, China.
- State Key Laboratory of Automotive Simulation and Control, Jilin University, Changchun 130012, China.
- Jian Wang
- College of Computer Science and Technology, Jilin University, Changchun 130012, China.
- Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130012, China.
- State Key Laboratory of Automotive Simulation and Control, Jilin University, Changchun 130012, China.
- Department of Computer Science and Engineering, Hanyang University, Ansan 426791, Korea.
- Weiwen Deng
- State Key Laboratory of Automotive Simulation and Control, Jilin University, Changchun 130012, China.
- Heekuck Oh
- Department of Computer Science and Engineering, Hanyang University, Ansan 426791, Korea.
35
Rehder J, Siegwart R, Furgale P. A General Approach to Spatiotemporal Calibration in Multisensor Systems. IEEE T ROBOT 2016. [DOI: 10.1109/tro.2016.2529645] [Citation(s) in RCA: 38] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
36
Vision-Based SLAM System for Unmanned Aerial Vehicles. SENSORS 2016; 16:s16030372. [PMID: 26999131 PMCID: PMC4813947 DOI: 10.3390/s16030372] [Citation(s) in RCA: 29] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/08/2015] [Revised: 03/07/2016] [Accepted: 03/09/2016] [Indexed: 11/24/2022]
Abstract
The present paper describes a vision-based simultaneous localization and mapping system to be applied to Unmanned Aerial Vehicles (UAVs). The main contribution of this work is to propose a novel estimator relying on an Extended Kalman Filter. The estimator is designed in order to fuse the measurements obtained from: (i) an orientation sensor (AHRS); (ii) a position sensor (GPS); and (iii) a monocular camera. The estimated state consists of the full state of the vehicle: position and orientation and their first derivatives, as well as the location of the landmarks observed by the camera. The position sensor will be used only during the initialization period in order to recover the metric scale of the world. Afterwards, the estimated map of landmarks will be used to perform a fully vision-based navigation when the position sensor is not available. Experimental results obtained with simulations and real data show the benefits of the inclusion of camera measurements into the system. In this sense the estimation of the trajectory of the vehicle is considerably improved, compared with the estimates obtained using only the measurements from the position sensor, which are commonly low-rated and highly noisy.
37
Aliakbarpour H, Prasath VBS, Palaniappan K, Seetharaman G, Dias J. Heterogeneous Multi-View Information Fusion: Review of 3-D Reconstruction Methods and a New Registration with Uncertainty Modeling. IEEE ACCESS 2016. [DOI: 10.1109/access.2016.2629987] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/05/2023]
38
Maye J, Sommer H, Agamennoni G, Siegwart R, Furgale P. Online self-calibration for robotic systems. Int J Rob Res 2015. [DOI: 10.1177/0278364915596232] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
We present a generic algorithm for self-calibration of robotic systems that utilizes two key innovations. First, it uses an information-theoretic measure to automatically identify and store novel measurement sequences. This keeps the computation tractable by discarding redundant information and allows the system to build a sparse but complete calibration dataset from data collected at different times. Second, as the full observability of the calibration parameters may not be guaranteed for an arbitrary measurement sequence, the algorithm detects and locks unobservable directions in parameter space using a combination of rank-revealing QR and singular value decompositions of the Fisher information matrix. The result is an algorithm that listens to an incoming sensor stream, builds a minimal set of data for estimating the calibration parameters, and updates parameters as they become observable, leaving the others locked at their initial guess. We validate our approach through an extensive set of simulated and real-world experiments.
Affiliation(s)
- Jérôme Maye
- Autonomous Systems Lab, ETH Zurich, Switzerland
39
Gui J, Gu D, Wang S, Hu H. A review of visual inertial odometry from filtering and optimisation perspectives. Adv Robot 2015. [DOI: 10.1080/01691864.2015.1057616] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
40
Planar-Based Visual Inertial Navigation: Observability Analysis and Motion Estimation. J INTELL ROBOT SYST 2015. [DOI: 10.1007/s10846-015-0257-4] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
41
Tully S, Choset H. A Filtering Approach for Image-Guided Surgery With a Highly Articulated Surgical Snake Robot. IEEE Trans Biomed Eng 2015; 63:392-402. [PMID: 26241966 DOI: 10.1109/tbme.2015.2461531] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
GOAL: The objective of this paper is to introduce a probabilistic filtering approach to estimate the pose and internal shape of a highly flexible surgical snake robot during minimally invasive surgery. METHODS: Our approach renders a depiction of the robot that is registered to preoperatively reconstructed organ models to produce a 3-D visualization that can be used for surgical feedback. Our filtering method estimates the robot shape using an extended Kalman filter that fuses magnetic tracker data with kinematic models that define the motion of the robot. Using Lie derivative analysis, we show that this estimation problem is observable, and thus, the shape and configuration of the robot can be successfully recovered with a sufficient number of magnetic tracker measurements. RESULTS: We validate this study with benchtop and in-vivo image-guidance experiments in which the surgical robot was driven along the epicardial surface of a porcine heart. CONCLUSION: This paper introduces a filtering approach for shape estimation that can be used for image guidance during minimally invasive surgery. SIGNIFICANCE: The methods being introduced in this paper enable informative image guidance for highly articulated surgical robots, which benefits the advancement of robotic surgery.
42
Furgale P, Tong CH, Barfoot TD, Sibley G. Continuous-time batch trajectory estimation using temporal basis functions. Int J Rob Res 2015. [DOI: 10.1177/0278364915585860] [Citation(s) in RCA: 39] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Roboticists often formulate estimation problems in discrete time for the practical reason of keeping the state size tractable; however, the discrete-time approach does not scale well for use with high-rate sensors, such as inertial measurement units, rolling-shutter cameras, or sweeping laser imaging sensors. The difficulty lies in the fact that a pose variable is typically included for every time at which a measurement is acquired, rendering the dimension of the state impractically large for large numbers of measurements. This issue is exacerbated for the simultaneous localization and mapping problem, which further augments the state to include landmark variables. To address this tractability issue, we propose to move the full Maximum-a-Posteriori estimation problem into continuous time and use temporal basis functions to keep the state size manageable. We present a full probabilistic derivation of the continuous-time estimation problem, derive an estimator based on the assumption that the densities and processes involved are Gaussian and show how the coefficients of a relatively small number of basis functions can form the state to be estimated, making the solution efficient. Our derivation is presented in steps of increasingly specific assumptions, opening the door to the development of other novel continuous-time estimation algorithms through the application of different assumptions at any point. We use the simultaneous localization and mapping problem as our motivation throughout the paper, although the approach is not specific to this application. Results from two experiments are provided to validate the approach: (i) self-calibration involving a camera and a high-rate inertial measurement unit, and (ii) perspective localization with a rolling-shutter camera.
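The core idea of the entry above — replacing a pose variable per measurement time with the coefficients of a small number of temporal basis functions — can be sketched for a one-dimensional trajectory. This toy example uses Gaussian basis functions and ordinary least squares rather than the paper's full probabilistic derivation; the function names, basis choice, and parameter values are assumptions for illustration only.

```python
import numpy as np

def basis_matrix(t, centers, width):
    """Evaluate Gaussian temporal basis functions: Phi[i, j] = phi_j(t_i)."""
    return np.exp(-((t[:, None] - centers[None, :]) / width) ** 2)

def fit_trajectory(t_meas, x_meas, centers, width):
    """Fit coefficients c so that x(t) ~ Phi(t) @ c in the least-squares sense.

    The state to estimate is the (small) coefficient vector c, not one
    variable per measurement time, which is the tractability argument of
    the continuous-time formulation.
    """
    Phi = basis_matrix(t_meas, centers, width)
    c, *_ = np.linalg.lstsq(Phi, x_meas, rcond=None)
    return c
```

With, say, 12 basis functions covering thousands of high-rate measurements, the estimation problem stays 12-dimensional regardless of the sensor rate, which is what makes IMU- or rolling-shutter-rate data tractable in this framework.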
43
Oskiper T, Sizintsev M, Branzoi V, Samarasekera S, Kumar R. Augmented Reality Binoculars. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2015; 21:611-623. [PMID: 26357208 DOI: 10.1109/tvcg.2015.2408612] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
In this paper we present an augmented reality binocular system that allows long-range, high-precision augmentation of live telescopic imagery with aerial and terrain-based synthetic objects, vehicles, people, and effects. The inserted objects must appear stable in the display and must not jitter or drift as the user pans around and examines the scene with the binoculars. The design of the system is based on two cameras, one with a wide-field-of-view lens and one with a narrow-field-of-view lens, enclosed in a binocular-shaped shell. The wide field of view gives us context and enables us to recover the 3D location and orientation of the binoculars much more robustly, whereas the narrow field of view is used for the actual augmentation as well as to increase tracking precision. We present our navigation algorithm, which combines the two cameras with an inertial measurement unit and a global positioning system in an extended Kalman filter and provides jitter-free, robust, real-time pose estimation for precise augmentation. We have demonstrated successful use of our system as part of an information-sharing example as well as a live simulated training system for observer training, in which fixed- and rotary-wing aircraft, ground vehicles, and weapon effects are combined with real-world scenes.
44

45
Zhang Y, Tan J, Zeng Z, Liang W, Xia Y. Monocular camera and IMU integration for indoor position estimation. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2015; 2014:1198-201. [PMID: 25570179 DOI: 10.1109/embc.2014.6943811] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
This paper presents a monocular camera (MC) and inertial measurement unit (IMU) integrated approach for indoor position estimation. Unlike traditional estimation methods, we fix the monocular camera facing downward at the floor and collect successive frames in which textures are orderly distributed and feature points robustly detected, rather than using a forward-oriented camera to sample unknown, disordered scenes with a pre-determined frame rate and auto-focused metric scale. Meanwhile, the camera adopts a constant metric scale and an adaptive frame rate determined by the IMU data. Furthermore, the corresponding distinctive feature-point matching approaches are employed for visual localization: optical flow for the fast-motion mode, and the Canny edge detector, Harris feature-point detector, and SIFT descriptor for the slow-motion mode. For superfast motion and abrupt rotation, where images from the camera are blurred and unusable, an extended Kalman filter is exploited to estimate the IMU outputs and derive the corresponding trajectory. Experimental results validate that our proposed method is effective and accurate in indoor positioning. Since our system is computationally efficient and compact, it is well suited for indoor navigation for visually impaired people and indoor localization for wheelchair users.
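The fallback behaviour described in this entry — dead reckoning on IMU data when camera frames are blurred, correcting when a visual fix is available — can be sketched with a toy one-axis tracker. This is a hypothetical complementary-style simplification, not the paper's EKF pipeline; the class name, fixed blend gain, and single-axis state are assumptions for illustration.

```python
class CameraImuPosition:
    """Toy 1-axis tracker: IMU dead reckoning between camera fixes.

    Hypothetical simplification of the camera/IMU integration idea:
    when a camera position fix is available it corrects the state with
    a fixed gain; during blur (no fix) acceleration is integrated alone.
    """

    def __init__(self, gain=0.5):
        self.p = 0.0        # position (m)
        self.v = 0.0        # velocity (m/s)
        self.gain = gain    # blend weight toward camera fixes

    def step(self, accel, dt, cam_fix=None):
        # IMU dead reckoning: constant acceleration over dt
        self.p += self.v * dt + 0.5 * accel * dt * dt
        self.v += accel * dt
        if cam_fix is not None:
            # complementary correction toward the camera position fix
            self.p += self.gain * (cam_fix - self.p)
        return self.p
```

A real system would replace the fixed gain with a Kalman gain derived from the IMU and camera noise models, and track full 3D position and orientation.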
46
Leutenegger S, Lynen S, Bosse M, Siegwart R, Furgale P. Keyframe-based visual–inertial odometry using nonlinear optimization. Int J Rob Res 2014. [DOI: 10.1177/0278364914554813] [Citation(s) in RCA: 945] [Impact Index Per Article: 94.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Combining visual and inertial measurements has become popular in mobile robotics, since the two sensing modalities offer complementary characteristics that make them the ideal choice for accurate visual–inertial odometry or simultaneous localization and mapping (SLAM). While historically the problem has been addressed with filtering, advancements in visual estimation suggest that nonlinear optimization offers superior accuracy while remaining tractable in complexity thanks to the sparsity of the underlying problem. Taking inspiration from these findings, we formulate a rigorously probabilistic cost function that combines reprojection errors of landmarks and inertial terms. The problem is kept tractable, ensuring real-time operation, by limiting the optimization to a bounded window of keyframes through marginalization. Keyframes may be spaced in time by arbitrary intervals, while still related by linearized inertial terms. We present evaluation results on complementary datasets recorded with our custom-built stereo visual–inertial hardware that accurately synchronizes accelerometer and gyroscope measurements with imagery. A comparison of both a stereo and a monocular version of our algorithm, with and without online extrinsics estimation, is shown with respect to ground truth. Furthermore, we compare the performance to an implementation of a state-of-the-art stochastic cloning sliding-window filter. This competitive reference implementation performs tightly coupled filtering-based visual–inertial odometry. While our approach declaredly demands more computation, we show its superior performance in terms of accuracy.
Affiliation(s)
- Stefan Leutenegger: Department of Computing, Imperial College London, London, UK; Autonomous Systems Laboratory (ASL), ETH Zurich, Switzerland
- Simon Lynen: Autonomous Systems Laboratory (ASL), ETH Zurich, Switzerland
- Michael Bosse: Autonomous Systems Laboratory (ASL), ETH Zurich, Switzerland
- Roland Siegwart: Autonomous Systems Laboratory (ASL), ETH Zurich, Switzerland
- Paul Furgale: Autonomous Systems Laboratory (ASL), ETH Zurich, Switzerland
47
Jia C, Evans BL. Online camera-gyroscope autocalibration for cell phones. IEEE Trans Image Process 2014; 23:5070-5081. [PMID: 25265608 DOI: 10.1109/tip.2014.2360120] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
The gyroscope plays a key role in estimating 3D camera rotation for various vision applications on cell phones, including video stabilization and feature tracking. Successful fusion of gyroscope and camera data requires that the camera, the gyroscope, and their relative pose be calibrated. In addition, the timestamps of gyroscope readings and video frames are usually not well synchronized. Previous work performed camera-gyroscope calibration and synchronization offline, after the entire video sequence had been captured and with restrictions on the camera motion, which is unnecessarily restrictive for everyday users running apps that directly use the gyroscope. In this paper, we propose an online method that estimates all the necessary parameters while a user is capturing video. Our contributions are: 1) simultaneous online camera self-calibration and camera-gyroscope calibration based on an implicit extended Kalman filter and 2) generalization of the multiple-view coplanarity constraint on camera rotation to a rolling-shutter camera model for cell phones. The proposed method can estimate the needed calibration and synchronization parameters online under all kinds of camera motion and can be embedded in gyro-aided applications such as video stabilization and feature tracking. Both Monte Carlo simulations and cell phone experiments show that the proposed online calibration and synchronization method converges quickly to the ground-truth values.
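The timestamp-synchronization part of such a calibration is often bootstrapped by correlating the camera-derived angular rate with the gyroscope signal and picking the lag with the strongest agreement. The sketch below shows that idea in its simplest discrete form; it is an illustrative assumption, not the paper's implicit-EKF method, and the signal names are hypothetical.

```python
# Illustrative camera-gyroscope time-offset recovery: pick the integer
# sample lag that maximizes the dot-product correlation between the
# camera-derived angular rate and the gyroscope angular rate.

def best_lag(cam_rate, gyro_rate, max_lag):
    """Return the lag (in samples) maximizing the correlation score."""
    best, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        score = 0.0
        for i, c in enumerate(cam_rate):
            j = i + lag
            if 0 <= j < len(gyro_rate):  # only overlapping samples count
                score += c * gyro_rate[j]
        if score > best_score:
            best, best_score = lag, score
    return best
```

A sub-sample offset would then be refined jointly with the other parameters, as the paper does inside its filter.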
48
Kelly J, Roy N, Sukhatme GS. Determining the Time Delay Between Inertial and Visual Sensor Measurements. IEEE T ROBOT 2014. [DOI: 10.1109/tro.2014.2343073] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
49
Zienkiewicz J, Davison A. Extrinsics Autocalibration for Dense Planar Visual Odometry. J FIELD ROBOT 2014. [DOI: 10.1002/rob.21547] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Affiliation(s)
- Jacek Zienkiewicz: Department of Computing, Imperial College London, London, United Kingdom
- Andrew Davison: Department of Computing, Imperial College London, London, United Kingdom
50
Birbach O, Frese U, Bäuml B. Rapid calibration of a multi-sensorial humanoid’s upper body: An automatic and self-contained approach. Int J Rob Res 2014. [DOI: 10.1177/0278364914548201] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
This paper addresses the problem of calibrating a pair of cameras, a Microsoft Kinect sensor, and an inertial measurement unit (IMU) mounted at the head of a humanoid robot with respect to its kinematic chain. As complex manipulation tasks require an accurate interplay of all involved sensors, the quality of calibration is crucial for the outcome of the intended tasks. Typical calibration procedures are often time-consuming, involve multiple people overseeing a series of subsequent calibration steps, and require external tools. We therefore propose to auto-calibrate all sensors in a single, completely automatic and self-contained procedure, i.e. without a calibration plate. By automatically detecting a single point feature on each wrist while moving the robot’s head, the intrinsic and extrinsic parameters of the stereo cameras and the Kinect’s infrared camera, as well as the IMU’s extrinsic parameters, are calibrated while accounting for arm joint elasticities and joint-angle offsets. All parameters are obtained by formulating the calibration problem as a single least-squares batch-optimization problem. The procedure is integrated on DLR’s humanoid robot Agile Justin, allowing an accurate calibration to be obtained in around 5 minutes by simply “pushing a button”. The proposed approach is experimentally validated by means of standard metrics of the calibration errors.
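A hypothetical 1-D analogue of the batch least-squares idea: stack all residuals between predicted and measured values and solve in closed form for the calibration parameters, here a single gain and offset. The linear model and the names below are assumptions for illustration, not DLR's actual multi-sensor calibration.

```python
# Batch least-squares sketch: estimate gain a and offset b in the
# hypothetical model  measured = a * commanded + b  from all samples at
# once, via the closed-form normal equations for a 1-D linear fit.

def fit_gain_offset(commanded, measured):
    """Return (a, b) minimizing sum((a*c + b - m)^2) over all samples."""
    n = len(commanded)
    sx = sum(commanded)
    sy = sum(measured)
    sxx = sum(c * c for c in commanded)
    sxy = sum(c * m for c, m in zip(commanded, measured))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b
```

The paper's formulation stacks many heterogeneous residuals (camera, Kinect, IMU, kinematics) into one such batch problem; the closed form above only exists because this toy model is linear in its two parameters.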
Affiliation(s)
- Oliver Birbach: DLR Institute of Robotics and Mechatronics, Wessling, Germany
- Udo Frese: University of Bremen, Bremen, Germany
- Berthold Bäuml: DLR Institute of Robotics and Mechatronics, Wessling, Germany