1. Visual SLAM for Unmanned Aerial Vehicles: Localization and Perception. Sensors (Basel, Switzerland) 2024; 24:2980. PMID: 38793834; PMCID: PMC11126069; DOI: 10.3390/s24102980. Received 03/28/2024; revised 05/01/2024; accepted 05/04/2024.
Abstract
Localization and perception play an important role as the basis of autonomous Unmanned Aerial Vehicle (UAV) applications, providing the internal state of movements and the external understanding of environments. Simultaneous Localization and Mapping (SLAM), one of the critical techniques for localization and perception, is undergoing a technical upgrade driven by developments in embedded hardware, multi-sensor technology, and artificial intelligence. This survey focuses on the development of visual SLAM as a basis for UAV applications. Solutions to critical problems in visual SLAM are presented by reviewing state-of-the-art and newly proposed algorithms, charting research progress and directions in three essential aspects: real-time performance, texture-less environments, and dynamic environments. Visual-inertial fusion and learning-based enhancement are discussed for UAV localization and perception to illustrate their role in UAV applications. Subsequently, trends in UAV localization and perception are outlined. The algorithm components, camera configurations, and data processing methods are also introduced to give comprehensive preliminaries. In this paper, we provide coverage of visual SLAM and its related technologies over the past decade, with a specific focus on autonomous UAV applications. We summarize the current research, reveal potential problems, and outline future trends from academic and engineering perspectives.
2. A Geomagnetic/Odometry Integrated Localization Method for Differential Robot Using Real-Time Sequential Particle Filter. Sensors (Basel, Switzerland) 2024; 24:2120. PMID: 38610333; PMCID: PMC11013976; DOI: 10.3390/s24072120. Received 03/05/2024; revised 03/23/2024; accepted 03/23/2024.
Abstract
Geomagnetic matching navigation is extensively utilized for localization and navigation of autonomous robots and vehicles owing to its advantages such as low cost, wide-area coverage, and no cumulative errors. However, due to the influence of magnetometer measurement noise, geomagnetic localization algorithms based on single-point particle filters may encounter mismatches during continuous operation, consequently limiting their long-range localization performance. To address this issue, this paper proposes a real-time sequential particle filter-based geomagnetic localization method. Firstly, this method mitigates the impact of noise during continuous operation while ensuring real-time performance by performing real-time sequential particle filtering. Then, it enhances the long-range positioning accuracy of the method by rectifying the trajectory shape of the odometry through odometry calibration parameters. Finally, by performing secondary matching on the preliminary matching results via the MAGCOM algorithm, the positioning error of the method is further minimized. Experimental results show that the proposed method has higher positioning accuracy compared to related algorithms, resulting in reductions of over 28.58%, 37.11%, and 0.77% in RMSE, max error, and error at the end, respectively.
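As a rough sketch of the sequential idea, the toy 1-D filter below weights each particle against the last few magnetometer readings rather than a single point, which suppresses the mismatches described above. The map function, noise levels, and window length are invented for illustration and are not taken from the paper:

```python
import math
import random

random.seed(0)

# Hypothetical 1-D geomagnetic map: field strength (uT) as a function of position (m).
def field(x):
    return 50.0 + 3.0 * math.sin(0.8 * x) + 1.5 * math.cos(2.1 * x)

N, STEP, SIGMA, WINDOW = 500, 0.5, 0.3, 5     # particles, odometry step (m), noise, window
particles = [random.uniform(0.0, 20.0) for _ in range(N)]
weights = [1.0 / N] * N
true_x, readings = 2.0, []

for _ in range(30):
    true_x += STEP                                             # robot drives forward
    readings.append(field(true_x) + random.gauss(0.0, SIGMA))  # noisy magnetometer sample
    particles = [p + STEP + random.gauss(0.0, 0.05) for p in particles]  # odometry prediction
    # Sequential weighting: score each particle against the last WINDOW readings,
    # not just the newest one, so a single noisy sample cannot cause a mismatch.
    recent = readings[-WINDOW:]
    for i, p in enumerate(particles):
        w = 1.0
        for k, z in enumerate(reversed(recent)):               # k steps into the past
            w *= math.exp(-0.5 * ((z - field(p - k * STEP)) / SIGMA) ** 2)
        weights[i] = w
    total = sum(weights) or 1e-300
    weights = [w / total for w in weights]
    # Systematic resampling keeps particles concentrated on likely positions.
    cum, resampled, u, j = 0.0, [], random.random() / N, 0
    for _ in range(N):
        while j < N - 1 and u > cum + weights[j]:
            cum += weights[j]
            j += 1
        resampled.append(particles[j])
        u += 1.0 / N
    particles = resampled

estimate = sum(particles) / N
print(round(estimate, 2), round(true_x, 2))   # the estimate should track the true position
```

The secondary MAGCOM matching step of the paper is not modeled here; the sketch only shows why a window of readings is more discriminative than a single point.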
3. From Pixels to Precision: A Survey of Monocular Visual Odometry in Digital Twin Applications. Sensors (Basel, Switzerland) 2024; 24:1274. PMID: 38400432; PMCID: PMC10891866; DOI: 10.3390/s24041274. Received 01/24/2024; revised 02/13/2024; accepted 02/13/2024.
Abstract
This survey provides a comprehensive overview of traditional techniques and deep learning-based methodologies for monocular visual odometry (VO), with a focus on displacement measurement applications. This paper outlines the fundamental concepts and general procedures for VO implementation, including feature detection, tracking, motion estimation, triangulation, and trajectory estimation. This paper also explores the research challenges inherent in VO implementation, including scale estimation and ground plane considerations. The scientific literature is rife with diverse methodologies aiming to overcome these challenges, particularly focusing on the problem of accurate scale estimation. This issue has been typically addressed through the reliance on knowledge regarding the height of the camera from the ground plane and the evaluation of feature movements on that plane. Alternatively, some approaches have utilized additional tools, such as LiDAR or depth sensors. This survey of approaches concludes with a discussion of future research challenges and opportunities in the field of monocular visual odometry.
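The camera-height approach to scale recovery mentioned above can be illustrated with a small sketch: if the camera's mounting height above the ground plane is known and ground points are triangulated only up to scale, the ratio of the true height to the apparent height of the camera over the fitted plane gives the metric scale. All numbers below are hypothetical:

```python
import math

# Known mounting height of the camera above the road (metres) -- the one metric anchor.
CAMERA_HEIGHT = 1.65

def plane_from_points(p1, p2, p3):
    """Plane (unit normal n, offset d) through three 3-D points: n.p + d = 0."""
    ux, uy, uz = (p2[i] - p1[i] for i in range(3))
    vx, vy, vz = (p3[i] - p1[i] for i in range(3))
    nx, ny, nz = uy*vz - uz*vy, uz*vx - ux*vz, ux*vy - uy*vx   # cross product
    norm = math.sqrt(nx*nx + ny*ny + nz*nz)
    nx, ny, nz = nx/norm, ny/norm, nz/norm
    d = -(nx*p1[0] + ny*p1[1] + nz*p1[2])
    return (nx, ny, nz), d

def metric_scale(ground_points_unscaled):
    """Scale factor mapping VO's arbitrary units to metres, from three triangulated
    ground-plane points expressed in the camera frame (camera at the origin)."""
    _, d = plane_from_points(*ground_points_unscaled)
    dist_unscaled = abs(d)            # distance from the camera origin to the plane
    return CAMERA_HEIGHT / dist_unscaled

# Toy example: ground points triangulated at half their true size, so the camera
# appears only 0.825 units above the plane and the recovered scale should be 2.0.
pts = [(0.0, -0.825, 2.0), (0.5, -0.825, 3.0), (-0.5, -0.825, 4.0)]
s = metric_scale(pts)
t_unscaled = (0.0, 0.0, 0.6)          # up-to-scale frame-to-frame translation
t_metric = tuple(s * c for c in t_unscaled)
print(s, t_metric)                    # 2.0 (0.0, 0.0, 1.2)
```

A real pipeline would fit the plane robustly (e.g. RANSAC over many tracked ground features) rather than from three hand-picked points.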
4. Inertial Navigation on Extremely Resource-Constrained Platforms: Methods, Opportunities and Challenges. Proceedings of the IEEE/ION Position, Location and Navigation Symposium (PLANS) 2023:708-723. PMID: 37736264; PMCID: PMC10512424; DOI: 10.1109/plans53410.2023.10139997.
Abstract
Inertial navigation provides a small footprint, low-power, and low-cost pathway for localization in GPS-denied environments on extremely resource-constrained Internet-of-Things (IoT) platforms. Traditionally, application-specific heuristics and physics-based kinematic models are used to mitigate the curse of drift in inertial odometry. These techniques, albeit lightweight, fail to handle domain shifts and environmental non-linearities. Recently, deep neural-inertial sequence learning has shown superior odometric resolution in capturing non-linear motion dynamics without human knowledge over heuristic-based methods. These AI-based techniques are data-hungry, suffer from excessive resource usage, and cannot guarantee following the underlying system physics. This paper highlights the unique methods, opportunities, and challenges in porting real-time AI-enhanced inertial navigation algorithms onto IoT platforms. First, we discuss how platform-aware neural architecture search coupled with ultra-lightweight model backbones can yield neural-inertial odometry models that are 31-134× smaller yet achieve or exceed the localization resolution of state-of-the-art AI-enhanced techniques. The framework can generate models suitable for locating humans, animals, underwater sensors, aerial vehicles, and precision robots. Next, we showcase how techniques from neurosymbolic AI can yield physics-informed and interpretable neural-inertial navigation models. Afterward, we present opportunities for fine-tuning pre-trained odometry models in a new domain with as little as 1 minute of labeled data, while discussing inexpensive data collection and labeling techniques. Finally, we identify several open research challenges that demand careful consideration moving forward.
5. RIOT: Recursive Inertial Odometry Transformer for Localisation from Low-Cost IMU Measurements. Sensors (Basel, Switzerland) 2023; 23:3217. PMID: 36991926; PMCID: PMC10057007; DOI: 10.3390/s23063217. Received 02/24/2023; revised 03/10/2023; accepted 03/14/2023.
Abstract
Inertial localisation is an important technique as it enables ego-motion estimation in conditions where external observers are unavailable. However, low-cost inertial sensors are inherently corrupted by bias and noise, which lead to unbounded errors, making direct integration for position intractable. Traditional mathematical approaches are reliant on prior system knowledge and geometric theories, and are constrained by predefined dynamics. Recent advances in deep learning, which benefit from ever-increasing volumes of data and computational power, allow for data-driven solutions that offer a more comprehensive understanding. Existing deep inertial odometry solutions rely on estimating latent states, such as velocity, or are dependent on fixed sensor positions and periodic motion patterns. In this work, we propose taking the traditional recursive state estimation methodology and applying it in the deep learning domain. Our approach, which incorporates true position priors in the training process, is trained on inertial measurements and ground truth displacement data, allowing recursion and learning of both motion characteristics and systemic error bias and drift. We present two end-to-end frameworks for pose-invariant deep inertial odometry that utilise self-attention to capture both spatial features and long-range dependencies in inertial data. We evaluate our approaches against a custom 2-layer Gated Recurrent Unit, trained in the same manner on the same data, and tested each approach on a number of different users, devices and activities. Each network achieved a sequence-length-weighted relative trajectory error mean ≤ 0.4594 m, highlighting the effectiveness of the learning process used in the development of the models.
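The "unbounded errors" from direct integration are easy to demonstrate: a constant accelerometer bias, however small, integrates once into a linearly growing velocity error and twice into a quadratically growing position error. The bias and sampling rate below are illustrative order-of-magnitude values, not figures from the paper:

```python
# A small constant accelerometer bias grows quadratically in position when the
# measurement is integrated twice -- the unbounded drift that motivates
# learned inertial odometry. Numbers are illustrative only.
BIAS = 0.02      # m/s^2, typical order for a low-cost MEMS accelerometer residual bias
DT = 0.01        # 100 Hz sampling

def drift_after(seconds):
    v = x = 0.0
    for _ in range(int(seconds / DT)):
        v += BIAS * DT          # first integration: velocity error grows linearly
        x += v * DT             # second integration: position error grows quadratically
    return x

for t in (10, 60, 300):
    print(t, "s:", round(drift_after(t), 2), "m")
# closed form: x(t) ~ 0.5 * BIAS * t^2 -> about 1 m after 10 s, 900 m after 5 min
```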
6. Improving Odometric Model Performance Based on LSTM Networks. Sensors (Basel, Switzerland) 2023; 23:961. PMID: 36679759; PMCID: PMC9863937; DOI: 10.3390/s23020961. Received 12/22/2022; revised 01/08/2023; accepted 01/09/2023.
Abstract
This paper presents a localization system for an autonomous wheelchair that includes several sensors, such as odometers, LIDARs, and an IMU. It focuses on improving the odometric localization accuracy using an LSTM neural network. Improved odometry will improve the result of the localization algorithm, yielding a more accurate pose. The localization system is composed of a neural network designed to estimate the current pose using the odometric encoder information as input. Training is carried out by analyzing multiple random paths, with the Velodyne sensor data defined as the training ground truth. During wheelchair navigation, the localization system retrains the network in real time to adjust for any change or systematic error that occurs with respect to the initial conditions. Furthermore, another network manages to avoid certain random errors by using the relationship between the power consumed by the motors and the actual wheel speeds. The experimental results show several examples that demonstrate the ability to self-correct against variations over time, and to detect non-systematic errors in different situations using this relation. The final robot localization is improved with the designed odometric model compared to classic robot localization based on sensor fusion with a static covariance.
7. MAV Localization in Large-Scale Environments: A Decoupled Optimization/Filtering Approach. Sensors (Basel, Switzerland) 2023; 23:516. PMID: 36617114; PMCID: PMC9824358; DOI: 10.3390/s23010516. Received 10/31/2022; revised 12/28/2022; accepted 12/29/2022.
Abstract
Developing new sensor fusion algorithms has become indispensable to tackle the daunting problem of GPS-aided micro aerial vehicle (MAV) localization in large-scale landscapes. Sensor fusion should guarantee high-accuracy estimation with the least amount of system delay. Towards this goal, we propose a linear optimal state estimation approach for the MAV that avoids complicated and high-latency calculations, together with an immediate metric-scale recovery paradigm that uses low-rate noisy GPS measurements when available. Our proposed strategy shows how the vision sensor can quickly bootstrap a pose that has been arbitrarily scaled and recover from the various drifts that affect vision-based algorithms. Thanks to our proposed optimization/filtering-based methodology, the camera can be treated as a "black-box" pose estimator, which keeps the sensor fusion algorithm's computational complexity low and makes it suitable for the MAV's long-term operation in expansive areas. Because GPS sensors provide only limited global tracking and localization data, our MAV localization solution accounts for the sensor measurement uncertainty constraints under such circumstances. Extensive quantitative and qualitative analyses utilizing real-world, large-scale MAV sequences demonstrate the higher performance of our technique in comparison to the most recent state-of-the-art algorithms in terms of trajectory estimation accuracy and system latency.
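The metric-scale recovery idea, anchoring an arbitrarily scaled visual trajectory with sparse noisy GPS fixes, can be sketched as a one-parameter least-squares fit. This is a hypothetical stand-in for the paper's actual formulation, shown only to make the scale-ambiguity problem concrete:

```python
import math

def recover_scale(vision_positions, gps_positions):
    """Least-squares metric scale s minimising sum_i (g_i - s*v_i)^2, where g_i and
    v_i are matched GPS and visual frame-to-frame displacements. The camera is
    treated as a black-box pose source with arbitrary scale."""
    num = den = 0.0
    for i in range(1, len(gps_positions)):
        g = math.dist(gps_positions[i], gps_positions[i - 1])   # metres
        v = math.dist(vision_positions[i], vision_positions[i - 1])  # arbitrary units
        num += g * v
        den += v * v
    return num / den

# Toy data: vision trajectory at 1/4 of true scale, GPS fixes in metres with noise.
vision = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.5), (3.0, 1.0)]
gps    = [(0.0, 0.0), (4.1, 0.0), (7.9, 2.0), (12.0, 4.1)]
s = recover_scale(vision, gps)
print(round(s, 2))            # close to 4 despite the GPS noise
```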
8. Towards Accurate Ground Plane Normal Estimation from Ego-Motion. Sensors (Basel, Switzerland) 2022; 22:9375. PMID: 36502078; PMCID: PMC9741436; DOI: 10.3390/s22239375. Received 09/29/2022; revised 11/26/2022; accepted 11/29/2022.
Abstract
In this paper, we introduce a novel approach for ground plane normal estimation for wheeled vehicles. In practice, the ground plane changes dynamically due to braking and unstable road surfaces, so the vehicle pose, especially the pitch angle, oscillates to a degree ranging from subtle to obvious. Estimating the ground plane normal is therefore meaningful, since it can be encoded to improve the robustness of various autonomous driving tasks (e.g., 3D object detection, road surface reconstruction, and trajectory planning). Our proposed method uses only odometry as input and estimates accurate ground plane normal vectors in real time. In particular, it fully utilizes the underlying connection between the ego pose odometry (ego-motion) and the nearby ground plane. Built on that, an Invariant Extended Kalman Filter (IEKF) is designed to estimate the normal vector in the sensor's coordinate frame. Our proposed method is thus simple yet efficient, and supports both camera- and inertial-based odometry algorithms. Its usability and the marked improvement in robustness are validated through multiple experiments on public datasets. For instance, we achieve state-of-the-art accuracy on the KITTI dataset with an estimated vector error of 0.39°.
9. NR-UIO: NLOS-Robust UWB-Inertial Odometry Based on Interacting Multiple Model and NLOS Factor Estimation. Sensors (Basel, Switzerland) 2021; 21:s21237886. PMID: 34883890; PMCID: PMC8659580; DOI: 10.3390/s21237886. Received 09/25/2021; revised 10/29/2021; accepted 11/23/2021.
Abstract
Recently, technology utilizing ultra-wideband (UWB) sensors for robot localization in indoor environments where the global navigation satellite system (GNSS) cannot be used has begun to be actively studied. UWB-based positioning has the advantage of working even in environments lacking feature points, which is a limitation of positioning based on existing vision or LiDAR sensing. However, UWB-based positioning requires the pre-installation of UWB anchors and precise knowledge of their coordinates. In addition, when using a sensor that measures only the one-dimensional distance between the UWB anchor and the tag, there is a limitation whereby the position of the robot can be solved but its orientation cannot be acquired. To overcome this, a framework based on an interacting multiple model (IMM) filter that tightly integrates an inertial measurement unit (IMU) sensor and a UWB sensor is proposed in this paper. However, UWB-based distance measurement introduces large errors in multipath environments with obstacles or walls between the anchor and the tag, which degrades positioning performance. Therefore, we propose a non-line-of-sight (NLOS) robust UWB ranging model to improve the pose estimation performance. Finally, the localization performance of the proposed framework is verified through experiments in real indoor environments.
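The underlying ranging problem, and why an NLOS bias matters, can be sketched with plain Gauss-Newton trilateration over one-dimensional anchor-to-tag distances. The anchor layout and the bias value are invented for the example; the paper's IMM/NLOS-factor machinery is not modeled here:

```python
import math

ANCHORS = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]  # surveyed anchor positions

def trilaterate(ranges, guess=(5.0, 5.0), iters=20):
    """Gauss-Newton fit of a 2-D tag position to UWB range measurements."""
    x, y = guess
    for _ in range(iters):
        # Accumulate the normal equations (J^T J) d = J^T r for residuals
        # r_i = measured_i - |p - anchor_i|.
        a11 = a12 = a22 = b1 = b2 = 0.0
        for (ax, ay), d in zip(ANCHORS, ranges):
            dx, dy = x - ax, y - ay
            dist = math.hypot(dx, dy) or 1e-9
            r = d - dist
            jx, jy = -dx / dist, -dy / dist        # d(residual)/dx, d(residual)/dy
            a11 += jx * jx; a12 += jx * jy; a22 += jy * jy
            b1 += jx * r;   b2 += jy * r
        det = a11 * a22 - a12 * a12
        if abs(det) < 1e-12:
            break
        x -= (a22 * b1 - a12 * b2) / det           # Gauss-Newton update
        y -= (a11 * b2 - a12 * b1) / det
    return x, y

true = (3.0, 4.0)
clean = [math.dist(true, a) for a in ANCHORS]
print(trilaterate(clean))                          # recovers ~(3.0, 4.0)
nlos = clean[:]
nlos[0] += 1.5                                     # multipath adds a positive range bias
print(trilaterate(nlos))                           # visibly biased fix: motivates NLOS handling
```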
10. Drosophila re-zero their path integrator at the center of a fictive food patch. Curr Biol 2021; 31:4534-4546.e5. PMID: 34450090; PMCID: PMC8551043; DOI: 10.1016/j.cub.2021.08.006. Received 01/21/2021; revised 07/12/2021; accepted 08/02/2021.
Abstract
The ability to keep track of one's location in space is a critical behavior for animals navigating to and from a salient location, and its computational basis is now beginning to be unraveled. Here, we tracked flies in a ring-shaped channel as they executed bouts of search triggered by optogenetic activation of sugar receptors. Unlike experiments in open field arenas, which produce highly tortuous search trajectories, our geometrically constrained paradigm enabled us to monitor flies' decisions to move toward or away from the fictive food. Our results suggest that flies use path integration to remember the location of a food site even after it has disappeared, and flies can remember the location of a former food site even after walking around the arena one or more times. To determine the behavioral algorithms underlying Drosophila search, we developed multiple state transition models and found that flies likely accomplish path integration by combining odometry and compass navigation to keep track of their position relative to the fictive food. Our results indicate that whereas flies re-zero their path integrator at food when only one feeding site is present, they adjust their path integrator to a central location between sites when experiencing food at two or more locations. Together, this work provides a simple experimental paradigm and theoretical framework to advance investigations of the neural basis of path integration.
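The odometry-plus-compass accumulator implied by these models can be sketched schematically: each stride adds a displacement along the current compass heading, the accumulated vector points home, and encountering food re-zeroes it. This is a toy illustration, not the paper's state-transition models:

```python
import math

class PathIntegrator:
    """Toy path integrator: accumulates odometry steps along a compass heading,
    maintaining a vector back to the zero point (e.g. the food site)."""
    def __init__(self):
        self.x = self.y = 0.0

    def step(self, distance, heading_rad):
        """One stride of odometry combined with the compass heading."""
        self.x += distance * math.cos(heading_rad)
        self.y += distance * math.sin(heading_rad)

    def home_vector(self):
        """Distance and bearing back to the zero point."""
        return math.hypot(self.x, self.y), math.atan2(-self.y, -self.x)

    def rezero(self):
        """Encountering food resets the accumulator, as the flies appear to do."""
        self.x = self.y = 0.0

pi_ = PathIntegrator()
pi_.step(3.0, 0.0)            # 3 units east
pi_.step(4.0, math.pi / 2)    # 4 units north
dist, bearing = pi_.home_vector()
print(round(dist, 2), round(math.degrees(bearing), 1))   # 5.0 -126.9 (back toward origin)
```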
11. Comparison and Evaluation of Integrity Algorithms for Vehicle Dynamic State Estimation in Different Scenarios for an Application in Automated Driving. Sensors (Basel, Switzerland) 2021; 21:s21041458. PMID: 33669776; PMCID: PMC7923085; DOI: 10.3390/s21041458. Received 01/07/2021; revised 02/08/2021; accepted 02/15/2021.
Abstract
High-integrity information about the vehicle’s dynamic state, including position and heading (yaw angle), is required in order to implement automated driving functions. In this work, a comparison of three integrity algorithms for the vehicle dynamic state estimation of a research vehicle for an application in automated driving is presented. Requirements for this application are derived from the literature. All implemented integrity algorithms output a protection level for the position and heading solution. In the comparison, four measurement data sets obtained for the vehicle dynamic state estimation, which is based on a Global Navigation Satellite System (GNSS) receiver, inertial measurement units and odometry information (wheel speeds and steering angles), are used. The data sets represent four driving scenarios with different environmental conditions, especially regarding satellite signal reception. Overall, the Kalman Integrated Protection Level demonstrated the best performance of the three implemented integrity algorithms. Its protection level bounds the position error within the specified integrity risk in all four chosen scenarios. For the heading error, this also holds true, with a slight exception in the very challenging urban scenario.
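The core notion of a protection level, a bound that the true error exceeds only with the allocated integrity risk, can be sketched as a Gaussian overbound of a Kalman filter's reported variance. This is a deliberate simplification of the idea; the Kalman Integrated Protection Level evaluated in the paper involves more than this, and the numbers below are illustrative:

```python
from statistics import NormalDist

def protection_level(variance, integrity_risk):
    """Gaussian-overbound protection level: scale the filter's 1-sigma position
    uncertainty so the error exceeds the bound only with probability
    `integrity_risk` (two-sided)."""
    k = NormalDist().inv_cdf(1.0 - integrity_risk / 2.0)   # two-sided quantile
    return k * variance ** 0.5

# Example: the estimator reports 0.25 m^2 horizontal position variance and the
# application allocates an integrity risk of 1e-7.
pl = protection_level(0.25, 1e-7)
print(round(pl, 2))        # roughly 2.7 m; raise an alarm if the alert limit is smaller
```

A smaller allocated risk widens the bound, which is exactly the availability/integrity trade-off such algorithms negotiate.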
12. Modular Approach for Odometry Localization Method for Vehicles with Increased Maneuverability. Sensors (Basel, Switzerland) 2020; 21:s21010079. PMID: 33375569; PMCID: PMC7795507; DOI: 10.3390/s21010079. Received 12/02/2020; revised 12/17/2020; accepted 12/22/2020.
Abstract
Localization and navigation not only serve to provide positioning and route guidance information for users, but are also important inputs for vehicle control. This paper investigates the possibility of using odometry to estimate the position and orientation of a vehicle with a wheel-individual steering system in omnidirectional parking maneuvers. Vehicle models and sensors have been identified for this application. Several odometry versions are designed using a modular approach, developed in this paper to help users design state estimators. The different odometry versions have been implemented and validated both in a simulation environment and in real driving tests. The evaluation shows that the versions using more models, and using state variables within those models, provide both more accurate and more robust estimation.
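The dead-reckoning principle such odometry versions build on can be sketched with the classic incremental pose update for a differential wheel pair; this is a generic building block, not the paper's wheel-individual steering models:

```python
import math

def propagate(pose, d_left, d_right, track_width):
    """One odometry update: incremental left/right wheel travel -> pose change.
    Midpoint integration advances along the average heading of the interval."""
    x, y, th = pose
    d = 0.5 * (d_left + d_right)               # distance of the axle midpoint
    dth = (d_right - d_left) / track_width     # heading change
    x += d * math.cos(th + 0.5 * dth)
    y += d * math.sin(th + 0.5 * dth)
    return x, y, th + dth

pose = (0.0, 0.0, 0.0)
for _ in range(100):                           # gentle constant left turn
    pose = propagate(pose, 0.099, 0.101, 0.5)
print(tuple(round(v, 3) for v in pose))        # ends on a 25 m-radius arc: ~(9.735, 1.973, 0.4)
```

Chaining such updates accumulates encoder error, which is precisely why the paper augments the basic model with additional vehicle models and state variables.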
13. A Novel IMU Extrinsic Calibration Method for Mass Production Land Vehicles. Sensors (Basel, Switzerland) 2020; 21:s21010007. PMID: 33374942; PMCID: PMC7792609; DOI: 10.3390/s21010007. Received 11/17/2020; revised 12/13/2020; accepted 12/16/2020.
Abstract
Multi-modal sensor fusion has become ubiquitous in the field of vehicle motion estimation. Achieving a consistent sensor fusion in such a set-up demands the precise knowledge of the misalignments between the coordinate systems in which the different information sources are expressed. In ego-motion estimation, even sub-degree misalignment errors lead to serious performance degradation. The present work addresses the extrinsic calibration of a land vehicle equipped with standard production car sensors and an automotive-grade inertial measurement unit (IMU). Specifically, the article presents a method for the estimation of the misalignment between the IMU and vehicle coordinate systems, while considering the IMU biases. The estimation problem is treated as a joint state and parameter estimation problem, and solved using an adaptive estimator that relies on the IMU measurements, a dynamic single-track model as well as the suspension and odometry systems. Additionally, we show that the validity of the misalignment estimates can be assessed by identifying the misalignment between a high-precision INS/GNSS and the IMU and vehicle coordinate systems. The effectiveness of the proposed calibration procedure is demonstrated using real sensor data. The results show that estimation accuracies below 0.1 degrees can be achieved in spite of moderate variations in the manoeuvre execution.
14. Autonomous Road Roundabout Detection and Navigation System for Smart Vehicles and Cities Using Laser Simulator-Fuzzy Logic Algorithms and Sensor Fusion. Sensors (Basel, Switzerland) 2020; 20:s20133694. PMID: 32630340; PMCID: PMC7374500; DOI: 10.3390/s20133694. Received 04/30/2020; revised 06/18/2020; accepted 06/24/2020.
Abstract
A real-time roundabout detection and navigation system for smart vehicles and cities using laser simulator–fuzzy logic algorithms and sensor fusion in a road environment is presented in this paper. A wheeled mobile robot (WMR) is supposed to navigate autonomously on the road in real-time and reach a predefined goal while discovering and detecting the road roundabout. A complete modeling and path planning of the road’s roundabout intersection was derived to enable the WMR to navigate autonomously in indoor and outdoor terrains. A new algorithm, called Laser Simulator, has been introduced to detect various entities in a road roundabout setting, which is later integrated with fuzzy logic algorithm for making the right decision about the existence of the roundabout. The sensor fusion process involving the use of a Wi-Fi camera, laser range finder, and odometry was implemented to generate the robot’s path planning and localization within the road environment. The local maps were built using the extracted data from the camera and laser range finder to estimate the road parameters such as road width, side curbs, and roundabout center, all in two-dimensional space. The path generation algorithm was fully derived within the local maps and tested with a WMR platform in real-time.
15. Self-Adaptive Filtering Approach for Improved Indoor Localization of a Mobile Node with Zigbee-Based RSSI and Odometry. Sensors (Basel, Switzerland) 2019; 19:s19214748. PMID: 31683837; PMCID: PMC6864824; DOI: 10.3390/s19214748. Received 10/08/2019; revised 10/24/2019; accepted 10/30/2019.
Abstract
This study presents a new technique to improve the indoor localization of a mobile node by utilizing a Zigbee-based received-signal-strength indicator (RSSI) and odometry. As both methods suffer from their own limitations, this work contributes to a novel methodological framework in which coordinates of the mobile node can more accurately be predicted by improving the path-loss propagation model and optimizing the weighting parameter for each localization technique via a convex search. A self-adaptive filtering approach is also proposed which autonomously optimizes the weighting parameter during the target node's translational and rotational motions, thus resulting in an efficient localization scheme with less computational effort. Several real-time experiments consisting of four different trajectories with different number of straight paths and curves were carried out to validate the proposed methods. Both temporal and spatial analyses demonstrate that when odometry data and RSSI values are available, the proposed methods provide significant improvements on localization performance over existing approaches.
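The two ingredients, a path-loss propagation model inverted to obtain range from RSSI and a weighted combination with odometry, can be sketched as follows. The reference power, path-loss exponent, and weight are illustrative values, not the paper's fitted parameters:

```python
# Log-distance path-loss model: RSSI(d) = RSSI(d0) - 10*n*log10(d/d0).
# Inverting it turns a Zigbee RSSI reading into a range estimate.
RSSI_D0, N_EXP, D0 = -45.0, 2.4, 1.0   # dBm at reference distance, exponent, d0 (m)

def rssi_to_distance(rssi):
    return D0 * 10.0 ** ((RSSI_D0 - rssi) / (10.0 * N_EXP))

def fuse(p_rssi, p_odom, w):
    """Convex combination of the two position estimates. The paper's self-adaptive
    filter tunes w online, e.g. trusting odometry more during rotations."""
    return tuple(w * a + (1.0 - w) * b for a, b in zip(p_rssi, p_odom))

d = rssi_to_distance(-69.0)
print(round(d, 2))                        # -69 dBm -> 10.0 m with these parameters
print(fuse((4.0, 2.0), (3.0, 2.4), 0.3))  # w=0.3 leans toward the odometry estimate
```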
16. End-to-End Learning Framework for IMU-Based 6-DOF Odometry. Sensors (Basel, Switzerland) 2019; 19:s19173777. PMID: 31480413; PMCID: PMC6749526; DOI: 10.3390/s19173777. Received 07/11/2019; revised 08/22/2019; accepted 08/29/2019.
Abstract
This paper presents an end-to-end learning framework for performing 6-DOF odometry using only inertial data obtained from a low-cost IMU. The proposed inertial odometry method allows leveraging inertial sensors that are widely available on mobile platforms for estimating their 3D trajectories. For this purpose, neural networks based on convolutional layers combined with a two-layer stacked bidirectional LSTM are explored from the following three aspects. First, two 6-DOF relative pose representations are investigated: one based on a vector in the spherical coordinate system, and the other based on both a translation vector and a unit quaternion. Second, the loss function in the network is designed as a combination of several 6-DOF pose distance metrics: mean squared error, translation mean absolute error, quaternion multiplicative error, and quaternion inner product. Third, a multi-task learning framework is integrated to automatically balance the weights of the multiple metrics. In the evaluation, qualitative and quantitative analyses were conducted with publicly available inertial odometry datasets. The best combination of relative pose representation and loss function was the translation-and-quaternion representation together with the translation mean absolute error and quaternion multiplicative error, which obtained more accurate results than state-of-the-art inertial odometry techniques.
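Two of the pose distance metrics named above, the quaternion multiplicative error and the quaternion inner product, can be written down directly. This is a plain-Python illustration of the metrics themselves, not the paper's network code:

```python
import math

def quat_mul(q, r):
    """Hamilton product of two (w, x, y, z) quaternions."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def quat_conj(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

def quat_mult_error(q_pred, q_true):
    """Multiplicative error: vector part of q_pred * conj(q_true);
    zero when the two rotations coincide."""
    _, x, y, z = quat_mul(q_pred, quat_conj(q_true))
    return abs(x) + abs(y) + abs(z)

def quat_inner_error(q_pred, q_true):
    """Inner-product error: 1 - |<q_pred, q_true>|, invariant to quaternion sign."""
    return 1.0 - abs(sum(a * b for a, b in zip(q_pred, q_true)))

identity = (1.0, 0.0, 0.0, 0.0)
yaw10 = (math.cos(math.radians(5.0)), 0.0, 0.0, math.sin(math.radians(5.0)))  # 10 deg about z
neg = tuple(-c for c in yaw10)                   # same rotation, opposite sign

print(quat_mult_error(yaw10, yaw10))             # 0.0 for identical rotations
print(round(quat_inner_error(yaw10, identity), 5))
print(quat_inner_error(yaw10, neg) < 1e-12)      # True: the metric ignores the sign ambiguity
```

The sign invariance is why these metrics are preferred over a naive component-wise difference, which would penalise q and -q even though they encode the same rotation.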
17. Neural Network Based Uncertainty Prediction for Autonomous Vehicle Application. Front Neurorobot 2019; 13:12. PMID: 31133839; PMCID: PMC6524408; DOI: 10.3389/fnbot.2019.00012. Received 10/08/2018; accepted 03/18/2019. Open access.
Abstract
This paper proposes a framework for uncertainty prediction in complex fusion networks, where signals become available sporadically. Assuming no information about the sensor characteristics is available, a surrogate model of the sensor uncertainty is learned directly from data through artificial neural networks. The strategy developed is applied to autonomous vehicle localization through odometry sensors (speed and orientation), so as to determine the location uncertainty along the trajectory. The results obtained allow for fusion of autonomous vehicle location measurements, and effective correction of the accumulated odometry error in most scenarios. The neural networks' applicability and generalization capacity are proven, evidencing the suitability of the presented methodology for uncertainty estimation in non-linear and intractable processes.
18. Benefits of Multi-Constellation/Multi-Frequency GNSS in a Tightly Coupled GNSS/IMU/Odometry Integration Algorithm. Sensors (Basel, Switzerland) 2018; 18:s18093052. PMID: 30213078; PMCID: PMC6163901; DOI: 10.3390/s18093052. Received 06/14/2018; revised 09/06/2018; accepted 09/07/2018.
Abstract
Localization algorithms based on global navigation satellite systems (GNSS) play an important role in automotive positioning. Due to the advent of autonomously driving cars, their importance is expected to grow even further in the next years. Simultaneously, the performance requirements for these localization algorithms will increase because they are no longer used exclusively for navigation, but also for control of the vehicle’s movement. These requirements cannot be met with GNSS alone. Instead, algorithms for sensor data fusion are needed. While the combination of GNSS receivers with inertial measurements units (IMUs) is a common approach, it is traditionally executed in a single-frequency/single-constellation architecture, usually with the Global Positioning System’s (GPS) L1 C/A signal. With the advent of new GNSS constellations and civil signals on multiple frequencies, GNSS/IMU integration algorithm performance can be improved by utilizing these new data sources. To achieve this, we upgraded a tightly coupled GNSS/IMU integration algorithm to process measurements from GPS (L1 C/A, L2C, L5) and Galileo (E1, E5a, E5b). After investigating various combination strategies, we chose to preferably work with ionosphere-free combinations of L5-L1 C/A and E5a-E1 pseudo-ranges. L2C-L1 C/A and E5b-E1 combinations as well as single-frequency pseudo-ranges on L1 and E1 serve as backup when no L5/E5a measurements are available. To be able to process these six types of pseudo-range observations simultaneously, the differential code biases (DCBs) of the employed receiver need to be calibrated. Time-differenced carrier-phase measurements on L1 and E1 provide the algorithm with pseudo-range-rate observations. To provide additional aiding, information about the vehicle’s velocity obtained by an odometry model fed with angular velocities from all four wheels as well as the steering wheel angle is incorporated into the algorithm. 
To evaluate the performance improvement provided by these new data sources, two sets of measurement data are collected and the resulting navigation solutions are compared to a higher-grade reference system, consisting of a geodetic GNSS receiver for real-time kinematic positioning (RTK) and a navigation grade IMU. The multi-frequency/multi-constellation algorithm with odometry aiding achieves a 3-D root mean square (RMS) position error of 3.6 m/2.1 m in these data sets, compared to 5.2 m/2.9 m for the single-frequency GPS algorithm without odometry aiding. Odometry is most beneficial to positioning accuracy when GNSS measurement quality is poor. This is demonstrated in data set 1, resulting in a reduction of the horizontal position error’s 95% quantile from 6.2 m without odometry aiding to 4.2 m with odometry aiding.
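As an aside on the mechanics behind such combinations: the first-order ionospheric delay on a pseudo-range scales with 1/f², so a frequency-weighted difference of dual-frequency measurements cancels it. A minimal sketch of the standard ionosphere-free combination, assuming the published GPS L1/L5 carrier frequencies (which Galileo E1/E5a share); this illustrates the textbook formula, not the authors' implementation:

```python
# First-order ionospheric delay scales as 1/f^2, so the weighted difference
# below ("ionosphere-free" combination) removes it from the pseudo-range.

F_L1 = 1575.42e6  # GPS L1 / Galileo E1 carrier frequency [Hz]
F_L5 = 1176.45e6  # GPS L5 / Galileo E5a carrier frequency [Hz]

def iono_free(p1: float, p2: float, f1: float = F_L1, f2: float = F_L5) -> float:
    """Ionosphere-free combination of pseudo-ranges p1 (on f1) and p2 (on f2)."""
    return (f1**2 * p1 - f2**2 * p2) / (f1**2 - f2**2)
```

The same function covers an E5b-E1 or L2C-L1 combination by changing `f2`; the price of the combination is amplified measurement noise, which is one reason single-frequency pseudo-ranges remain a useful fallback.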
Collapse
|
19
|
Improving Odometric Accuracy for an Autonomous Electric Cart. SENSORS 2018; 18:s18010200. [PMID: 29329205 PMCID: PMC5795339 DOI: 10.3390/s18010200] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/13/2017] [Revised: 01/09/2018] [Accepted: 01/10/2018] [Indexed: 11/17/2022]
Abstract
In this paper, a study of the odometric system of the autonomous cart Verdino, an electric vehicle based on a golf cart, is presented. A mathematical model of the odometric system is derived from the cart's movement equations and is used to compute the vehicle's position and orientation. The inputs of the system are the odometry encoders, and the model uses the wheel diameter and the distance between the wheels as parameters. With this model, a least-squares minimization is performed to obtain the best nominal parameters. The model is then updated to include a real-time wheel-diameter measurement, improving the accuracy of the results. A neural network is used to learn the odometric model from data. Tests are made with this neural network in several configurations, and the results are compared to the mathematical model, showing that the neural network can outperform the first proposed model.
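The movement equations behind such an encoder-based model are the standard differential-drive dead-reckoning update. A minimal sketch with hypothetical parameter values (encoder resolution, wheel diameter, track width are made up for illustration; this is not the Verdino code):

```python
import math

def odometry_step(x, y, theta, ticks_l, ticks_r,
                  ticks_per_rev=1024, wheel_diameter=0.4, track_width=1.2):
    """One dead-reckoning update from left/right wheel encoder tick counts.

    Units: metres and radians; wheel_diameter and track_width are the
    calibrated parameters the abstract's least-squares fit would estimate.
    """
    per_tick = math.pi * wheel_diameter / ticks_per_rev  # metres per tick
    d_l = ticks_l * per_tick                             # left wheel arc
    d_r = ticks_r * per_tick                             # right wheel arc
    d = (d_l + d_r) / 2.0                  # distance travelled by the centre
    d_theta = (d_r - d_l) / track_width    # heading change
    # Midpoint integration of the unicycle model
    x += d * math.cos(theta + d_theta / 2.0)
    y += d * math.sin(theta + d_theta / 2.0)
    return x, y, theta + d_theta
```

Because position is the integral of these increments, small errors in `wheel_diameter` or `track_width` accumulate with distance, which is why the paper's online wheel-diameter update and learned model pay off.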
Collapse
|
20
|
Abstract
This study provides a framework for a single-camera odometry system for localizing a surgical C-arm base. An application-specific monocular visual odometry system (a downward-looking consumer-grade camera rigidly attached to the C-arm base) is proposed in this research. The cumulative dead-reckoning estimation of the base is extracted based on frame-to-frame homography estimation. Optical-flow results are utilized to feed the odometry. Online positional and orientation parameters are then reported. Positional accuracy of better than 2% (of the total traveled distance) for most of the cases and 4% for all the cases studied and angular accuracy of better than 2% (of absolute cumulative changes in orientation) were achieved with this method. This study provides a robust and accurate tracking framework that not only can be integrated with the current C-arm joint-tracking system (i.e. TC-arm) but also is capable of being employed for similar applications in other fields (e.g. robotics).
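Cumulative dead reckoning from frame-to-frame estimates amounts to composing per-frame planar rigid transforms. A minimal sketch, assuming each homography has already been decomposed into an in-plane increment (dx, dy, dθ) in the previous frame's coordinates; the function name and interface are illustrative, not from the paper:

```python
import math

def compose(pose, step):
    """Chain a cumulative planar pose (x, y, theta) with a frame-to-frame
    increment (dx, dy, dtheta) expressed in the previous frame's axes."""
    x, y, th = pose
    dx, dy, dth = step
    # Rotate the increment into the world frame, then accumulate.
    return (x + dx * math.cos(th) - dy * math.sin(th),
            y + dx * math.sin(th) + dy * math.cos(th),
            th + dth)

pose = (0.0, 0.0, 0.0)
# Forward 1 m while turning 90 degrees, then forward 1 m again:
for step in [(1.0, 0.0, math.pi / 2), (1.0, 0.0, 0.0)]:
    pose = compose(pose, step)
```

Chaining transforms like this is what makes the estimate "cumulative dead reckoning": each frame's small rotation error is applied to all subsequent translations, so the percentage-of-distance error figures quoted above are the natural accuracy metric.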
Collapse
|
21
|
Estimation of self-motion duration and distance in rodents. ROYAL SOCIETY OPEN SCIENCE 2016; 3:160118. [PMID: 27293792 PMCID: PMC4892454 DOI: 10.1098/rsos.160118] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 02/18/2016] [Accepted: 04/26/2016] [Indexed: 06/06/2023]
Abstract
Spatial orientation and navigation rely on information about landmarks and self-motion cues gained from multi-sensory sources. In this study, we focused on self-motion and examined the capability of rodents to extract and make use of information about their own movement, i.e. path integration. Path integration has been investigated in depth in insects and humans. Demonstrations in rodents, however, mostly stem from experiments on heading direction; less is known about distance estimation. We introduce a novel behavioural paradigm that allows for probing temporal and spatial contributions to path integration. The paradigm is a bisection task comprising movement in a virtual reality environment combined with either timing the duration of the run or estimating the distance covered. We performed experiments with Mongolian gerbils and showed that the animals can keep track of time and distance during spatial navigation.
Collapse
|
22
|
Abstract
In 1709, Berkeley hypothesized of the human that distance is measurable by 'the motion of his body, which is perceivable by touch'. To be sufficiently general and reliable, Berkeley's hypothesis must imply that distance measured by legged locomotion approximates actual distance, with the measure invariant to gait, speed and number of steps. We studied blindfolded human participants in a task in which they travelled by legged locomotion from a fixed starting point A to a variable terminus B, and then reproduced, by legged locomotion from B, the A-B distance. The outbound ('measure') and return ('report') gait could be the same or different, with similar or dissimilar step sizes and step frequencies. In five experiments we manipulated bipedal gait according to the primary versus secondary distinction revealed in symmetry group analyses of locomotion patterns. Berkeley's hypothesis held only when the measure and report gaits were of the same symmetry class, indicating that idiothetic distance measurement is gait-symmetry specific. Results suggest that human odometry (and perhaps animal odometry more generally) entails variables that encompass the limbs in coordination, such as global phase, and not variables at the level of the single limb, such as step length and step number, as traditionally assumed.
Collapse
|