1. Ding H, Zhang B, Zhou J, Yan Y, Tian G, Gu B. Recent developments and applications of simultaneous localization and mapping in agriculture. J Field Robot 2022. DOI: 10.1002/rob.22077
Affiliation(s)
- Haizhou Ding (Department of Electronic Information, College of Artificial Intelligence, Nanjing Agricultural University, Nanjing, Jiangsu, China)
- Baohua Zhang (Department of Automation, College of Artificial Intelligence, Nanjing Agricultural University, Nanjing, Jiangsu, China)
- Jun Zhou (Department of Agricultural Engineering, College of Engineering, Nanjing Agricultural University, Nanjing, Jiangsu, China)
- Yaxuan Yan (Department of Electronic Information, College of Artificial Intelligence, Nanjing Agricultural University, Nanjing, Jiangsu, China)
- Guangzhao Tian (Department of Agricultural Engineering, College of Engineering, Nanjing Agricultural University, Nanjing, Jiangsu, China)
- Baoxing Gu (Department of Agricultural Engineering, College of Engineering, Nanjing Agricultural University, Nanjing, Jiangsu, China)
2. SLC-VIO: a stereo visual-inertial odometry based on structural lines and points belonging to lines. Robotica 2022. DOI: 10.1017/s0263574721001958
Abstract
To improve mobile robot positioning accuracy in building environments and construct structural three-dimensional (3D) maps, this paper proposes a stereo visual-inertial odometry (VIO) system based on structural lines and points belonging to lines. The 2-degree-of-freedom (DoF) spatial structural lines based on the Manhattan world assumption are used to establish visual measurement constraints. The property of point belonging to a line (PPBL) is used to initialize the structural lines and establish spatial distance-residual constraints between point and line landmarks in the reconstructed 3D map. Compared with the 4-DoF spatial straight line, the 2-DoF structural line reduces the variables to be estimated and introduces the orientation information of scenes to the VIO system. The utilization of PPBL makes the proposed system fully exploit the prior geometric information of environments and then achieves better performance. Tests on public data sets and real-world experiments show that the proposed system can achieve higher positioning accuracy and construct 3D maps that better reflect the structure of scenes than existing VIO approaches.
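To make the 2-DoF claim concrete, here is a minimal sketch in our own notation (not the paper's): under the Manhattan assumption a structural line's direction is pinned to one of the three dominant axes, so only its offset in the plane orthogonal to that axis must be estimated, and the PPBL constraint penalizes the point-to-line distance.

```latex
% Structural line with fixed direction d (a Manhattan axis) and 2-DoF offset (a, b):
\mathbf{L}(a,b) = \{\, a\,\mathbf{u} + b\,\mathbf{v} + t\,\mathbf{d} \mid t \in \mathbb{R} \,\},
\qquad \mathbf{d} \in \{\mathbf{e}_x,\mathbf{e}_y,\mathbf{e}_z\},\;\; \mathbf{u},\mathbf{v} \perp \mathbf{d}
% PPBL residual: distance of a point landmark x from its parent line
r_{\mathrm{PPBL}}(\mathbf{x}) =
\bigl\lVert (\mathbf{I} - \mathbf{d}\mathbf{d}^{\top})\,(\mathbf{x} - a\,\mathbf{u} - b\,\mathbf{v}) \bigr\rVert
```

Only (a, b) are estimated per line, which is where the reduction from 4 to 2 DoF comes from; the fixed axis d is what injects scene orientation into the VIO system.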
3. Uncertainty Estimation of Dense Optical Flow for Robust Visual Navigation. Sensors 2021; 21(22):7603. PMID: 34833677; PMCID: PMC8619691; DOI: 10.3390/s21227603
Abstract
This paper presents a novel dense optical-flow algorithm to solve the monocular simultaneous localisation and mapping (SLAM) problem for ground or aerial robots. Dense optical flow can effectively provide the ego-motion of the vehicle while enabling collision avoidance with potential obstacles. Existing research has not fully utilised the uncertainty of the optical flow; at most, an isotropic Gaussian density model has been used. We estimate the full uncertainty of the optical flow and propose a new eight-point algorithm based on the statistical Mahalanobis distance. Combined with pose-graph optimisation, the proposed method demonstrates enhanced robustness and accuracy on the public autonomous-car dataset (KITTI) and an aerial monocular dataset.
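As a rough illustration of how full flow covariances can replace the isotropic assumption, here is a hedged sketch (our construction, not the paper's algorithm) of an eight-point solve whose rows are normalized by the variance obtained by propagating each 2x2 flow covariance through the epipolar residual:

```python
import numpy as np

def weighted_eight_point(x1, x2, cov2, iters=3):
    """Hypothetical sketch, not the paper's method: eight-point estimate of
    the fundamental matrix with per-correspondence flow covariances (cov2:
    Nx2x2, in image 2) used as Mahalanobis-style row weights. Assumes
    normalized image coordinates for numerical stability."""
    n = x1.shape[0]
    w = np.ones(n)
    F = np.eye(3)
    for _ in range(iters):
        # Build DLT rows a_i such that a_i . f = x2^T F x1 = 0.
        A = np.column_stack([
            x2[:, 0]*x1[:, 0], x2[:, 0]*x1[:, 1], x2[:, 0],
            x2[:, 1]*x1[:, 0], x2[:, 1]*x1[:, 1], x2[:, 1],
            x1[:, 0], x1[:, 1], np.ones(n),
        ]) * w[:, None]
        _, _, Vt = np.linalg.svd(A)
        F = Vt[-1].reshape(3, 3)
        U, S, Vt2 = np.linalg.svd(F)
        F = U @ np.diag([S[0], S[1], 0.0]) @ Vt2   # enforce rank 2
        # Reweight: propagate each 2x2 flow covariance through the
        # gradient of the epipolar residual w.r.t. the point in image 2.
        h1 = np.column_stack([x1, np.ones(n)])
        lines = h1 @ F.T                  # epipolar lines F*x1 in image 2
        g = lines[:, :2]                  # d(residual)/d(x2)
        var = np.einsum('ni,nij,nj->n', g, cov2, g) + 1e-12
        w = 1.0 / np.sqrt(var)            # Mahalanobis normalization
    return F
```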
4. Multiple Drone Navigation and Formation Using Selective Target Tracking-Based Computer Vision. Electronics 2021; 10(17):2125. DOI: 10.3390/electronics10172125
Abstract
Autonomous unmanned aerial vehicles work seamlessly within GPS signal range, but their performance deteriorates in GPS-denied regions. This paper presents a unique collaborative computer-vision-based approach that tracks a target at a user-specified location of interest in the image. The proposed method tracks any object without considering its properties such as shape, color, size, or pattern, provided the target remains visible and within line of sight during tracking. The method gives the user the freedom to select any target in the image and to form a drone formation around it. For each drone, we calculate parameters such as the distance and angle from the image center to the object. Among all the drones, the one with the strongest GPS signal or the one nearest to the target is chosen as the master drone, which calculates the relative angle and distance between the object and the other drones using approximate geolocation. Compared with actual measurements, tests on a quadrotor UAV frame achieve 99% location accuracy in a robust environment, within the same GPS longitude and latitude block as GPS-only navigation methods. The individual drones communicate with the ground station through a telemetry link, and the master drone computes the parameters from data collected at the ground station. Various formation-flying methods help escort the other drones to the desired objective with a single high-resolution first-person-view (FPV) camera. The proposed method is tested on an Airborne Object Target Tracking (AOT) aerial vehicle model and achieves high tracking accuracy.
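The distance-and-angle computation from the image center can be illustrated with a small sketch (a hypothetical helper, not the paper's code; camera intrinsics K are assumed known):

```python
import numpy as np

def target_offset(u, v, K):
    """Hypothetical helper: pixel offset and bearing angles of a tracked
    target relative to the image center, from pixel coordinates (u, v)
    and the camera intrinsic matrix K."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    dx, dy = u - cx, v - cy          # offset from image center, in pixels
    yaw = np.arctan2(dx, fx)         # horizontal bearing (rad)
    pitch = np.arctan2(dy, fy)       # vertical bearing (rad)
    return dx, dy, yaw, pitch

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
print(target_offset(400.0, 260.0, K))
```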
5. Pessanha Santos N, Lobo V, Bernardino A. Unscented Particle Filters with Refinement Steps for UAV Pose Tracking. J Intell Robot Syst 2021. DOI: 10.1007/s10846-021-01409-y
6. Optical Navigation Sensor for Runway Relative Positioning of Aircraft during Final Approach. Sensors 2021; 21(6):2203. PMID: 33801137; PMCID: PMC8004248; DOI: 10.3390/s21062203
Abstract
Precise navigation is often performed by fusing data from different sensors. Among these, optical sensors use image features to obtain the position and attitude of the camera. Runway-relative navigation during final approach is a special case in which robust and continuous detection of the runway is required. This paper presents a robust threshold marker detection method for monocular cameras and introduces an on-board real-time implementation with flight test results. Results with narrow and wide field-of-view optics are compared, and the image processing approach is also evaluated on image data captured by a different on-board system. The purely optical approach of this paper increases sensor redundancy because, unlike most robust runway detectors, it does not require input from an inertial sensor.
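As a loose illustration of detecting painted threshold markings by binarization (our toy sketch under stated assumptions, not the on-board implementation; the file name is hypothetical):

```python
import cv2

# Hedged sketch: binarise the image, extract contours, and keep elongated
# bright blobs as candidate runway threshold stripes.
img = cv2.imread("approach_frame.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

candidates = []
for c in contours:
    if cv2.contourArea(c) < 50:
        continue                              # reject speckle
    (cx, cy), (w, h), angle = cv2.minAreaRect(c)
    elongation = max(w, h) / (min(w, h) + 1e-6)
    if elongation > 3.0:                      # threshold stripes are elongated
        candidates.append(((cx, cy), (w, h), angle))
print(len(candidates), "marker candidates")
```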
7. Embedded Computation Architectures for Autonomy in Unmanned Aircraft Systems (UAS). Sensors 2021; 21(4):1115. PMID: 33562676; PMCID: PMC7915191; DOI: 10.3390/s21041115
Abstract
This paper addresses the challenge of the embedded computing resources required by future autonomous Unmanned Aircraft Systems (UAS). Based on an analysis of the onboard functions that will lead to higher levels of autonomy, we examine the most common UAS tasks and first propose a classification along categories such as flight, navigation, safety, and mission, and along executing entities such as human, offline machine, and embedded system. We then analyse how a given combination of tasks can lead to higher levels of autonomy by defining an autonomy level. We link UAS applications, the tasks required by those applications, the autonomy level, and the implications on the computing resources needed to achieve that autonomy level, and we provide insights on how to define an autonomy level for a given application based on a number of tasks. Our study relies on state-of-the-art hardware and software implementations of the most common tasks currently used by UAS, as well as tasks expected from the nature of their future missions. We conclude that current computing architectures are unlikely to meet the autonomy requirements of future UAS. Our proposed approach is based on dynamically reconfigurable hardware, which offers benefits in computational performance and energy usage. We believe that UAS designers must now treat the embedded system as a central piece of the overall system.
8. Kangunde V, Jamisola RS, Theophilus EK. A review on drones controlled in real-time. Int J Dyn Control 2021; 9:1832-1846. PMID: 33425650; PMCID: PMC7785038; DOI: 10.1007/s40435-020-00737-5
Abstract
This paper presents a literature review on drones, or unmanned aerial vehicles, that are controlled in real time. Real-time control systems provide a more deterministic response, such that tasks are guaranteed to be completed within a specified time. This characteristic is highly desirable for drones, which are now required to perform increasingly sophisticated tasks. The reviewed materials were chosen to highlight drones controlled in real time and to include technologies used in different drone applications. Progress has been made in the development of highly maneuverable drones for applications such as monitoring, aerial mapping, military combat, and agriculture. The control of such highly maneuverable vehicles presents challenges such as real-time response, workload management, and complex control. This paper discusses the real-time aspects of drone control as well as possible implementations of real-time flight control systems to enhance drone performance.
9. Monocular Visual SLAM Based on a Cooperative UAV-Target System. Sensors 2020; 20(12):3531. PMID: 32580347; PMCID: PMC7378774; DOI: 10.3390/s20123531
Abstract
To achieve autonomy in applications involving Unmanned Aerial Vehicles (UAVs), the capacity for self-localization and perception of the operational environment is a fundamental requirement. GPS is the typical solution for determining the position of a UAV operating in outdoor, open environments; however, it is not reliable in other kinds of environments, such as cluttered or indoor ones. In such scenarios, a good alternative is monocular SLAM (Simultaneous Localization and Mapping): a monocular SLAM system allows a UAV to operate in an a priori unknown environment, using an onboard camera to build a map of its surroundings while simultaneously localizing itself with respect to that map. Given the problem of an aerial robot that must follow a free-moving cooperative target in a GPS-denied environment, this work presents a monocular SLAM approach for cooperative UAV-target systems that addresses the state estimation of (i) the UAV position and velocity, (ii) the target position and velocity, and (iii) the landmark positions (the map). The proposed monocular SLAM system incorporates altitude measurements obtained from an altimeter, and an observability analysis is carried out to show that these measurements improve the observability properties of the system. Furthermore, a novel technique is proposed to estimate the approximate depth of new visual landmarks by taking advantage of the cooperative target. Additionally, a control system is proposed for maintaining a stable flight formation of the UAV with respect to the target; the stability of the control laws is proved using Lyapunov theory. Experimental results obtained from real data, as well as results obtained from computer simulations, show that the proposed scheme performs well.
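The altimeter fusion step can be sketched as a scalar EKF update (our simplification under a hypothetical state layout where index 2 holds altitude; not the paper's exact filter):

```python
import numpy as np

def ekf_altitude_update(x, P, z_alt, var_alt, iz=2):
    """Hedged sketch: fuse a scalar altimeter reading into an EKF whose
    state vector stores UAV altitude at index iz (hypothetical layout).
    Measurement model: z_alt = x[iz] + noise with variance var_alt."""
    H = np.zeros((1, x.size)); H[0, iz] = 1.0
    y = z_alt - x[iz]                       # innovation
    S = (H @ P @ H.T)[0, 0] + var_alt       # innovation variance (scalar)
    K = (P @ H.T) / S                       # Kalman gain, shape (n, 1)
    x = x + (K * y).ravel()
    P = (np.eye(x.size) - K @ H) @ P
    return x, P
```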
10. Computer Vision in Autonomous Unmanned Aerial Vehicles: A Systematic Mapping Study. Appl Sci 2019; 9(15):3196. DOI: 10.3390/app9153196
Abstract
Personal assistant robots provide novel technological solutions for monitoring people's activities and helping them in their daily lives. In this sense, unmanned aerial vehicles (UAVs) can also bring forward a present and future model of assistant robot. To develop aerial assistants, it is necessary to address the issue of autonomous navigation based on visual cues; indeed, navigating autonomously is still a challenge in which computer vision technologies tend to play an outstanding role. Thus, the design of vision systems and algorithms for autonomous UAV navigation and flight control has become a prominent research field in recent years. In this paper, a systematic mapping study is carried out to obtain a general view of this subject. The study provides an extensive analysis of papers that address computer vision with regard to the following autonomous UAV vision-based tasks: (1) navigation, (2) control, (3) tracking or guidance, and (4) sense-and-avoid. The works considered in the mapping study, a total of 144 papers from an initial set of 2081, have been classified under these four categories. Moreover, the type of UAV, the features of the vision systems employed, and the validation procedures are also analyzed. The results make it possible to draw conclusions about the research focuses, which UAV platforms are mostly used in each category, which vision systems are most frequently employed, and which types of tests are usually performed to validate the proposed solutions. The results of this systematic mapping study demonstrate the scientific community's growing interest in the development of vision-based solutions for autonomous UAVs, and they will make it possible to study the feasibility and characteristics of future UAVs taking the role of personal assistants.
11. Visual-Based SLAM Configurations for Cooperative Multi-UAV Systems with a Lead Agent: An Observability-Based Approach. Sensors 2018; 18(12):4243. PMID: 30513949; PMCID: PMC6308766; DOI: 10.3390/s18124243
Abstract
In this work, the problem of cooperative visual-based SLAM for the class of multi-UAV systems that integrate a lead agent is addressed. In such systems, a team of aerial robots flying in formation must follow a dynamic lead agent, which can be another aerial robot, a vehicle, or even a human. A fundamental problem that must be addressed is the estimation of the states of the aerial robots as well as the state of the lead agent. In this work, a cooperative visual-based SLAM approach is studied to solve this problem: three different system configurations are proposed and investigated by means of an intensive nonlinear observability analysis. In addition, a high-level control scheme is proposed that allows the formation of UAVs to be controlled with respect to the lead agent. Several theoretical results are obtained, together with an extensive set of computer simulations, which are presented to numerically validate the proposal and to show that it can perform well under different circumstances (e.g., GPS-challenging environments). That is, the proposed method is able to operate robustly under many conditions, providing a good position estimation of the aerial vehicles and the lead agent.
12. Trujillo JC, Munguia R, Guerra E, Grau A. Cooperative Monocular-Based SLAM for Multi-UAV Systems in GPS-Denied Environments. Sensors 2018; 18(5):1351. PMID: 29701722; PMCID: PMC5981868; DOI: 10.3390/s18051351
Abstract
This work presents a cooperative monocular-based SLAM approach for multi-UAV systems that can operate in GPS-denied environments. The main contribution is to show that, using visual information obtained from monocular cameras mounted onboard aerial vehicles flying in formation, the observability properties of the whole system are improved. This improvement is especially noticeable when compared with other related visual SLAM configurations. To improve the observability properties, measurements of the relative distance between the UAVs, also obtained from visual information, are included in the system. The proposed approach is theoretically validated by means of a nonlinear observability analysis, and an extensive set of computer simulations is presented to validate it numerically. The simulation results show that the proposed system is able to provide a good position and orientation estimate of the aerial vehicles flying in formation.
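In our own notation (assuming the UAV positions p_i and p_j are part of the state vector), the extra relative-distance measurement and its Jacobian take the form:

```latex
% Relative-distance measurement between UAV positions p_i and p_j:
h_{ij}(\mathbf{x}) = \lVert \mathbf{p}_i - \mathbf{p}_j \rVert,
\qquad
\frac{\partial h_{ij}}{\partial \mathbf{p}_i}
  = \frac{(\mathbf{p}_i - \mathbf{p}_j)^{\top}}{\lVert \mathbf{p}_i - \mathbf{p}_j \rVert}
  = -\,\frac{\partial h_{ij}}{\partial \mathbf{p}_j}
```

Appending rows of this form to the observation Jacobian couples the position errors of the two vehicles, which is what enlarges the observable subspace in the analysis.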
Affiliation(s)
- Juan-Carlos Trujillo (Department of Computer Science, CUCEI, University of Guadalajara, Guadalajara 44430, Mexico)
- Rodrigo Munguia (Department of Computer Science, CUCEI, University of Guadalajara, Guadalajara 44430, Mexico)
- Edmundo Guerra (Department of Automatic Control, Technical University of Catalonia UPC, 08034 Barcelona, Spain)
- Antoni Grau (Department of Automatic Control, Technical University of Catalonia UPC, 08034 Barcelona, Spain)
13. Monocular SLAM System for MAVs Aided with Altitude and Range Measurements: a GPS-free Approach. J Intell Robot Syst 2018. DOI: 10.1007/s10846-018-0775-y
14. Recchiuto CT, Sgorbissa A. Post-disaster assessment with unmanned aerial vehicles: A survey on practical implementations and research approaches. J Field Robot 2017. DOI: 10.1002/rob.21756
15. Cheng L, Wu CD, Zhang YZ, Wang Y. An Indoor Localization Strategy for a Mini-UAV in the Presence of Obstacles. Int J Adv Robot Syst 2017. DOI: 10.5772/52754
Abstract
In this paper, we propose a novel approach to mini-UAV localization in a wireless sensor network. We first employ an environment-adaptive RSS parameter estimation method to estimate the parameters of the range estimation model. In a complicated indoor environment, however, the direct path from the target to a beacon may be blocked by obstacles, so the proposed method employs a sequential probability ratio test to identify whether a measurement contains non-line-of-sight (NLOS) errors, making it tolerant to parameter fluctuations. Finally, a particle swarm optimization-based method is proposed to solve the established objective function. Simulation results show that the proposed method achieves relatively high localization accuracy. In addition, performance analyses carried out for a realistic indoor environment show that the proposed method preserves the same localization accuracy.
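A minimal sketch of the RSS ranging model behind such methods (our simplification using the standard log-distance path-loss model, not the paper's estimator; the calibration values are made up):

```python
import numpy as np

def fit_pathloss(d, rss):
    """Fit the log-distance model RSS(d) = A - 10*n*log10(d) by least
    squares, where A is the RSS at 1 m and n the path-loss exponent.
    Hedged sketch of 'environment-adaptive' parameter estimation."""
    X = np.column_stack([np.ones_like(d), -10.0 * np.log10(d)])
    (A, n), *_ = np.linalg.lstsq(X, rss, rcond=None)
    return A, n

def rss_to_range(rss, A, n):
    """Invert the fitted model to turn an RSS reading into a distance."""
    return 10.0 ** ((A - rss) / (10.0 * n))

# Calibrate on beacon measurements at known distances, then range.
d_cal = np.array([1.0, 2.0, 4.0, 8.0])
rss_cal = np.array([-40.0, -46.5, -52.8, -59.1])   # made-up values
A, n = fit_pathloss(d_cal, rss_cal)
print(rss_to_range(-50.0, A, n))
```

An NLOS-corrupted reading biases the fitted model, which is why the paper gates measurements with a sequential probability ratio test before using them.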
Affiliation(s)
- Long Cheng (College of Information Science and Engineering, Northeastern University, Shenyang, China)
- Cheng-Dong Wu (College of Information Science and Engineering, Northeastern University, Shenyang, China)
- Yun-Zhou Zhang (College of Information Science and Engineering, Northeastern University, Shenyang, China)
- Yan Wang (College of Information Science and Engineering, Northeastern University, Shenyang, China)
16. Kanellakis C, Nikolakopoulos G. Survey on Computer Vision for UAVs: Current Developments and Trends. J Intell Robot Syst 2017. DOI: 10.1007/s10846-017-0483-z
17. Munguia R, Urzua S, Grau A. Delayed Monocular SLAM Approach Applied to Unmanned Aerial Vehicles. PLoS One 2016; 11:e0167197. PMID: 28033385; PMCID: PMC5198979; DOI: 10.1371/journal.pone.0167197
Abstract
In recent years, many researchers have addressed the issue of making Unmanned Aerial Vehicles (UAVs) more and more autonomous. In this context, the state estimation of the vehicle position is a fundamental necessity for any application involving autonomy. However, the problem of position estimation cannot be solved in some scenarios, even when a GPS signal is available, for instance, in applications requiring precision manoeuvres in a complex environment. Therefore, additional sensory information should be integrated into the system in order to improve accuracy and robustness. In this work, a novel vision-based simultaneous localization and mapping (SLAM) method with application to unmanned aerial vehicles is proposed. One contribution of this work is a novel technique for estimating feature depth based on stochastic triangulation. In the proposed method, the camera is mounted on a servo-controlled gimbal that counteracts the changes in attitude of the quadcopter; this simplifies the overall problem, focuses it on the position estimation of the aerial vehicle, and eases the tracking of visual features thanks to the stabilized video. Another contribution of this work is to demonstrate that integrating very noisy GPS measurements into the system for an initial short period of time is enough to initialize the metric scale. The performance of the proposed method is validated by means of experiments with real data carried out in unstructured outdoor environments. A comparative study shows that, when compared with related methods, the proposed approach performs better in terms of accuracy and computational time.
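A hedged sketch of the delayed-initialization idea (our simplification of stochastic triangulation, not the paper's algorithm): hold a candidate feature until the parallax between its viewing rays is sufficient, then triangulate by the midpoint of the common perpendicular.

```python
import numpy as np

def try_init_depth(c1, r1, c2, r2, min_parallax_deg=2.0):
    """Hedged sketch: c1, c2 are camera centres and r1, r2 unit viewing
    rays in the world frame. Returns None while parallax is too small
    (feature stays delayed), otherwise a triangulated 3D estimate."""
    cosang = np.clip(r1 @ r2, -1.0, 1.0)
    if np.degrees(np.arccos(cosang)) < min_parallax_deg:
        return None                       # not enough baseline yet
    # Solve [r1 -r2] [t1 t2]^T = c2 - c1 in least squares.
    A = np.column_stack([r1, -r2])
    t, *_ = np.linalg.lstsq(A, c2 - c1, rcond=None)
    p1, p2 = c1 + t[0] * r1, c2 + t[1] * r2
    return 0.5 * (p1 + p2)                # midpoint of closest approach
```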
Affiliation(s)
- Rodrigo Munguia (Department of Computer Science, CUCEI, University of Guadalajara, Guadalajara, México)
- Sarquis Urzua (Department of Computer Science, CUCEI, University of Guadalajara, Guadalajara, México)
- Antoni Grau (Automatic Control Department, Technical University of Catalonia, 08034 Barcelona, Spain)
18. Ten Harmsel AJ, Olson IJ, Atkins EM. Emergency Flight Planning for an Energy-Constrained Multicopter. J Intell Robot Syst 2016. DOI: 10.1007/s10846-016-0370-z
19. Vision-Based SLAM System for Unmanned Aerial Vehicles. Sensors 2016; 16(3):372. PMID: 26999131; PMCID: PMC4813947; DOI: 10.3390/s16030372
Abstract
This paper describes a vision-based simultaneous localization and mapping system for Unmanned Aerial Vehicles (UAVs). The main contribution of this work is a novel estimator, relying on an Extended Kalman Filter, designed to fuse the measurements obtained from (i) an orientation sensor (AHRS), (ii) a position sensor (GPS), and (iii) a monocular camera. The estimated state consists of the full state of the vehicle (position and orientation and their first derivatives) as well as the locations of the landmarks observed by the camera. The position sensor is used only during the initialization period in order to recover the metric scale of the world; afterwards, the estimated map of landmarks is used to perform fully vision-based navigation when the position sensor is not available. Experimental results obtained with simulations and real data show the benefits of including camera measurements: the estimate of the vehicle trajectory is considerably improved compared with estimates obtained using only the position sensor, whose measurements are typically low-rate and highly noisy.
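The scale-initialization idea can be sketched as follows (our simplification, not the paper's estimator): compare cumulative path lengths of the up-to-scale visual trajectory and the GPS track over the initialization window.

```python
import numpy as np

def visual_scale_from_gps(p_vis, p_gps):
    """Hedged sketch: recover the metric scale of an up-to-scale visual
    trajectory by comparing cumulative path lengths against GPS over the
    initialization window. p_vis, p_gps: Nx3 time-synchronized positions."""
    seg_vis = np.linalg.norm(np.diff(p_vis, axis=0), axis=1)
    seg_gps = np.linalg.norm(np.diff(p_gps, axis=0), axis=1)
    return seg_gps.sum() / seg_vis.sum()    # metres per visual unit
```

Summing over many segments averages out the GPS noise, which is consistent with the abstract's point that a short window of noisy fixes suffices.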
20.
Abstract
Summary: In this work, we investigate a quaternion-based formulation of 3D Simultaneous Localization and Mapping with an Extended Kalman Filter (EKF-SLAM) using relative pose measurements. We introduce a discrete-time derivation that avoids the normalization problem that often arises when using unit quaternions in Kalman filtering, and we study its observability properties. The consistency of the estimation errors with the corresponding covariance matrices is also evaluated. The approach is further tested on real data from the Rawseeds dataset and is applied within a delayed-state EKF architecture for estimating a dense 3D map of an unknown environment. The contribution is motivated by the possibility of abstracting multi-sensorial information in terms of relative pose measurements and by its straightforward extension to the multi-robot case.
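For context, in our notation (not the paper's derivation), the normalization problem and the usual multiplicative workaround found in the literature look like this:

```latex
% Additive EKF update breaks the unit norm and forces renormalization:
\hat{\mathbf{q}} \leftarrow
  \frac{\hat{\mathbf{q}} + \delta\mathbf{q}}{\lVert \hat{\mathbf{q}} + \delta\mathbf{q} \rVert}
% Multiplicative (error-state) update composes a small rotation instead:
\hat{\mathbf{q}} \leftarrow \hat{\mathbf{q}} \otimes
  \begin{bmatrix} 1 \\ \tfrac{1}{2}\,\delta\boldsymbol{\theta} \end{bmatrix}
```

The multiplicative form keeps the quaternion unit-norm to first order by estimating the 3-vector rotation error instead of a raw 4-vector increment; the paper's discrete-time derivation is an alternative route to the same end.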
21.
Abstract
Summary: A robot mapping procedure using a modified speeded-up robust feature (SURF) is proposed for building persistent maps with visual landmarks in robot simultaneous localization and mapping (SLAM). SURFs are scale-invariant features that automatically recover the scale and orientation of image features in different scenes. However, the SURF method was not originally designed for applications in dynamic environments, and the repeatability of the detected SURFs is reduced by dynamic effects. This study investigates and modifies SURF algorithms to improve their robustness in representing visual landmarks in robot SLAM systems. Several modifications are proposed, including the orientation representation of features, the vector dimension of the feature descriptor, and the number of features detected in an image. The concept of sparse representation is also used to describe the environmental map and to reduce the computational complexity of state estimation with an extended Kalman filter (EKF). Effective procedures for data association and map management for SURFs in SLAM are also designed to improve the accuracy of robot state estimation. Experimental work was performed on an actual system with binocular vision sensors to validate the feasibility and effectiveness of the proposed algorithms, including the evaluation of state estimation using EKF SLAM and the implementation of indoor SLAM. In the experiments, the performance of the modified SURF algorithms was compared with the original algorithms; the results confirm that the modified SURF provides better repeatability and robustness for representing landmarks in visual SLAM systems.
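The three aspects the summary mentions map loosely onto the parameters of a baseline SURF implementation; a hedged OpenCV sketch (baseline SURF, not the paper's modified version; requires opencv-contrib-python with non-free modules enabled, and the file name is hypothetical):

```python
import cv2  # SURF lives in the contrib 'xfeatures2d' module (non-free)

# Hedged sketch of the knobs involved: 'upright' controls whether feature
# orientation is estimated, 'extended' switches the descriptor between
# 64 and 128 dimensions, and 'hessianThreshold' controls how many
# features are detected in an image.
surf = cv2.xfeatures2d.SURF_create(
    hessianThreshold=400,   # raise to detect fewer, stronger features
    extended=False,         # False: 64-dim descriptor, True: 128-dim
    upright=False,          # True skips orientation assignment (U-SURF)
)
img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
keypoints, descriptors = surf.detectAndCompute(img, None)
print(len(keypoints), descriptors.shape)
```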
22. Weiss S, Achtelik MW, Lynen S, Achtelik MC, Kneip L, Chli M, Siegwart R. Monocular Vision for Long-term Micro Aerial Vehicle State Estimation: A Compendium. J Field Robot 2013. DOI: 10.1002/rob.21466
23. Munguía R, Castillo-Toledo B, Grau A. A robust approach for a filter-based monocular simultaneous localization and mapping (SLAM) system. Sensors 2013; 13(7):8501-8522. PMID: 23823972; PMCID: PMC3758607; DOI: 10.3390/s130708501
Abstract
Simultaneous localization and mapping (SLAM) is an important problem to solve in robotics in order to build truly autonomous mobile robots. This work presents a novel method for implementing a SLAM system based on a single camera sensor. SLAM with a single camera, or monocular SLAM, is probably one of the most complex SLAM variants: a single camera moving freely through its environment represents the sole sensory input to the system. The choice of sensor has a large impact on the algorithm used for SLAM. Cameras are used frequently because they provide a lot of information and are well suited to embedded systems: they are light, cheap, and power-saving. Nevertheless, unlike range sensors, which provide range and angular information, a camera is a projective sensor providing only angular measurements of image features, so depth information (range) cannot be obtained in a single step. Special techniques for feature initialization are therefore needed to enable the use of angular sensors (such as cameras) in SLAM systems. The main contribution of this work is a novel and robust scheme for incorporating and measuring visual features in filter-based monocular SLAM systems. The proposed method is based on a two-step technique intended to exploit all the information available in angular measurements. Unlike previous schemes, the parameter values used by the initialization technique are derived directly from the sensor characteristics, thus simplifying the tuning of the system. The experimental results show that the proposed method surpasses the performance of previous schemes.
Affiliation(s)
- Rodrigo Munguía (Department of Computer Science, CUCEI, University of Guadalajara, Av. Revolución 1500, Módulo "O", Col. Olimpica, Guadalajara 44830, Jalisco, Mexico)
- Bernardino Castillo-Toledo (Center for Research and Advanced Studies, CINVESTAV, Unidad Guadalajara, Av. del Bosque 1145, Col. El Bajío, Zapopan 45015, Jalisco, Mexico)
- Antoni Grau (Department of Automatic Control, Technical University of Catalonia, C. Pau Gargallo 5, Campus Diagonal Sud, Edifici U, Barcelona 08028, Spain)
24. Lamberti F, Sanna A, Paravati G, Montuschi P, Gatteschi V, Demartini C. Mixed Marker-Based/Marker-Less Visual Odometry System for Mobile Robots. Int J Adv Robot Syst 2013. DOI: 10.5772/56577
Abstract
When moving in generic indoor environments, robotic platforms generally rely solely on information provided by onboard sensors to determine their position and orientation. However, the lack of absolute references often introduces severe drift into the computed estimates, making autonomous operations hard to accomplish. This paper proposes a solution that alleviates these issues by combining two vision-based pose estimation techniques working on relative and absolute coordinate systems, respectively. In particular, unknown ground features in the images captured by the vertical camera of a mobile platform are processed by a vision-based odometry algorithm capable of estimating relative frame-to-frame motion. Errors accumulated in this step are then corrected using artificial markers placed at known positions in the environment: the markers are framed from time to time, which keeps the drift bounded while additionally providing the robot with the navigation commands needed for autonomous flight. The accuracy and robustness of the designed technique are demonstrated on an off-the-shelf quadrotor via extensive experimental tests.
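A minimal sketch of the mixed relative/absolute scheme (our structure, not the paper's system): accumulate drifting odometry increments, and snap the pose to an absolute estimate whenever a marker of known pose is framed.

```python
import numpy as np

class MixedOdometry:
    """Hedged sketch: poses are 3x3 homogeneous matrices in the plane."""

    def __init__(self, pose0=None):
        self.T = np.eye(3) if pose0 is None else pose0

    def vo_step(self, dT):
        # Integrate a relative frame-to-frame motion estimate (drifts).
        self.T = self.T @ dT

    def marker_fix(self, T_marker_world, T_marker_cam):
        # Absolute correction: camera pose = marker pose in the world
        # composed with the inverse of the measured marker-in-camera
        # transform. This resets the accumulated drift.
        self.T = T_marker_world @ np.linalg.inv(T_marker_cam)
```

The design point is that the relative chain runs at frame rate while the absolute fix only needs to arrive often enough to keep the drift bounded.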
Affiliation(s)
- Fabrizio Lamberti (Politecnico di Torino, Dipartimento di Automatica e Informatica, Torino, Italy)
- Andrea Sanna (Politecnico di Torino, Dipartimento di Automatica e Informatica, Torino, Italy)
- Gianluca Paravati (Politecnico di Torino, Dipartimento di Automatica e Informatica, Torino, Italy)
- Paolo Montuschi (Politecnico di Torino, Dipartimento di Automatica e Informatica, Torino, Italy)
- Valentina Gatteschi (Politecnico di Torino, Dipartimento di Automatica e Informatica, Torino, Italy)
- Claudio Demartini (Politecnico di Torino, Dipartimento di Automatica e Informatica, Torino, Italy)
25. Shi Y, Ji S, Shi Z, Duan Y, Shibasaki R. GPS-supported visual SLAM with a rigorous sensor model for a panoramic camera in outdoor environments. Sensors 2012; 13(1):119-136. PMID: 23344377; PMCID: PMC3574668; DOI: 10.3390/s130100119
Abstract
Accurate localization of moving sensors is essential for many fields, such as robot navigation and urban mapping. In this paper, we present a framework for GPS-supported visual Simultaneous Localization and Mapping with Bundle Adjustment (BA-SLAM) using a rigorous sensor model for a panoramic camera. The rigorous model does not introduce systematic errors, and thus represents an improvement over the widely used ideal sensor model. The proposed SLAM does not require additional constraints, such as loop closing, or additional sensors, such as expensive inertial measurement units. In this paper, the problems of the ideal sensor model for a panoramic camera are analysed and a rigorous sensor model is established; GPS data are then introduced for global optimization and georeferencing. Using the rigorous sensor model with the geometric observation equations of BA, a GPS-supported BA-SLAM approach that combines ray observations and GPS observations is established. Finally, our method is applied to a set of vehicle-borne panoramic images captured in a campus environment, and several ground control points (GCPs) are used to check the localization accuracy. The results demonstrate that our method can reach an accuracy of several centimetres.
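The flavour of such a GPS-supported bundle-adjustment objective, in our own notation (camera poses C_t, landmarks X_k, panoramic projection π, robust kernel ρ; not the paper's exact formulation):

```latex
% Combined objective: ray (reprojection) residuals plus GPS residuals.
\min_{\{\mathbf{C}_t\},\,\{\mathbf{X}_k\}}
\sum_{t,k} \rho\!\left( \bigl\lVert \mathbf{u}_{tk} - \pi(\mathbf{C}_t, \mathbf{X}_k) \bigr\rVert^{2}_{\Sigma_{\mathrm{ray}}} \right)
\;+\; \sum_{t} \bigl\lVert \mathbf{p}^{\mathrm{GPS}}_{t} - \mathbf{p}(\mathbf{C}_t) \bigr\rVert^{2}_{\Sigma_{\mathrm{GPS}}}
```

The first term is the ray-observation error under the rigorous panoramic model; the second anchors the trajectory to the GPS frame, which is what provides georeferencing without loop closure.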
Affiliation(s)
- Yun Shi (Center for Spatial Information Science (CSIS), University of Tokyo, Chiba 277-8568, Japan)
- Shunping Ji (School of Remote Sensing and Information Engineering, Wuhan University, Wuhan 430079, China; corresponding author)
- Zhongchao Shi (Department of Environmental and Information Studies, Tokyo City University, Yokohama 222-0033, Japan)
- Yulin Duan (Center for Spatial Information Science (CSIS), University of Tokyo, Chiba 277-8568, Japan)
- Ryosuke Shibasaki (Center for Spatial Information Science (CSIS), University of Tokyo, Chiba 277-8568, Japan)
27.
Affiliation(s)
- Cedric Cocaud (Department of Electrical Engineering, University of Tokyo, ISAS campus, 3-1-1 Yoshinodai, Chuo-ku, Sagamihara, Kanagawa 252-5210, Japan)
- Takashi Kubota (Institute of Space and Astronautical Science (ISAS-JAXA), ISAS Campus, 3-1-1 Yoshinodai, Chuo-ku, Sagamihara, Kanagawa 252-5210, Japan)
28. Eynard D, Vasseur P, Demonceaux C, Frémont V. Real time UAV altitude, attitude and motion estimation from hybrid stereovision. Auton Robots 2012. DOI: 10.1007/s10514-012-9285-0
29. Kendoul F. Survey of advances in guidance, navigation, and control of unmanned rotorcraft systems. J Field Robot 2012. DOI: 10.1002/rob.20414
30. Cocaud C, Kubota T. Development of an Intelligent Simulator with SLAM Functions for Visual Autonomous Landing on Small Celestial Bodies. J Adv Comput Intell Intell Inform 2011. DOI: 10.20965/jaciii.2011.p1167
Abstract
As space agencies are currently looking at Near-Earth Asteroids as the next step on their exploration roadmap, high-precision autonomous landing control schemes will be required for upcoming missions. In this paper, an intelligent simulator is proposed to reproduce all of the visual and dynamic aspects required to test an autonomous Simultaneous Localization and Mapping (SLAM) system. The proposed simulator provides position and attitude information to a spacecraft during its approach, descent, and landing phase toward the surface of an asteroid or another small celestial body. Because the SLAM system makes use of navigation cameras and a range sensor moving with the spacecraft as it approaches the surface, the simulator reproduces a fully integrated 3D environment using computer graphics technology that mimics the noise, image detail, and real-time performance of the navigation cameras and range sensor. This paper describes the architecture and capability of the developed simulator and the SLAM system for which it is designed. The simulator is evaluated using the specifications of the onboard sensors of the Hayabusa spacecraft, sent by JAXA/ISAS to the asteroid Itokawa in 2003.
31. Weiss S, Scaramuzza D, Siegwart R. Monocular-SLAM-based navigation for autonomous micro helicopters in GPS-denied environments. J Field Robot 2011. DOI: 10.1002/rob.20412
32. Suzuki T, Amano Y, Hashizume T, Suzuki S. 3D Terrain Reconstruction by Small Unmanned Aerial Vehicle Using SIFT-Based Monocular SLAM. J Robot Mechatron 2011. DOI: 10.20965/jrm.2011.p0292
Abstract
This paper describes a Simultaneous Localization And Mapping (SLAM) algorithm using a monocular camera for a small Unmanned Aerial Vehicle (UAV). Small UAVs have attracted attention as an effective means of collecting aerial information; however, practical applications are still few because of the small payload available for 3D measurement. We propose an extended-Kalman-filter SLAM to estimate UAV position and attitude and to construct 3D terrain maps using a small monocular camera, with 3D measurement based on triangulation of Scale-Invariant Feature Transform (SIFT) features extracted from captured images. Field-experiment results show that our proposal effectively estimates the position and attitude of the UAV and constructs the 3D terrain map.
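A hedged sketch of the SIFT-and-triangulation step (our OpenCV pipeline, not the paper's full EKF-SLAM; file names and projection matrices are placeholders that would come from the UAV pose estimates):

```python
import cv2
import numpy as np

# Match SIFT features between two views, then triangulate terrain points
# with known 3x4 projection matrices.
sift = cv2.SIFT_create()
img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]          # Lowe's ratio test

pts1 = np.float32([kp1[m.queryIdx].pt for m in good]).T   # 2xN
pts2 = np.float32([kp2[m.trainIdx].pt for m in good]).T

# Placeholder projection matrices; in practice they come from the
# estimated camera poses at the two exposure times.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X = cv2.triangulatePoints(P1, P2, pts1, pts2)       # 4xN homogeneous
terrain = (X[:3] / X[3]).T                          # Nx3 terrain points
print(terrain.shape)
```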