1. Keshavan J, Humbert JS. An Analytically Stable Structure and Motion Observer Based on Monocular Vision. J Intell Robot Syst 2017. [DOI: 10.1007/s10846-017-0470-4]

2. Yuan X, Martínez-Ortega JF, Sánchez Fernández JA, Eckert M. AEKF-SLAM: A New Algorithm for Robotic Underwater Navigation. Sensors 2017; 17:1174. [PMID: 28531135] [PMCID: PMC5470919] [DOI: 10.3390/s17051174]
Abstract
In this work, we focus on key topics related to underwater Simultaneous Localization and Mapping (SLAM) applications. Moreover, a detailed review of major studies in the literature and our proposed solutions for addressing the problem are presented. The main goal of this paper is to enhance the accuracy and robustness of SLAM-based navigation for underwater robotics at low computational cost. Therefore, we present a new method called AEKF-SLAM that employs an Augmented Extended Kalman Filter (AEKF)-based SLAM algorithm. The AEKF-based SLAM approach stores the robot poses and map landmarks in a single state vector, while estimating the state parameters via a recursive and iterative estimation-update process. Here, the prediction and update stages (which also exist in the conventional EKF) are complemented by a newly proposed augmentation stage. Applied to underwater robot navigation, AEKF-SLAM has been compared with the classic and popular FastSLAM 2.0 algorithm. In the dense loop mapping and line mapping experiments, it shows much better performance in map management with respect to landmark addition and removal, which avoids the long-term accumulation of errors and clutter in the created map. Additionally, the underwater robot achieves more precise and efficient self-localization and mapping of the surrounding landmarks with much lower processing times. Altogether, the presented AEKF-SLAM method achieves reliable map revisiting and consistent map upgrading on loop closure.
Affiliation(s)
- Xin Yuan
- Centro de Investigación en Tecnologías Software y Sistemas para la Sostenibilidad (CITSEM), Campus Sur, Universidad Politécnica de Madrid (UPM), Madrid 28031, Spain.
- José-Fernán Martínez-Ortega
- Centro de Investigación en Tecnologías Software y Sistemas para la Sostenibilidad (CITSEM), Campus Sur, Universidad Politécnica de Madrid (UPM), Madrid 28031, Spain.
- José Antonio Sánchez Fernández
- Centro de Investigación en Tecnologías Software y Sistemas para la Sostenibilidad (CITSEM), Campus Sur, Universidad Politécnica de Madrid (UPM), Madrid 28031, Spain.
- Martina Eckert
- Centro de Investigación en Tecnologías Software y Sistemas para la Sostenibilidad (CITSEM), Campus Sur, Universidad Politécnica de Madrid (UPM), Madrid 28031, Spain.
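The augmentation stage named in entry 2's abstract extends the SLAM state vector and covariance whenever a new landmark is first observed. A minimal sketch of such a stage for a 2D range-bearing setup is given below; the function names and noise values are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def augment_state(x, P, z, R, inv_obs, jac_pose, jac_meas):
    """Append a newly observed landmark to the SLAM state.

    x, P     : current state mean and covariance (pose first, then landmarks)
    z, R     : raw measurement of the new landmark and its noise covariance
    inv_obs  : g(pose, z) -> landmark position in the world frame
    jac_pose, jac_meas : Jacobians of g w.r.t. the pose and w.r.t. z
    """
    pose = x[:3]
    lm = inv_obs(pose, z)
    Gx = jac_pose(pose, z)                    # 2x3
    Gz = jac_meas(pose, z)                    # 2x2

    n = x.size
    x_aug = np.concatenate([x, lm])
    P_aug = np.zeros((n + 2, n + 2))
    P_aug[:n, :n] = P
    P_aug[n:, :n] = Gx @ P[:3, :n]            # cross-covariance with old states
    P_aug[:n, n:] = P_aug[n:, :n].T
    P_aug[n:, n:] = Gx @ P[:3, :3] @ Gx.T + Gz @ R @ Gz.T
    return x_aug, P_aug

# Range-bearing inverse observation model and its Jacobians.
def inv_obs(pose, z):
    r, b = z
    a = pose[2] + b
    return np.array([pose[0] + r * np.cos(a), pose[1] + r * np.sin(a)])

def jac_pose(pose, z):
    r, b = z
    a = pose[2] + b
    return np.array([[1.0, 0.0, -r * np.sin(a)],
                     [0.0, 1.0,  r * np.cos(a)]])

def jac_meas(pose, z):
    r, b = z
    a = pose[2] + b
    return np.array([[np.cos(a), -r * np.sin(a)],
                     [np.sin(a),  r * np.cos(a)]])

x = np.array([0.0, 0.0, 0.0])        # robot at the origin, facing +x
P = np.eye(3) * 0.01
z = np.array([2.0, 0.0])             # landmark seen 2 m dead ahead
R = np.diag([0.05, 0.01])
x_aug, P_aug = augment_state(x, P, z, R, inv_obs, jac_pose, jac_meas)
print(x_aug)                         # state now holds the pose plus landmark (2, 0)
```

The cross-covariance block is what lets later pose corrections propagate into landmark estimates, which is the property the abstract's map-management claims rely on.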

3. An Optical Flow-Based Solution to the Problem of Range Identification in Perspective Vision Systems. J Intell Robot Syst 2017. [DOI: 10.1007/s10846-016-0404-6]

4. Leonard JJ, Rikoski RJ, Newman PM, Bosse M. Mapping Partially Observable Features from Multiple Uncertain Vantage Points. Int J Rob Res 2016. [DOI: 10.1177/0278364902021010889]
Abstract
In this paper we present a technique for mapping partially observable features from multiple uncertain vantage points. The problem of concurrent mapping and localization (CML) is stated as follows. Starting from an initial known position, a mobile robot travels through a sequence of positions, obtaining a set of sensor measurements at each position. The goal is to process the sensor data to produce an estimate of the trajectory of the robot while concurrently building a map of the environment. In this paper, we describe a generalized framework for CML that incorporates temporal as well as spatial correlations. The representation is expanded to incorporate past vehicle positions in the state vector. Estimates of the correlations between current and previous vehicle states are explicitly maintained. This enables the consistent initialization of map features using data from multiple time steps. Updates to the map and the vehicle trajectory can also be performed in batches of data acquired from multiple vantage points. The method is illustrated with sonar data from a testing tank and via experiments with a B21 land mobile robot, demonstrating the ability to perform CML with sparse and ambiguous data.
Affiliation(s)
- John J. Leonard
- MIT Department of Ocean Engineering Cambridge, MA 02139, USA
- Paul M. Newman
- MIT Department of Ocean Engineering Cambridge, MA 02139, USA
- Michael Bosse
- MIT Department of Ocean Engineering Cambridge, MA 02139, USA

5. Vandewouw MM, Aleman DM, Jaffray DA. Robotic path-finding in inverse treatment planning for stereotactic radiosurgery with continuous dose delivery. Med Phys 2016; 43:4545. [PMID: 27487871] [DOI: 10.1118/1.4955177]
Abstract
PURPOSE: Continuous dose delivery in radiation therapy treatments has been shown to decrease total treatment time while improving dose conformity and distribution homogeneity over the conventional step-and-shoot approach. The authors develop an inverse treatment planning method for Gamma Knife® Perfexion™ that continuously delivers dose along a path in the target.
METHODS: The authors' method comprises two steps: find a path within the target, then solve a mixed integer optimization model to find the optimal collimator configurations and durations along the selected path. Robotic path-finding techniques, specifically simultaneous localization and mapping (SLAM) using an extended Kalman filter, are used to obtain a path that travels sufficiently close to selected isocentre locations. SLAM is extended in a novel way to explore a 3D, discrete environment, namely the target discretized into voxels. Further novel extensions are incorporated into the steering mechanism to account for target geometry.
RESULTS: The SLAM method was tested on seven clinical cases and compared to clinical, Hamiltonian path continuous delivery, and inverse step-and-shoot treatment plans. The SLAM approach improved dose metrics compared to the clinical plans and Hamiltonian path continuous delivery plans. Beam-on times improved over clinical plans, and had mixed performance compared to Hamiltonian path continuous plans. The SLAM method is also shown to be robust to path selection inaccuracies, isocentre selection, and dose distribution.
CONCLUSIONS: The SLAM method for continuous delivery provides decreased total treatment time and increased treatment quality compared to both clinical and inverse step-and-shoot plans, and outperforms existing path methods in treatment quality. It also accounts for uncertainty in treatment planning by accommodating inaccuracies.
Affiliation(s)
- Marlee M Vandewouw
- Department of Mechanical and Industrial Engineering, University of Toronto, Toronto, Ontario M5S 3G8, Canada
- Dionne M Aleman
- Department of Mechanical and Industrial Engineering, University of Toronto, Toronto, Ontario M5S 3G8, Canada
- David A Jaffray
- Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9, Canada

6. An Improved Otsu Threshold Segmentation Method for Underwater Simultaneous Localization and Mapping-Based Navigation. Sensors 2016; 16:1148. [PMID: 27455279] [PMCID: PMC4970190] [DOI: 10.3390/s16071148]
Abstract
The main focus of this paper is on extracting features with SOund Navigation And Ranging (SONAR) sensing for further underwater landmark-based Simultaneous Localization and Mapping (SLAM). According to the characteristics of sonar images, an improved Otsu threshold segmentation method (TSM) has been developed for feature detection. In combination with a contour detection algorithm, the foreground objects, although presenting different feature shapes, are separated much faster and more precisely than by other segmentation methods. Tests have been made with side-scan sonar (SSS) and forward-looking sonar (FLS) images in comparison with four other TSMs, namely the traditional Otsu method, the local TSM, the iterative TSM and the maximum entropy TSM. For all the sonar images presented in this work, the computational time of the improved Otsu TSM is much lower than that of the maximum entropy TSM, which achieves the highest segmentation precision among the four above-mentioned TSMs. As a result of the segmentations, the centroids of the main extracted regions have been computed to represent point landmarks which can be used for navigation, e.g., with the help of an Augmented Extended Kalman Filter (AEKF)-based SLAM algorithm. The AEKF-SLAM approach is a recursive and iterative estimation-update process which, besides a prediction and an update stage (as in the classical Extended Kalman Filter (EKF)), includes an augmentation stage. During navigation, the robot localizes the centroids of different segments of features in sonar images, detected by our improved Otsu TSM, as point landmarks. Using them with the AEKF achieves more accurate and robust estimations of the robot pose and the landmark positions than with those detected by the maximum entropy TSM. Together with the landmarks identified by the proposed segmentation algorithm, the AEKF-SLAM has achieved reliable detection of cycles in the map and consistent map update on loop closure, which is shown in simulated experiments.
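Entry 6's improved TSM builds on the classical Otsu criterion: choose the gray level that maximizes the between-class variance of the resulting foreground/background split. A plain-Otsu sketch on a synthetic "sonar-like" image is shown below; the paper's improvements and the contour-detection step are not reproduced here.

```python
import numpy as np

def otsu_threshold(image):
    """Classic Otsu: pick the gray level that maximizes the
    between-class variance of the foreground/background split."""
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                  # class-0 probability up to t
    mu = np.cumsum(prob * np.arange(256))    # cumulative first moment
    mu_total = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_total * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)         # undefined splits score zero
    return int(np.argmax(sigma_b))

# Toy "sonar image": dark background with one bright object.
rng = np.random.default_rng(0)
img = rng.integers(0, 60, size=(64, 64))
img[20:40, 20:40] = rng.integers(180, 255, size=(20, 20))
t = otsu_threshold(img.astype(np.uint8))
print(t)
```

With two well-separated intensity clusters, the returned threshold falls at the boundary between them, so `img > t` isolates the bright object whose centroid would serve as a point landmark.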

7.
Abstract
This article describes a framework for vision and motion planning for a mobile robot. The robot's task is to reach the destination in the minimum time while detecting possible routes by vision. Since visual recognition is computationally expensive and the recognition result includes uncertainty, a trade-off must be considered between the cost of visual recognition and the effect of information to be obtained by recognition. Using a probabilistic model of the uncertainty of the recognition result, vision-motion planning is formulated as a recurrence formula. With this formulation, the optimal sequence of observation points is recursively determined. A generated plan is globally optimal, because the planner minimizes the total cost. An efficient solution strategy is also described that employs a pruning method based on the lower bound of the total cost calculated by assuming perfect sensor information. Simulation results and experiments with an actual mobile robot demonstrate the feasibility of our approach.
Affiliation(s)
- Jun Miura
- Department of Mechanical Engineering for Computer-Controlled Machinery Osaka University Suita, Osaka 565, Japan
- Yoshiaki Shirai
- Department of Mechanical Engineering for Computer-Controlled Machinery Osaka University Suita, Osaka 565, Japan

8. Dellaert F, Kaess M. Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing. Int J Rob Res 2016. [DOI: 10.1177/0278364906072768]
Abstract
Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
Affiliation(s)
- Frank Dellaert
- Center for Robotics and Intelligent Machines, College of Computing, Georgia Institute of Technology, Atlanta, GA 30332-0280,
- Michael Kaess
- Center for Robotics and Intelligent Machines, College of Computing, Georgia Institute of Technology, Atlanta, GA 30332-0280,
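Entry 8's central idea, smoothing the whole trajectory and map by factorizing the measurement Jacobian instead of running an EKF, can be illustrated on a 1D linear toy problem. The variable layout and measurement values below are invented for illustration; the paper handles the full nonlinear, sparse case.

```python
import numpy as np

# 1D toy: robot poses x0, x1, x2 and one landmark l.
# Linear measurements with unit noise, stacked as rows of A*theta = b:
#   x0 = 0            (prior)
#   x1 - x0 = 1.1     (odometry)
#   x2 - x1 = 0.9     (odometry)
#   l  - x0 = 2.05    (range from x0)
#   l  - x2 = 0.1     (range from x2)
# Unknowns: theta = [x0, x1, x2, l]
A = np.array([
    [ 1.0,  0.0,  0.0, 0.0],
    [-1.0,  1.0,  0.0, 0.0],
    [ 0.0, -1.0,  1.0, 0.0],
    [-1.0,  0.0,  0.0, 1.0],
    [ 0.0,  0.0, -1.0, 1.0],
])
b = np.array([0.0, 1.1, 0.9, 2.05, 0.1])

# "Square root" solution: QR-factorize the Jacobian and back-substitute,
# rather than forming and inverting the information matrix A^T A.
Q, Rfac = np.linalg.qr(A)
theta = np.linalg.solve(Rfac, Q.T @ b)
print(theta)  # full trajectory AND map estimated in one solve
```

The upper-triangular factor `Rfac` plays the role of the square-root information matrix; in the paper, its sparsity (helped by column ordering heuristics) is what keeps the batch solve cheap.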

9. Tardós JD, Neira J, Newman PM, Leonard JJ. Robust Mapping and Localization in Indoor Environments Using Sonar Data. Int J Rob Res 2016. [DOI: 10.1177/027836402320556340]
Abstract
In this paper we describe a new technique for the creation of feature-based stochastic maps using standard Polaroid sonar sensors. The fundamental contributions of our proposal are: (1) a perceptual grouping process that permits the robust identification and localization of environmental features, such as straight segments and corners, from the sparse and noisy sonar data; (2) a map joining technique that allows the system to build a sequence of independent limited-size stochastic maps and join them in a globally consistent way; (3) a robust mechanism to determine which features in a stochastic map correspond to the same environment feature, allowing the system to update the stochastic map accordingly, and perform tasks such as revisiting and loop closing. We demonstrate the practicality of this approach by building a geometric map of a medium size, real indoor environment, with several people moving around the robot. Maps built from laser data for the same experiment are provided for comparison.
Affiliation(s)
- Juan D. Tardós
- Dept. Informática e Ingeniería de Sistemas, Universidad de Zaragoza María de Luna 3 E-50018 Zaragoza, Spain,
- José Neira
- Dept. Informática e Ingeniería de Sistemas, Universidad de Zaragoza María de Luna 3 E-50018 Zaragoza, Spain,
- Paul M. Newman
- MIT Dept. of Ocean Engineering 77 Massachusetts Avenue Cambridge, MA 02139-4307 USA,
- John J. Leonard
- MIT Dept. of Ocean Engineering 77 Massachusetts Avenue Cambridge, MA 02139-4307 USA,

10.
Abstract
This article describes a system to incrementally build a world model with a mobile robot in an unknown environment. The model is, for the moment, segment based. A trinocular stereo system is used to build a local map of the environment. A global map is obtained by integrating a sequence of stereo frames taken when the robot navigates in the environment. The emphasis of this article is on the representation of the uncertainty of 3D segments from stereo and on the integration of segments from multiple views. The proposed representation is simple and very convenient for characterizing the uncertainty of a segment. A Kalman filter is used to merge matched line segments. An important characteristic of our integration strategy is that a segment observed by the stereo system corresponds only to one part of the segment in space, so the union of the different observations gives a better estimate of the segment in space. We have succeeded in integrating 35 stereo frames taken in our robot room.
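The Kalman-filter merge of matched measurements in entry 10 reduces, in the scalar case, to an inverse-variance weighted average of the two estimates. A sketch of that fusion step is below; the paper's actual state is a 3D segment representation, so this is only the one-dimensional analogue.

```python
def kalman_merge(x1, var1, x2, var2):
    """Fuse two independent estimates of the same quantity.
    The gain K weights the more certain estimate more heavily,
    and the fused variance is smaller than either input variance."""
    K = var1 / (var1 + var2)
    x = x1 + K * (x2 - x1)
    var = (1.0 - K) * var1
    return x, var

# Two noisy observations of the same segment endpoint coordinate:
x, var = kalman_merge(1.0, 0.04, 1.2, 0.01)
print(x, var)  # fused estimate lies closer to the more certain observation
```

Repeating the merge over a sequence of matched observations is what lets the union of partial segment views converge to a better estimate than any single view.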

11.
Abstract
This article presents an algorithm for autonomous map building and maintenance for a mobile robot. We believe that mobile robot navigation can be treated as a problem of tracking geometric features that occur naturally in the environment. We represent each feature in the map by a location estimate (the feature state vector) and two distinct measures of uncertainty: a covariance matrix to represent uncertainty in feature location, and a credibility measure to represent our belief in the validity of the feature. During each position update cycle, predicted measurements are generated for each geometric feature in the map and compared with actual sensor observations. Successful matches cause a feature's credibility to be increased. Unpredicted observations are used to initialize new geometric features, while unobserved predictions result in a geometric feature's credibility being decreased. We describe experimental results obtained with the algorithm that demonstrate successful map building using real sonar data.
Affiliation(s)
- John J. Leonard
- Department of Engineering Science University of Oxford Parks Road, Oxford OX1 3PJ England
- Hugh F. Durrant-Whyte
- Department of Engineering Science University of Oxford Parks Road, Oxford OX1 3PJ England

12.
Abstract
This paper proposes a simple method for estimating the position of a robot from relatively few sensor readings. Our algorithms are intended for applications where sensor readings are expensive or otherwise limited, and the readings that are taken are subject to considerable errors or noise. This method exhibits faster convergence with fewer measurements and greater accuracy than that exhibited by the discrete Kalman filter in this type of application. Our approach is validated with a mobile robot, on which a camera is used to obtain bearing information with respect to landmarks in the environment.
Affiliation(s)
- Daniel L. Boley
- Department of Computer Science and Engineering University of Minnesota 200 Union Street SE Minneapolis, MN 55455, USA
- Karen T. Sutherland
- Department of Computer Science University of Wisconsin, La Crosse 1725 State Street La Crosse, WI 54601, USA
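Entry 12's setting, a camera measuring bearings to known landmarks, admits a compact least-squares triangulation: each bearing constrains the robot to a line through the corresponding landmark, and the lines are intersected in the least-squares sense. The sketch below assumes absolute, noise-free bearings; the paper's method additionally handles noisy readings and sequential updates.

```python
import numpy as np

def localize_from_bearings(landmarks, bearings):
    """Least-squares robot position from absolute bearings to known
    landmarks: bearing theta_i places the robot on the line through
    landmark_i with direction (cos theta_i, sin theta_i)."""
    A, b = [], []
    for (lx, ly), th in zip(landmarks, bearings):
        # Line constraint: sin(th)*x - cos(th)*y = sin(th)*lx - cos(th)*ly
        A.append([np.sin(th), -np.cos(th)])
        b.append(np.sin(th) * lx - np.cos(th) * ly)
    return np.linalg.lstsq(np.array(A), np.array(b), rcond=None)[0]

landmarks = [(5.0, 2.0), (1.0, 6.0)]   # known map points
bearings = [0.0, np.pi / 2]            # robot sees them due east and due north
p = localize_from_bearings(landmarks, bearings)
print(p)
```

With two non-parallel bearing lines the solution is unique; extra landmarks simply add rows and average out measurement noise.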

13.
Abstract
In this paper we present a novel feature initialization technique for the Simultaneous Localization and Mapping (SLAM) algorithm. The initialization scheme extends previous approaches for identifying new confirmed features and is shown to improve the steady-state performance of the filter by incorporating tentative features into the filter as soon as they are observed. Constraints are then applied between multiple feature estimates when a feature is confirmed. Observations that are subsequently deemed as spurious are removed from the state vector after an appropriate timeout. It is shown that information that would otherwise be lost can therefore be used consistently in the filter. Results of this algorithm applied to data collected using a submersible vehicle are also shown.

14. Chong KS, Kleeman L. Mobile-Robot Map Building from an Advanced Sonar Array and Accurate Odometry. Int J Rob Res 2016. [DOI: 10.1177/027836499901800102]
Abstract
This paper describes a mobile robot equipped with a sonar sensor array in a guided, feature-based map-building task in an indoor environment. The landmarks common to indoor environments are planes, corners, and edges, and these are located and classified with the sonar sensor array. The map-building process makes use of accurate odometry information that is derived from a pair of knife-edged unloaded encoder wheels. Discrete sonar observations are incrementally merged into partial planes to produce a realistic representation of the environment that is amenable to sonar localization. Collinearity constraints among features are exploited to enhance both the map-feature estimation and robot localization. The map update employs an iterated extended Kalman filter in the first implementation, and subsequently a comparison is made with the Julier-Uhlmann-Durrant-Whyte Kalman filter, which improves the accuracy of covariance propagation when nonlinear equations are involved. The map accounts for correlation among features and robot positions. Partial planes are also used to eliminate phantom targets caused by specular reflection of the sonar. Unclassifiable sonar targets are integrated into the map for the purpose of obstacle avoidance. The paper presents simulated and experimental data.
Affiliation(s)
- Kok Seng Chong
- Institute of Microelectronics, 11 Science Park Road, Singapore Science Park II, 117685, Singapore
- Lindsay Kleeman
- Intelligent Robotics Research Centre, Department of Electrical and Computer Systems Engineering, Monash University, Clayton, Victoria 3168, Australia

15. Gao Z, Wang P, Zhai R, Tang Y. Frontally placed eyes versus laterally placed eyes: computational comparison of their functions for ego-motion estimation. J Opt Soc Am A Opt Image Sci Vis 2016; 33:501-507. [PMID: 27140756] [DOI: 10.1364/josaa.33.000501]
Abstract
Both frontally placed eyes and laterally placed eyes are common in nature, and although which one is better may be one of the most intuitive questions to ask, it could also be one of the hardest to answer. Their most obvious difference is that, at least as supposed in the computer vision community, stereopsis plays the central role in a visual system composed of frontally placed eyes (or cameras); however, it is not available in the lateral configuration due to the lack of overlap between the visual fields. As a result, researchers have adopted completely different approaches to model the two configurations and developed computational mimics of them to address various vision problems. Recently, the advent of the novel quasi-parallax concept has unified the ego-motion estimation procedures of these two eye configurations into the same framework, making systematic comparison feasible. In this paper, we intend to establish the computational superiority of eye topography from the perspective of ego-motion estimation. Specifically, quasi-parallax is applied to fuse motion cues from individual cameras at an early stage, at the pixel level, and to recover the translation and rotation separately with high accuracy and efficiency without the need for feature matching. Furthermore, its applicability to extended sideways arrangements is studied to make our comparison more general and insightful. Extensive experiments on both synthetic and real data have been carried out, and the computational superiority of the lateral configuration is verified.

16. Cupec R, Nyarko EK, Filko D, Kitanov A, Petrović I. Place recognition based on matching of planar surfaces and line segments. Int J Rob Res 2015. [DOI: 10.1177/0278364914548708]
Abstract
This paper considers the potential of using three-dimensional (3D) planar surfaces and line segments detected in depth images for place recognition. A place recognition method is presented that is based on matching sets of surface and line features extracted from depth images provided by a 3D camera to features of the same type contained in a previously created environment model. The considered environment model consists of a set of local models representing particular locations in the modeled environment. Each local model consists of planar surface segments and line segments representing the edges of objects in the environment. The presented method is designed for indoor and urban environments. A computationally efficient pose hypothesis generation approach is proposed that ranks the features according to their potential contribution to the pose information, thereby reducing the time needed for obtaining accurate pose estimation. Furthermore, a robust probabilistic method for selecting the best pose hypothesis is proposed that allows matching of partially overlapping point clouds with gross outliers. The proposed approach is experimentally tested on a benchmark dataset containing depth images acquired in the indoor environment with changes in lighting conditions and the presence of moving objects. A comparison of the proposed method to FAB-MAP and DLoopDetector is reported.
Affiliation(s)
- Robert Cupec
- Faculty of Electrical Engineering, J. J. Strossmayer University of Osijek, Osijek, Croatia
- Emmanuel Karlo Nyarko
- Faculty of Electrical Engineering, J. J. Strossmayer University of Osijek, Osijek, Croatia
- Damir Filko
- Faculty of Electrical Engineering, J. J. Strossmayer University of Osijek, Osijek, Croatia
- Andrej Kitanov
- Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia
- Ivan Petrović
- Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia

17. Autonomous Simultaneous Localization and Mapping driven by Monte Carlo uncertainty maps-based navigation. Knowl Eng Rev 2012. [DOI: 10.1017/s0269888912000276]
Abstract
This paper addresses the problem of implementing a Simultaneous Localization and Mapping (SLAM) algorithm combined with a non-reactive controller (such as trajectory following or path following). A general study showing the advantages of using predictors to avoid mapping inconsistencies in autonomous SLAM architectures is presented. In addition, this paper presents a priority-based uncertainty map construction method of the environment by a mobile robot when executing a SLAM algorithm. The SLAM algorithm is implemented with an extended Kalman filter (EKF) and extracts corners (convex and concave) and lines (associated with walls) from the surrounding environment. A navigation approach directs the robot motion to the regions of the environment with the highest uncertainty and the highest priority. The uncertainty of a region is specified by a probability characterization computed at the corresponding representative points. These points are obtained by a Monte Carlo experiment and their probability is estimated by the sum-of-Gaussians method, avoiding the time-consuming map-gridding procedure. The priority is determined by the frame in which the uncertainty region was detected (either local or global to the vehicle's pose). The mobile robot has a non-reactive trajectory following controller implemented on it to drive the vehicle to the uncertainty points. SLAM real-time experiments in real environments, navigation examples and uncertainty map constructions, along with algorithm strategies and architectures, are also included in this work.

18. Martinez AM, Vitria J. Clustering in image space for place recognition and visual annotations for human-robot interaction. IEEE Trans Syst Man Cybern B Cybern 2012; 31:669-82. [PMID: 18244832] [DOI: 10.1109/3477.956029]
Abstract
The most classical way of attempting to solve the vision-guided navigation problem for autonomous robots corresponds to the use of three-dimensional (3-D) geometrical descriptions of the scene; what is known as model-based approaches. However, these approaches do not facilitate the user's task because they require that geometrically precise models of the 3-D environment be given by the user. In this paper, we propose the use of "annotations" posted on some type of blackboard or "descriptive" map to facilitate this user-robot interaction. We show that, by using this technique, user commands can be as simple as "go to label 5." To build such a mechanism, new approaches for vision-guided mobile robot navigation have to be found. We show that this can be achieved by using mixture models within an appearance-based paradigm. Mixture models are more useful in practice than other pattern recognition methods such as principal component analysis (PCA) or Fisher discriminant analysis (FDA)-also known as linear discriminant analysis (LDA), because they can represent nonlinear subspaces. However, given the fact that mixture models are usually learned using the expectation-maximization (EM) algorithm which is a gradient ascent technique, the system cannot always converge to a desired final solution, due to the local maxima problem. To resolve this, a genetic version of the EM algorithm is used. We then show the capabilities of this latest approach on a navigation task that uses the above described "annotations."
Affiliation(s)
- A M Martinez
- Robot Vision Lab., Purdue Univ., West Lafayette, IN
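Entry 18 learns its appearance subspaces with a genetic variant of EM. The plain EM baseline it modifies, a gradient-ascent-like procedure that can stall in local maxima, can be sketched for a two-component 1D Gaussian mixture; the initialization and data here are illustrative, and the genetic extension is not reproduced.

```python
import math
import random

def em_gmm_1d(data, iters=200):
    """Fit a two-component 1D Gaussian mixture with plain EM.
    Returns (weights, means, variances)."""
    # Crude initialization from the data spread (a poor choice here
    # is exactly what can trap plain EM in a local maximum).
    mu = [min(data), max(data)]
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        resp = []
        for xp in data:
            p = [w[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(xp - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate parameters from the responsibilities.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * xp for r, xp in zip(resp, data)) / nk
            var[k] = sum(r[k] * (xp - mu[k]) ** 2 for r, xp in zip(resp, data)) / nk
            var[k] = max(var[k], 1e-6)  # guard against variance collapse
    return w, mu, var

random.seed(0)
data = ([random.gauss(0.0, 0.5) for _ in range(200)]
        + [random.gauss(5.0, 0.5) for _ in range(200)])
w, mu, var = em_gmm_1d(data)
print(mu)  # component means recover the two clusters
```

On well-separated clusters like these, plain EM converges reliably; the paper's genetic search matters precisely when the likelihood surface has competing local maxima.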

19.

20. Jin TS, Lee JM, Hashimoto H. Position estimation of a mobile robot using images of a moving target in intelligent space with distributed sensors. Adv Robot 2012. [DOI: 10.1163/156855306777361604]

21. Moon I, Miura J, Shirai Y. On-line extraction of stable visual landmarks for a mobile robot with stereo vision. Adv Robot 2012. [DOI: 10.1163/15685530260425701]

22. Lee YC, Lim JH, Cho DW, Chung WK. Sonar Map Construction for Autonomous Mobile Robots Using a Data Association Filter. Adv Robot 2012. [DOI: 10.1163/156855308x392735]
Affiliation(s)
- Yu-Cheol Lee
- U-Robot Research Division, ETRI, 138 Gajeongno, Yuseong-gu, Daejeon 305-700, South Korea
- Jong Hwan Lim
- Major of Mechatronics Engineering, Cheju National University, 1 Ara-dong, Jeju 690-756, South Korea
- Dong-Woo Cho
- Department of Mechanical Engineering, POSTECH, San 31 Hyoja-dong, Pohang 790-784, South Korea
- Wan Kyun Chung
- Department of Mechanical Engineering, POSTECH, San 31 Hyoja-dong, Pohang 790-784, South Korea

23.
Abstract
SUMMARY: This paper investigates 3-dimensional (3D) Simultaneous Localization and Mapping (SLAM) and the corresponding observability analysis by fusing data from landmark sensors and a strap-down Inertial Measurement Unit (IMU) in an adaptive Kalman filter (KF). In addition to the vehicle's states and landmark positions, the self-tuning filter estimates the IMU calibration parameters as well as the covariance of the measurement noise. The discrete-time covariance matrix of the process noise, the state transition matrix and the observation sensitivity matrix are derived in closed form, making it suitable for real-time implementation. Examination of the observability of the 3D SLAM system leads to the conclusion that the system remains observable, provided that at least three known landmarks, which are not placed in a straight line, are observed.
24
Cheein FAA, Lopez N, Soria CM, di Sciascio FA, Pereira FL, Carelli R. SLAM algorithm applied to robotics assistance for navigation in unknown environments. J Neuroeng Rehabil 2010; 7:10. [PMID: 20163735] [PMCID: PMC2842281] [DOI: 10.1186/1743-0003-7-10]
Abstract
BACKGROUND The combination of robotic tools with assistive technology opens a little-explored area of applications and advantages for disabled or elderly people in their daily tasks. Autonomous motorized wheelchair navigation inside an environment, behaviour-based control of orthopaedic arms and learning a user's preferences from a friendly interface are some examples of this new field. In this paper, a Simultaneous Localization and Mapping (SLAM) algorithm is implemented to allow environmental learning by a mobile robot while its navigation is governed by electromyographic signals. The entire system is part autonomous and part user-decision dependent (semi-autonomous). The environmental learning executed by the SLAM algorithm and the low-level behaviour-based reactions of the mobile robot are autonomous robotic tasks, whereas the mobile robot's navigation inside an environment is commanded by a Muscle-Computer Interface (MCI). METHODS In this paper, a sequential Extended Kalman Filter (EKF) feature-based SLAM algorithm is implemented. The features correspond to lines and corners -concave and convex- of the environment. From the SLAM architecture, a global metric map of the environment is derived. The electromyographic signals that command the robot's movements can be adapted to the patient's disabilities. For mobile robot navigation purposes, five commands were obtained from the MCI: turn left, turn right, stop, start and exit. A kinematic controller for the mobile robot was implemented, along with a low-level behaviour strategy to avoid collisions with the environment and moving agents. RESULTS The entire system was tested on a population of seven volunteers: three elderly subjects, two below-elbow amputees and two young normally limbed subjects. The experiments were performed within a closed, low-dynamic environment. Subjects took an average of 35 minutes to navigate the environment and learn how to use the MCI. The SLAM results showed a consistent reconstruction of the environment, and the obtained map was stored inside the Muscle-Computer Interface. CONCLUSIONS The integration of a highly demanding processing algorithm (SLAM) with an MCI, with real-time communication between the two, proved consistent and successful. The metric map generated by the mobile robot would allow possible future autonomous navigation without direct control by the user, whose function could be reduced to choosing robot destinations. Also, the mobile robot shares the kinematic model of a motorized wheelchair, an advantage that can be exploited for autonomous wheelchair navigation.
25
Curvature-based environment description for robot navigation using laser range sensors. Sensors 2009; 9:5894-918. [PMID: 22461732] [PMCID: PMC3315112] [DOI: 10.3390/s90805894]
Abstract
This work proposes a new feature detection and description approach for mobile robot navigation using 2D laser range sensors. The whole process consists of two main modules: a sensor data segmentation module and a feature detection and characterization module. The segmentation module is divided into two consecutive stages: first, the segmentation stage divides the laser scan into clusters of consecutive range readings using a distance-based criterion. Then, the second stage estimates the curvature function associated with each cluster and uses it to split the cluster into a set of straight-line and curve segments. The curvature is calculated using a triangle-area representation where, contrary to previous approaches, the triangle side lengths at each range reading are adapted to the local variations of the laser scan, removing noise without missing relevant points. This representation is invariant to translation and rotation and is also robust against noise, so it provides the same segmentation results even when the scene is perceived from different viewpoints. The segmentation results are then used to characterize the environment using line and curve segments, real and virtual corners, and edges. Real scan data collected from different environments using different platforms are used in the experiments to evaluate the proposed environment description algorithm.
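The triangle-area idea behind the curvature function can be sketched in a few lines. This sketch replaces the paper's adaptive side-length selection with a fixed neighbourhood `k` (an assumption), and uses twice the signed triangle area as a curvature proxy:

```python
def triangle_area_curvature(scan, k=1):
    """Signed triangle-area curvature for an ordered 2D scan: twice the
    signed area of the triangle (scan[i-k], scan[i], scan[i+k]). Zero on
    straight segments; the sign gives the turning direction. A fixed
    neighbourhood k stands in for the adaptive side lengths of the paper."""
    curv = [0.0] * len(scan)
    for i in range(k, len(scan) - k):
        (x1, y1), (x2, y2), (x3, y3) = scan[i - k], scan[i], scan[i + k]
        curv[i] = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)
    return curv

line = [(i, 2 * i) for i in range(5)]               # straight wall
corner = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]   # right-angle corner
print(triangle_area_curvature(line))     # zero everywhere on the line
print(triangle_area_curvature(corner))   # non-zero only at the corner point
```

Splitting a cluster into straight-line and curve segments then amounts to thresholding this function.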
26
Holmes SA, Klein G, Murray DW. An O(N²) square root unscented Kalman filter for visual simultaneous localization and mapping. IEEE Transactions on Pattern Analysis and Machine Intelligence 2009; 31:1251-1263. [PMID: 19443923] [DOI: 10.1109/tpami.2008.189]
Abstract
This paper develops a Square Root Unscented Kalman Filter (SRUKF) for performing video-rate visual simultaneous localization and mapping (SLAM) using a single camera. The conventional UKF has been proposed previously for SLAM, improving the handling of nonlinearities compared with the more widely used Extended Kalman Filter (EKF). However, no account was taken of the comparative complexity of the algorithms: in SLAM, the UKF scales as O(N³) in the state length, compared to the EKF's O(N²), making it unsuitable for video-rate applications with other than unrealistically few scene points. Here, it is shown that the SRUKF provides the same results as the UKF to within machine accuracy and that it can be reposed with complexity O(N²) for state estimation in visual SLAM. This paper presents results from video-rate experiments on live imagery. Trials using synthesized data show that the consistency of the SRUKF is routinely better than that of the EKF, but that its overall cost settles at an order of magnitude greater than the EKF for large scenes.
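The core square-root idea is that the filter propagates a factor S of the covariance (P = S·Sᵀ) and draws sigma points directly from it, never re-forming or re-factorizing P. A minimal sketch, assuming a hand-rolled 2×2 Cholesky and the standard unscented scaling with a `kappa` parameter (not the paper's full O(N²) reposing):

```python
import math

def cholesky2(P):
    """Lower-triangular Cholesky factor S of a 2x2 SPD matrix (P = S S^T)."""
    l00 = math.sqrt(P[0][0])
    l10 = P[1][0] / l00
    l11 = math.sqrt(P[1][1] - l10 * l10)
    return [[l00, 0.0], [l10, l11]]

def sigma_points(x, S, kappa=1.0):
    """Generate the 2n+1 unscented-transform sigma points directly from a
    square-root factor S of the covariance. Working with S instead of the
    full covariance is the core idea of square-root filters like the SRUKF."""
    n = len(x)
    scale = math.sqrt(n + kappa)
    pts = [list(x)]
    for j in range(n):
        col = [S[i][j] for i in range(n)]       # j-th column of S
        pts.append([x[i] + scale * col[i] for i in range(n)])
        pts.append([x[i] - scale * col[i] for i in range(n)])
    return pts

S = cholesky2([[4.0, 0.0], [0.0, 1.0]])
for p in sigma_points([0.0, 0.0], S):
    print(p)   # mean plus symmetric pairs along the columns of S
```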
Affiliation(s)
- Steven A Holmes, Department of Engineering Science, University of Oxford, Oxford OX1 3PJ, UK.
27
Cheein FA, Scaglia G, di Sciasio F, Carelli R. Feature Selection Criteria for Real Time EKF-SLAM Algorithm. Int J Adv Robot Syst 2009. [DOI: 10.5772/7237]
Abstract
This paper presents a selection procedure for environment features for the correction stage of a SLAM (Simultaneous Localization and Mapping) algorithm based on an Extended Kalman Filter (EKF). This approach decreases the computational time of the correction stage, which allows for real-time and constant-time implementations of the SLAM. The selection procedure consists in choosing the features to which the SLAM system state covariance is most sensitive. The entire system is implemented on a mobile robot equipped with a laser range sensor. The features extracted from the environment correspond to lines and corners. Experimental results of the real-time SLAM algorithm and an analysis of the processing time consumed by the SLAM with the proposed feature selection procedure are shown. A comparison between the proposed feature selection approach and the classical sequential EKF-SLAM, along with an entropy-based feature selection approach, is also performed.
Affiliation(s)
- Gustavo Scaglia, Instituto de Automatica, National University of San Juan, San Juan, Argentina
- Fernando di Sciasio, Instituto de Automatica, National University of San Juan, San Juan, Argentina
- Ricardo Carelli, Instituto de Automatica, National University of San Juan, San Juan, Argentina
28
Jin T, Hashimoto H. Position Uncertainty Reduction of Mobile Robot Based on DINDs in Intelligent Space. Journal of Advanced Computational Intelligence and Intelligent Informatics 2008. [DOI: 10.20965/jaciii.2008.p0488]
Abstract
This paper proposes a localization method for a mobile robot that uses images from distributed intelligent networked devices (DINDs) in intelligent space (ISpace). The scheme combines the position observed using dead-reckoning sensors with the position estimated from images of a moving object, such as a walking human, to determine the location of the mobile robot. The moving object is assumed to be a point object and is projected onto an image plane to form a geometrical constraint equation that provides position data of the object based on the kinematics of the intelligent space. Using the a priori known path of the moving object and a perspective camera model, the geometric constraint equations that relate the image-frame coordinates of the moving object to the estimated position of the robot are derived. The proposed approach is applied to a mobile robot in ISpace to show the reduction of uncertainty in determining the location of the mobile robot.
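The perspective-camera constraint works because the image coordinates of a point on the known path depend on the unknown camera (robot) position. A minimal pinhole sketch; the axis-aligned camera and the focal length `f` are simplifying assumptions of this sketch, not the paper's full model:

```python
def project(point_w, robot_pos, f=1.0):
    """Pinhole projection of a world point onto the image plane of a camera
    at robot_pos, with camera axes assumed aligned with the world frame.
    Different candidate robot positions yield different image coordinates
    for the same path point, which is what the constraint exploits."""
    x = point_w[0] - robot_pos[0]
    y = point_w[1] - robot_pos[1]
    z = point_w[2] - robot_pos[2]
    if z <= 0:
        raise ValueError("point must be in front of the camera")
    return (f * x / z, f * y / z)

# The same path point seen from two candidate robot positions:
print(project((1.0, 2.0, 4.0), (0.0, 0.0, 0.0), f=2.0))  # (0.5, 1.0)
print(project((1.0, 2.0, 4.0), (0.5, 0.0, 0.0), f=2.0))  # (0.25, 1.0)
```

Matching the predicted image coordinates against the observed ones constrains the robot position, which is how the image measurements reduce the dead-reckoning uncertainty.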
29
Video data validation by sonar measures for robot localization and environment feature estimation. Robotica 2008. [DOI: 10.1017/s026357470800502x]
30
Blanco JL, Fernandez-Madrigal JA, Gonzalez J. Toward a Unified Bayesian Approach to Hybrid Metric-Topological SLAM. IEEE Trans Robot 2008. [DOI: 10.1109/tro.2008.918049]
31
Chen S, Weng J. State-based SHOSLIF for indoor visual navigation. IEEE Transactions on Neural Networks 2008; 11:1300-14. [PMID: 18249855] [DOI: 10.1109/72.883430]
Abstract
In this paper, we investigate vision-based navigation using the self-organizing hierarchical optimal subspace learning and inference framework (SHOSLIF), which incorporates states and a visual attention mechanism. With states keeping the history information and the incoming video input regarded as an observation vector, vision-based navigation is formulated as an observation-driven Markov model (ODMM). The ODMM can be realized through recursive partitioning regression. A stochastic recursive partition tree (SRPT), which maps a preprocessed current raw input image and the previous state into the current state and the next control signal, is used for efficient recursive partitioning regression. The SRPT learns incrementally: each learning sample is learned or rejected "on-the-fly." The proposed scheme has been successfully applied to indoor navigation.
Affiliation(s)
- S Chen, KLA Tencor, San Jose, CA 95134, USA
32
SLAM in Indoor Environments using Omni-directional Vertical and Horizontal Line Features. J Intell Robot Syst 2007. [DOI: 10.1007/s10846-007-9179-0]
33

34
Tien JM, Berg D. A calculus for services innovation. Journal of Systems Science and Systems Engineering 2007; 16:129-165. [PMID: 32288407] [PMCID: PMC7104594] [DOI: 10.1007/s11518-007-5041-y]
Abstract
Innovation in the services area - especially in the electronic services (e-services) domain - can be systematically developed by first considering the strategic drivers and foci, then the tactical principles and enablers, and finally the operational decision attributes, all of which constitute a process or calculus of services innovation. More specifically, there are four customer drivers (i.e., collaboration, customization, integration and adaptation), three business foci (i.e., creation-focused, solution-focused and competition-focused), six business principles (i.e., reconstruct market boundaries, focus on the big picture not numbers, reach beyond existing demand, get strategic sequence right, overcome organizational hurdles and build execution into strategy), eight technical enablers (i.e., software algorithms, automation, telecommunication, collaboration, standardization, customization, organization, and globalization), and six attributes of decision informatics (i.e., decision-driven, information-based, real-time, continuously-adaptive, customer-centric and computationally-intensive). It should be noted that the four customer drivers are all directed at empowering the individual - that is, at recognizing that the individual can, respectively, contribute in a collaborative situation, receive customized or personalized attention, access an integrated system or process, and obtain adaptive real-time or just-in-time input. The developed process or calculus serves to identify the potential white spaces or blue oceans for innovation. 
In addition to expanding on current innovations in services and related experiences, white spaces are identified for possible future innovations; they include those that can mitigate the unforeseen consequences or abuses of earlier innovations, safeguard our rights to privacy, protect us from the always-on, interconnected world, provide us with an authoritative search engine, and generate a GDP metric that can adequately measure the growing knowledge economy, one driven by intangible ideas and services innovation.
Affiliation(s)
- James M. Tien, Department of Decision Sciences and Engineering Systems, Rensselaer Polytechnic Institute, Troy, New York, USA
- Daniel Berg, Department of Decision Sciences and Engineering Systems, Rensselaer Polytechnic Institute, Troy, New York, USA
35
Services Innovation: Decision Attributes, Innovation Enablers, and Innovation Drivers. Integrated Series in Information Systems 2007. [PMCID: PMC7122330] [DOI: 10.1007/978-0-387-46364-3_2]
Abstract
Innovation in the services area — especially in the electronic services (e-services) domain — can be characterized by six decision-oriented attributes: decision-driven, information-based, real-time, continuously-adaptive, customer-centric and computationally-intensive. These attributes constitute the decision informatics paradigm. In turn, decision informatics is supported by information and decision technologies and based on the disciplines of data fusion/analysis, decision modeling and systems engineering. Out of the nine major innovation enablers in the services area (i.e., decision informatics, software algorithms, automation, telecommunication, collaboration, standardization, customization, organization, and globalization), decision informatics is shown to be a necessary enabler. Furthermore, four innovation drivers (i.e., collaboration, customization, integration and adaptation) are identified; all four are directed at empowering the individual — that is, at recognizing that the individual can, respectively, contribute in a collaborative situation, receive customized or personalized attention, access an integrated system or process, and obtain adaptive real-time or just-in-time input. In addition to expanding on current innovations in services and experiences, white spaces are identified for possible future innovations; they include those that can mitigate the unforeseen consequences or abuses of earlier innovations, safeguard our rights to privacy, protect us from the always-on, interconnected world, provide us with an authoritative search engine, and generate a GDP metric that can adequately measure the growing knowledge economy, one driven by intangible ideas and services innovation.
36

37

38

39

40
Tien J. Toward a decision informatics paradigm: a real-time, information-based approach to decision making. IEEE Transactions on Systems, Man, and Cybernetics, Part C 2003. [DOI: 10.1109/tsmcc.2003.809345]
41
Dissanayake M, Newman P, Clark S, Durrant-Whyte H, Csorba M. A solution to the simultaneous localization and map building (SLAM) problem. IEEE Transactions on Robotics and Automation 2001. [DOI: 10.1109/70.938381]
42

43
Neira J, Tardos J, Horn J, Schmidt G. Fusing range and intensity images for mobile robot localization. IEEE Transactions on Robotics and Automation 1999. [DOI: 10.1109/70.744604]
44
Abstract
This paper presents an unconventional approach to vision-guided autonomous navigation. The system recalls information about scenes and navigational experience using content-based retrieval from a visual database. To achieve high applicability to various road types, we do not impose a priori scene features, such as road edges, that the system must use; rather, the system automatically derives features from images during supervised learning. To accomplish this, the system uses principal component analysis and linear discriminant analysis to automatically derive the most expressive features (MEF) for scene reconstruction or the most discriminating features (MDF) for scene classification. These features best describe or classify the population of the scenes and approximate complex decision regions using piecewise linear boundaries up to a desired accuracy. A new self-organizing scheme called the recursive partition tree (RPT) is used for automatic construction of a vision-and-control database, which quickly prunes the data set in the content-based search and results in a low time complexity of O(log(n)) for retrieval from a database of size n. The system combines principal component and linear discriminant analysis networks with a decision tree network. It has been tested on a mobile robot, Rome, in an unknown indoor environment to learn scenes and the associated navigation experience. In the performing phase, the mobile robot navigates autonomously in similar environments, while tolerating scene perturbations such as passersby.
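The O(log(n)) retrieval claim follows from the recursive partition idea: each node halves the remaining data, so a query descends roughly log₂(n) nodes. A toy sketch assuming a 1-D scalar feature and median splits (the real system partitions in a learned MEF/MDF subspace):

```python
class RPTNode:
    """Toy recursive partition tree: each internal node splits the training
    items at the median of a scalar feature, so retrieval visits O(log n)
    nodes. Items are (feature, label) pairs; the 1-D feature is a stand-in
    for a projection onto a learned subspace."""

    def __init__(self, items):
        items = sorted(items, key=lambda kv: kv[0])
        if len(items) == 1:
            self.leaf = items[0]
        else:
            mid = len(items) // 2
            self.leaf = None
            self.split = items[mid][0]          # median feature value
            self.left = RPTNode(items[:mid])
            self.right = RPTNode(items[mid:])

    def query(self, feature):
        """Descend to the partition containing feature; return its label."""
        node = self
        while node.leaf is None:
            node = node.left if feature < node.split else node.right
        return node.leaf[1]

# (feature, control-signal) pairs learned from example scenes (hypothetical):
tree = RPTNode([(0.1, "left"), (0.5, "straight"), (0.9, "right")])
print(tree.query(0.15))  # left
print(tree.query(0.95))  # right
```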
Affiliation(s)
- Juyang Weng, Department of Computer Science, Michigan State University, East Lansing, USA
45
Taylor C, Kriegman D. Vision-based motion planning and exploration algorithms for mobile robots. IEEE Transactions on Robotics and Automation 1998. [DOI: 10.1109/70.678451]
46
Siadat A, Kaske A, Klausmann S, Dufaut M, Husson R. An Optimized Segmentation Method for a 2D Laser-Scanner Applied to Mobile Robot Navigation. IFAC Proceedings Volumes 1997. [DOI: 10.1016/s1474-6670(17)43255-1]
47

48
Tistarelli M, Sandini G. Obstacle Detection from a Moving Vehicle. Advances in Human-Computer Interaction 1995. [DOI: 10.1007/978-3-642-85220-6_10]
49

50
Braunegg D. MARVEL: a system that recognizes world locations with stereo vision. IEEE Transactions on Robotics and Automation 1993. [DOI: 10.1109/70.240199]