1
Ukida H. Indoor Unmanned Aerial Vehicle Navigation System Using LED Panels and QR Codes. Journal of Robotics and Mechatronics 2021. DOI: 10.20965/jrm.2021.p0242. Citations in RCA: 0. Indexed: 11/09/2022.
Abstract
In this study, we propose an unmanned aerial vehicle (UAV) navigation system that uses LED panels and QR codes as markers in an indoor environment. Because an LED panel can display various patterns, we use it as a command presentation device for the UAV, while a QR code, which can embed various pieces of information, serves as a landmark for estimating the UAV's position along the flight path. In this paper, we present a navigation method that guides the UAV from a departure position to a destination position with an obstacle between them, and we investigate the effectiveness of the proposed method using an actual UAV.
2
Tsubouchi T. Introduction to Simultaneous Localization and Mapping. Journal of Robotics and Mechatronics 2019. DOI: 10.20965/jrm.2019.p0367. Citations in RCA: 7.
Abstract
Simultaneous localization and mapping (SLAM) forms the core of the technology that supports mobile robots. With SLAM, as a robot moves through a real environment, sensor data are imported to an on-board computer, which estimates the robot's physical location while building a map of its surroundings. SLAM is a major topic in mobile robot research. Although the information is derived from a real physical space and supported by a mathematical description, it is handled within a probabilistic formulation. The concept therefore contributes not only to research and development on mobile robots, but also to training in the mathematics and computer implementation of position estimation and map building. This article surveys SLAM technology, including a brief overview of its history, insights from the author, and, finally, a specific example in which the author was involved.
3
Shibata A, Okumura Y, Fujii H, Yamashita A, Asama H. Refraction-Based Bundle Adjustment for Scale Reconstructible Structure from Motion. Journal of Robotics and Mechatronics 2018. DOI: 10.20965/jrm.2018.p0660. Citations in RCA: 3.
Abstract
Structure from motion is a three-dimensional (3D) reconstruction method that uses a single camera. However, the conventional structure from motion method cannot recover the absolute scale of objects. In our previous studies, we proposed a scale-reconstructible structure from motion method that solves this problem by exploiting refraction. In our measurement system, a refractive plate is fixed in front of the camera and images are captured through this plate. By theoretically accounting for the effect of refraction, we derived an extended essential equation that overcomes the geometrical constraints; applying this formula to 3D measurement yields the absolute scale of an object. However, that method was verified only in simulations under ideal conditions that ignored real phenomena such as noise and occlusion, which inevitably arise in actual measurements. In this study, to apply the method robustly to actual measurements with real images, we introduce a novel bundle adjustment method based on the refraction effect. This optimization technique reduces the 3D reconstruction errors caused by measurement noise in actual scenes. In particular, we propose a new error function that accounts for refraction; minimizing it yields accurate 3D reconstruction results. To evaluate the effectiveness of the proposed method, we conducted experiments with both simulations and real images. The simulation results show that the proposed method is theoretically accurate, and the experiments with real images show that it is effective for real 3D measurements.
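The geometric ingredient underlying a refraction-aware error function is Snell's law applied to the ray as it passes through the plate. The paper's extended essential equation is not reproduced here; the following is only a minimal sketch of the vector form of Snell's law, with all names (`refract`, the argument conventions) chosen for illustration:

```python
import math

def refract(d, n, n1, n2):
    """Refract unit direction d at a surface with unit normal n
    (pointing against the incident ray), indices n1 -> n2,
    using the vector form of Snell's law."""
    r = n1 / n2
    cos_i = -(d[0] * n[0] + d[1] * n[1] + d[2] * n[2])  # cos of incidence angle
    sin2_t = r * r * (1.0 - cos_i * cos_i)              # sin^2 of refraction angle
    if sin2_t > 1.0:
        return None                                     # total internal reflection
    cos_t = math.sqrt(1.0 - sin2_t)
    return tuple(r * d[i] + (r * cos_i - cos_t) * n[i] for i in range(3))
```

A ray hitting the plate at normal incidence passes straight through, while an oblique ray entering a denser medium bends toward the normal; chaining two such refractions (air-plate, plate-air) models the measurement system described above.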
4
Turgeman A, Shoval S, Degani A. Sensor Data Fusion of a Redundant Dual-Platform Robot for Elevation Mapping. Journal of Robotics and Mechatronics 2018. DOI: 10.20965/jrm.2018.p0106. Citations in RCA: 0.
Abstract
This paper presents a novel methodology for localization and terrain mapping along a defined course, such as narrow tunnels and pipes, using a redundant unmanned ground vehicle kinematic design. The vehicle is designed to work in unknown environments without the use of external sensors. The design consists of two platforms connected by a passive, semi-rigid three-bar mechanism. Each platform includes its own set of local sensors and a controller; in addition, a central controller logs the data and synchronizes the platforms' motion. Based on the dynamic patterns of the redundant information, a fusion algorithm built around a centralized Kalman filter receives data from the different sets of inputs (mapping techniques) and produces an elevation map along the traversed route in the x-z sagittal plane. The method is tested in various scenarios using simulated and real-world setups, and the experimental results show a high degree of accuracy on different terrains. The proposed system is suitable for mapping terrain in confined spaces such as underground tunnels and wrecks, where standard mapping devices such as GPS, laser scanners, and cameras are not applicable.
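The core of Kalman-filter fusion of redundant measurements is inverse-variance weighting: two independent estimates of the same elevation are combined so that the more certain one dominates and the fused uncertainty shrinks. This scalar sketch (function name and interface are illustrative, not the paper's) shows one such update step:

```python
def fuse(z1, var1, z2, var2):
    """Fuse two independent estimates of one elevation value
    (a single scalar Kalman update via inverse-variance weighting)."""
    k = var1 / (var1 + var2)   # gain: how much to trust the second estimate
    z = z1 + k * (z2 - z1)     # fused estimate, pulled toward the lower-variance input
    var = (1.0 - k) * var1     # fused variance, smaller than either input
    return z, var
```

With equal variances the result is the plain average; with unequal variances it leans toward the more reliable platform's reading, which is the behavior a centralized filter exploits when the two platforms report redundant elevation data.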
5
Takeishi N, Yairi T. Visual Monocular Localization, Mapping, and Motion Estimation of a Rotating Small Celestial Body. Journal of Robotics and Mechatronics 2017. DOI: 10.20965/jrm.2017.p0856. Citations in RCA: 6.
Abstract
In the exploration of a small celestial body, it is important to estimate the position and attitude of the spacecraft, as well as the geometric properties of the target body. In this paper, we propose a method to estimate these quantities concurrently and in a highly automatic manner, given measurements from an attitude sensor, inertial sensors, and a monocular camera. The proposed method is based on an incremental optimization technique that works with sensor-fusion models, together with a tailored initialization scheme developed to compensate for the absence of range sensors. Moreover, we discuss the challenges in developing a fully automatic navigation framework. (This paper is an extended version of a preliminary conference report [1].)
6
Takubo T, Takaishi H, Ueno A. Automating the Appending of Image Information to Grid Map Corresponding to Object Shape. Journal of Robotics and Mechatronics 2017. DOI: 10.20965/jrm.2017.p0713. Citations in RCA: 0.
Abstract
A technique is proposed for automating the Image-Information-Added Map, a mapping method for photographing an object at a required resolution. In our previous study, a picture shooting vector, which indicates the angle from which a picture can be taken at sufficient resolution, was defined according to the shape of the object surface, and an operator controlled a robot remotely to acquire pictures while checking this vector. For an automated inspection system, however, image acquisition itself should be automated. Assuming a 2-D grid map is available, the shooting vectors are first set on the object surfaces in the map and the picture shooting areas are defined. To reduce the number of points the mobile robot must visit to take pictures, overlapping picture shooting areas should be selected. Because selecting the picture-taking points is a set covering problem, the ant colony optimization method is used to solve it, and Edge Exchange Crossover (EXX) is used to select picture-taking points that are connected, for efficient checking. The proposed method is implemented on a robot and evaluated by the resolution of the images collected in an experimental environment.
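To make the set covering formulation concrete, here is a minimal greedy set-cover sketch: each candidate shooting point covers a set of surface cells, and points are picked until all cells are photographed. This greedy heuristic is a deliberately simpler stand-in for the ant colony optimization the paper actually uses, and all names and data are hypothetical:

```python
def select_shooting_points(coverage):
    """Greedy set cover: repeatedly pick the candidate point whose
    shooting area covers the most still-uncovered surface cells.
    `coverage` maps each candidate point to the set of cells it photographs.
    (Stand-in for the ant colony optimization used in the paper.)"""
    uncovered = set().union(*coverage.values())
    chosen = []
    while uncovered:
        best = max(coverage, key=lambda p: len(coverage[p] & uncovered))
        if not coverage[best] & uncovered:
            break              # remaining cells cannot be photographed
        chosen.append(best)
        uncovered -= coverage[best]
    return chosen
```

Ant colony optimization explores many such selections stochastically and can also account for the travel cost between connected points, which the greedy rule ignores.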
7
Chai Z, Matsumaru T. ORB-SHOT SLAM: Trajectory Correction by 3D Loop Closing Based on Bag-of-Visual-Words (BoVW) Model for RGB-D Visual SLAM. Journal of Robotics and Mechatronics 2017. DOI: 10.20965/jrm.2017.p0365. Citations in RCA: 7.
Abstract
[Figure: Visual odometry + trajectory correction] This paper proposes ORB-SHOT SLAM, or OS-SLAM, a novel method of 3D loop closing for trajectory correction in RGB-D visual SLAM. We obtain point clouds from RGB-D sensors such as the Kinect or Xtion and use 3D SHOT descriptors to describe ORB corners. We then train an offline 3D vocabulary containing more than 600,000 words from two million 3D descriptors, computed over a large number of images from the public dataset provided by TUM. New images are converted to bag-of-visual-words (BoVW) vectors and pushed into an incremental database, which is queried to detect the corresponding 3D loop candidates; a similarity score is computed between the new image and each candidate. After detecting 2D loop closures with the ORB-SLAM2 system, we accept those loop closures that also appear among the 3D loop candidates and assign them weights according to the previously stored scores. In the final graph-based optimization, we create edges with different weights for the loop closures and correct the trajectory by solving a nonlinear least-squares optimization problem. Using the TUM public RGB-D dataset, we compare our results with several state-of-the-art systems, including ORB-SLAM2 and RGB-D SLAM, and find that accurate loop closures and suitable weights reduce the trajectory estimation error more effectively than the other systems. The performance of ORB-SHOT SLAM is demonstrated in a 3D reconstruction application.
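A common way to score BoVW similarity between a query image and a loop candidate is the cosine similarity of their (sparse) visual-word histograms. The abstract does not specify the exact scoring function, so this is only an illustrative sketch with hypothetical names:

```python
import math

def bovw_similarity(hist_a, hist_b):
    """Cosine similarity between two bag-of-visual-words histograms,
    stored as sparse {word_id: count} dicts. 1.0 means identical
    word distributions; 0.0 means no shared visual words."""
    dot = sum(c * hist_b.get(w, 0) for w, c in hist_a.items())
    na = math.sqrt(sum(c * c for c in hist_a.values()))
    nb = math.sqrt(sum(c * c for c in hist_b.values()))
    return dot / (na * nb) if na and nb else 0.0
```

In a pipeline like the one described, such scores would be stored per loop candidate and later reused as edge weights in the graph optimization, so that more similar loop closures constrain the trajectory more strongly.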
8
Hagiwara H, Touma Y, Asami K, Komori M. FPGA-Based Stereo Vision System Using Gradient Feature Correspondence. Journal of Robotics and Mechatronics 2015. DOI: 10.20965/jrm.2015.p0681. Citations in RCA: 4.
Abstract
[Figure: Mobile robot with a stereo vision system] This paper describes a stereo vision system for an autonomous mobile robot that uses gradient feature correspondence, with local image features computed on a field programmable gate array (FPGA). Several interest point detectors and descriptors have been studied for mobile robot navigation, including the Harris operator and the scale-invariant feature transform (SIFT); most of them, however, require heavy computation that can overburden small on-board computers. Our purpose here is to present an interest point detector and a descriptor suitable for FPGA implementation. Results show that a detector based on gradient variance inspection runs faster than SIFT or speeded-up robust features (SURF) and is more robust against illumination changes than any other method compared in this study. A descriptor with a hierarchical gradient structure has a simpler algorithm than the SIFT and SURF descriptors, and its stereo matching achieves better performance than SIFT or SURF.
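The idea behind gradient variance inspection is that a flat or uniformly shaded patch has near-constant gradients, while a corner-like patch mixes strong and weak gradients, giving high variance. The paper's FPGA formulation is not reproduced here; this is a minimal pure-Python sketch of the statistic on a grayscale image stored as a list of rows, with all names hypothetical:

```python
def gradient_variance(img, cx, cy, r=1):
    """Variance of |gx| + |gy| gradient magnitudes in a (2r+1) x (2r+1)
    window centered at (cx, cy). High variance suggests an interest
    point; near-zero variance indicates a flat region."""
    grads = []
    for y in range(cy - r, cy + r + 1):
        for x in range(cx - r, cx + r + 1):
            gx = img[y][x + 1] - img[y][x - 1]   # horizontal central difference
            gy = img[y + 1][x] - img[y - 1][x]   # vertical central difference
            grads.append(abs(gx) + abs(gy))
    m = sum(grads) / len(grads)
    return sum((g - m) ** 2 for g in grads) / len(grads)
```

Thresholding this value over the image yields candidate interest points; because the statistic only uses local differences, it maps naturally onto a streaming FPGA pipeline.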
9
Daud MR, Nonami K. Autonomous Walking over Obstacles by Means of LRF for Hexapod Robot COMET-IV. Journal of Robotics and Mechatronics 2012. DOI: 10.20965/jrm.2012.p0055. Citations in RCA: 4.
Abstract
This paper presents an autonomous navigation system for a hydraulically driven hexapod robot (COMET-IV) based on point cloud data acquired by a rotating laser range finder (LRF). The size of the robot would prohibit its movement in a stochastic terrain environment if we only considered letting it avoid obstacles; however, the robot has the unique ability to walk over obstacles. We therefore propose the Grid-based Walking Trajectory for Legged Robot (GWTLR) method, developed on the basis of a geometric representation of the stochastic terrain in terms of grid cell characteristics. We also introduce a "grid-cell model for COMET-IV" to assess the characteristics of the grid cells and to determine whether each cell is traversable. Finally, the shortest safe walking trajectory is generated using the A* search algorithm. The performance of the proposed method is verified experimentally by the successful determination of walking trajectory paths and by the robot walking completely over obstacles in various arrangements.
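Once each grid cell is labeled traversable or not, trajectory generation reduces to A* over the grid. This is a minimal 4-connected A* sketch with a Manhattan-distance heuristic; the grid, cost model, and names are illustrative rather than the paper's (COMET-IV additionally treats some "obstacle" cells as walkable-over, which a cost map rather than a boolean grid would capture):

```python
import heapq

def astar(grid, start, goal):
    """A* over a 2D grid; grid[y][x] is truthy when the cell is traversable.
    Returns the shortest 4-connected path as a list of (x, y) cells, or None."""
    def h(p):                              # Manhattan-distance heuristic
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while open_set:
        f, g, cur, path = heapq.heappop(open_set)
        if cur == goal:
            return path
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dx, cur[1] + dy)
            x, y = nxt
            if 0 <= y < len(grid) and 0 <= x < len(grid[0]) and grid[y][x]:
                ng = g + 1                 # uniform step cost
                if ng < best_g.get(nxt, float('inf')):
                    best_g[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None
```

Replacing the uniform step cost with a per-cell traversal cost derived from the grid-cell model would let the planner prefer flat cells while still allowing the robot to step over low obstacles.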