1. Hsieh TL, Jhan ZS, Yeh NJ, Chen CY, Chuang CT. An Unmanned Aerial Vehicle Indoor Low-Computation Navigation Method Based on Vision and Deep Learning. Sensors 2023; 24:190. PMID: 38203052; PMCID: PMC10781313; DOI: 10.3390/s24010190.
Abstract
Recently, unmanned aerial vehicles (UAVs) have found extensive indoor applications. In numerous indoor UAV scenarios, navigation paths remain consistent. While many indoor positioning methods offer excellent precision, they often demand significant costs and computational resources. Furthermore, such high functionality can be superfluous for these applications. To address this issue, we present a cost-effective, computationally efficient solution for path following and obstacle avoidance. The UAV employs a down-looking camera for path following and a front-looking camera for obstacle avoidance. This paper refines the carrot-chasing algorithm for line tracking and introduces our novel line-fitting path-following algorithm (LFPF). Both algorithms competently manage indoor path-following tasks within a constrained field of view. However, the LFPF is superior at adapting to light variations and maintaining a consistent flight speed, keeping its error margin within ±40 cm in real flight scenarios. For obstacle avoidance, we utilize depth images and YOLOv4-tiny to detect obstacles, subsequently implementing suitable avoidance strategies based on the type and proximity of these obstacles. Real-world tests indicated minimal computational demands, enabling the Nvidia Jetson Nano, an entry-level computing platform, to operate at 23 FPS.
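For readers unfamiliar with the carrot-chasing family of path followers mentioned in this abstract, the sketch below illustrates the generic idea for a straight reference segment. It is only an illustration under simplifying assumptions (2D, fixed lookahead distance), not the authors' refined algorithm or their LFPF.

```python
import numpy as np

def carrot_chasing_heading(p, wp_a, wp_b, lookahead=1.0):
    """Generic carrot-chasing step: project the vehicle position onto the
    line wp_a -> wp_b, place a 'carrot' a fixed lookahead ahead of that
    projection, and return the desired heading plus the cross-track error."""
    d = wp_b - wp_a
    d_hat = d / np.linalg.norm(d)            # unit vector along the path
    along = np.dot(p - wp_a, d_hat)          # progress along the path
    proj = wp_a + along * d_hat              # closest point on the path
    carrot = proj + lookahead * d_hat        # virtual target ahead of the UAV
    cross_track = np.linalg.norm(p - proj)   # lateral (cross-track) error
    psi_d = np.arctan2(carrot[1] - p[1], carrot[0] - p[0])
    return psi_d, cross_track

# Example: UAV at (1.0, 0.5) following the segment from (0, 0) to (10, 0)
psi, e = carrot_chasing_heading(np.array([1.0, 0.5]),
                                np.array([0.0, 0.0]),
                                np.array([10.0, 0.0]))
```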
Affiliation(s)
- Cheng-Ta Chuang: Department of Intelligent Automation Engineering, National Taipei University of Technology, Taipei 10608, Taiwan
2. Lyu Y, Nguyen T, Liu L, Cao M, Yuan S, Nguyen TH, Xie L. SPINS: A structure priors aided inertial navigation system. J Field Robot 2023. DOI: 10.1002/rob.22161.
Affiliation(s)
- Yang Lyu: School of Automation, Northwestern Polytechnical University, Xi'an, China; School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore
- Thien-Minh Nguyen: School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore
- Liu Liu: College of Engineering and Computer Science, Australian National University, Canberra, Australian Capital Territory, Australia
- Muqing Cao, Shenghai Yuan, Thien Hoang Nguyen, Lihua Xie: School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore
3. Exploiting Graph and Geodesic Distance Constraint for Deep Learning-Based Visual Odometry. Remote Sensing 2022. DOI: 10.3390/rs14081854.
Abstract
Visual odometry is the task of estimating the trajectory of a moving agent from consecutive images. It is a hot research topic in both the robotics and computer vision communities and facilitates many applications, such as autonomous driving and virtual reality. Conventional odometry methods predict the trajectory by utilizing the multiple-view geometry between consecutive overlapping images. However, these methods need to be carefully designed and fine-tuned to work well in different environments. Deep learning has been explored to alleviate this challenge by directly predicting the relative pose from paired images. Deep learning-based methods usually focus only on consecutive images, which allows error to propagate over time. In this paper, a graph loss and a geodesic rotation loss are proposed to enhance deep learning-based visual odometry methods based on graph constraints and geodesic distance, respectively. The graph loss considers not only the relative pose loss of consecutive images but also the relative pose of non-consecutive images. The relative pose of non-consecutive images is not directly predicted but computed from the relative poses of consecutive ones. The geodesic rotation loss is constructed from the geodesic distance, and the model regresses an so(3) Lie algebra element (a 3D vector). This allows robust and stable convergence. To increase efficiency, a random strategy is adopted to select the edges of the graph instead of using all of the edges. This strategy provides additional regularization for training the networks. Extensive experiments are conducted on visual odometry benchmarks, and the obtained results demonstrate that the proposed method has performance comparable to other supervised learning-based methods, as well as to monocular camera-based methods. The source code and the trained weights are made publicly available.
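To make the geodesic rotation loss concrete: on SO(3) the geodesic distance between two rotations is the angle of the relative rotation, and a predicted so(3) vector can be mapped back to a rotation with the exponential map. The numpy sketch below illustrates these two ingredients; it is a generic illustration, not the paper's training code.

```python
import numpy as np

def so3_exp(w):
    """Rodrigues' formula: map an so(3) vector (3D) to a rotation matrix."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    k = w / theta
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def geodesic_distance(R1, R2):
    """Geodesic (angular) distance on SO(3) between two rotation matrices."""
    cos_theta = (np.trace(R1.T @ R2) - 1.0) / 2.0
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

# A predicted so(3) vector is decoded with so3_exp and penalized by its
# geodesic distance to the ground-truth rotation.
R_pred = so3_exp(np.array([0.010, 0.020, -0.015]))
R_gt = so3_exp(np.array([0.012, 0.018, -0.010]))
loss = geodesic_distance(R_pred, R_gt)
```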
4.
5. Design and Analysis of a 35 GHz Rectenna System for Wireless Power Transfer to an Unmanned Air Vehicle. Energies 2022. DOI: 10.3390/en15010320.
Abstract
In this article, the concept of a 22-kW microwave-powered unmanned aerial vehicle is presented. Its system architecture is analyzed and modeled for wirelessly transferring microwave power to the flying UAV. A microwave system transmitting power at 35 GHz was found to be suitable for low-cost and compact architectures. The sizes of the transmitting and receiving systems are optimized to 108 m² and 90 m², respectively. A linearly polarized 4 × 2 rectangular microstrip patch antenna array has been designed and simulated to obtain high gain, high directivity, and high efficiency in order to satisfy the power transfer requirements. The numerically simulated gain, directivity, and efficiency of the proposed patch antenna array are 13.4 dBi, 14 dBi, and 85%, respectively. Finally, a rectifying system (rectenna) is optimized using the Agilent Advanced Design System (ADS) software as the microwave power receiving system. The proposed rectenna at the core of the system has an efficiency profile of more than 80% for an RF input power range of 9 to 18 dBm. Moreover, the RF-to-DC conversion efficiency and DC output voltage of the proposed rectenna are 80% and 3.5 V, respectively, for a 10 dBm input power at 35 GHz with a load of 1500 Ω.
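The quoted operating-point figures are mutually consistent, which can be checked with a one-line power budget (assuming the stated 1500 Ω load and the 10 dBm input point):

```latex
P_{\mathrm{RF}} = 10\ \mathrm{dBm} = 10\ \mathrm{mW},\qquad
P_{\mathrm{DC}} = \frac{V_{\mathrm{DC}}^{2}}{R_{L}} = \frac{(3.5\ \mathrm{V})^{2}}{1500\ \Omega} \approx 8.2\ \mathrm{mW},\qquad
\eta_{\mathrm{RF\text{-}DC}} = \frac{P_{\mathrm{DC}}}{P_{\mathrm{RF}}} \approx 0.82,
```

in agreement with the reported RF-to-DC conversion efficiency of about 80%.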
6. Gao L, Battistelli G, Chisci L. PHD-SLAM 2.0: Efficient SLAM in the Presence of Missdetections and Clutter. IEEE Trans Robot 2021. DOI: 10.1109/tro.2021.3052078.
7.
8. Young SN, Lanciloti RJ, Peschel JM. The Effects of Interface Views on Performing Aerial Telemanipulation Tasks Using Small UAVs. Int J Soc Robot 2021. DOI: 10.1007/s12369-021-00783-9.
9. Liu C, Zhao J, Sun N, Yang Q, Wang L. IT-SVO: Improved Semi-Direct Monocular Visual Odometry Combined with JS Divergence in Restricted Mobile Devices. Sensors 2021; 21:2025. PMID: 33809347; PMCID: PMC7998773; DOI: 10.3390/s21062025.
Abstract
Simultaneous localization and mapping (SLAM) has a wide range of applications in mobile robotics. Lightweight and inexpensive vision sensors have been widely used for localization in GPS-denied or weak-GPS environments. Mobile robots not only estimate their pose but also correct their position according to the environment, so a proper mathematical model is required to obtain the state of the robot in its surroundings. Usually, filter-based SLAM/VO regards the model as a Gaussian distribution in the mapping thread, which deals with the complicated relationship between mean and covariance. The covariance in SLAM or VO represents the uncertainty of map points. Therefore, tools such as probability theory and information theory play a significant role in estimating this uncertainty. In this paper, we combine information theory with classical semi-direct visual odometry (SVO) and use the Jensen-Shannon divergence (JS divergence) instead of the Kullback-Leibler divergence (KL divergence) to estimate the uncertainty of depth. This yields a methodology better suited to SVO, which is explored here to improve the accuracy and robustness of mobile devices in unknown environments. The paper also aims to make efficient use of small, portable devices for localization and to provide a priori knowledge for later application scenarios. The JS divergence is therefore implemented in combination with SVO: it not only distinguishes outliers accurately but also converges quickly on inliers. The results show that, under the same computational conditions, SVO combined with the JS divergence can locate its state in the environment more accurately than the combination with the KL divergence.
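To make the JS-versus-KL comparison concrete, the sketch below evaluates both divergences numerically for two one-dimensional Gaussian depth distributions on a grid. It is an illustration only (the chosen means and variances are made up); the paper applies the idea inside SVO's depth-uncertainty estimation, not in this standalone form.

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def kl_divergence(p, q, dx):
    """Numerical KL(p || q) on a common grid; can blow up when p and q barely overlap."""
    mask = (p > 0) & (q > 0)
    return np.sum(p[mask] * np.log(p[mask] / q[mask])) * dx

def js_divergence(p, q, dx):
    """Jensen-Shannon divergence: symmetric and bounded by log(2)."""
    m = 0.5 * (p + q)
    return 0.5 * kl_divergence(p, m, dx) + 0.5 * kl_divergence(q, m, dx)

# Two hypothetical depth distributions for the same map point (metres)
x = np.linspace(0.0, 20.0, 4001)
dx = x[1] - x[0]
p = gaussian_pdf(x, mu=5.0, sigma=0.3)   # current depth estimate
q = gaussian_pdf(x, mu=6.5, sigma=0.8)   # new, noisier observation
print(kl_divergence(p, q, dx), js_divergence(p, q, dx))
```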
Affiliation(s)
- Chang Liu: School of Mechanical Engineering, Guizhou University, Guiyang 550025, China
- Jin Zhao (corresponding author, Tel.: +86-51-8362-3815): Key Laboratory of Advanced Manufacturing Technology, Ministry of Education, Guizhou University, Guiyang 550025, China
- Nianyi Sun, Qingrong Yang, Leilei Wang: School of Mechanical Engineering, Guizhou University, Guiyang 550025, China
10. Sareh P, Chermprayong P, Emmanuelli M, Nadeem H, Kovac M. Rotorigami: A rotary origami protective system for robotic rotorcraft. Sci Robot 2018; 3(22):eaah5228. PMID: 33141756; DOI: 10.1126/scirobotics.aah5228.
Abstract
Applications of aerial robots are progressively expanding into complex urban and natural environments. Despite remarkable advancements in the field, robotic rotorcraft are still drastically limited by the environments in which they operate. Obstacle detection and avoidance systems have functionality limitations and substantially add to the computational complexity of the onboard equipment of flying vehicles. Furthermore, they often cannot identify difficult-to-detect obstacles such as windows and wires. Robustness to physical contact with the environment is essential to mitigate these limitations and to continue mission completion. However, many current mechanical impact protection concepts are either not sufficiently effective or too heavy and cumbersome, severely limiting the flight time and the capability of flying in constrained and narrow spaces. Therefore, novel impact protection systems are needed to enable flying robots to navigate in confined or heavily cluttered environments easily, safely, and efficiently while minimizing the performance penalty caused by the protection method. Here, we report the development of a protection system for robotic rotorcraft consisting of a free-to-spin circular protector that is able to decouple impact yawing moments from the vehicle, combined with a cyclic origami impact cushion capable of reducing the peak impact force experienced by the vehicle. Experimental results using a sensor-equipped miniature quadrotor demonstrated the impact resilience effectiveness of the Rotary Origami Protective System (Rotorigami) for a variety of collision scenarios. We anticipate this work to be a starting point for the exploitation of origami structures in the passive or active impact protection of robotic vehicles.
Affiliation(s)
- Pooya Sareh: Aerial Robotics Laboratory, Department of Aeronautics, Imperial College London, South Kensington Campus, SW7 2AZ London, UK; Division of Industrial Design, School of Engineering, University of Liverpool, London Campus, EC2A 1AG London, UK
- Pisak Chermprayong, Marc Emmanuelli, Haris Nadeem, Mirko Kovac: Aerial Robotics Laboratory, Department of Aeronautics, Imperial College London, South Kensington Campus, SW7 2AZ London, UK
11.
Abstract
Object localization is an important task in the visual surveillance of scenes, and it has important applications in locating personnel and/or equipment in large open spaces such as a farm or a mine. Traditionally, object localization can be performed using the technique of stereo vision: using two fixed cameras for a moving object, or using a single moving camera for a stationary object. This research addresses the problem of determining the location of a moving object using only a single moving camera, and it does not make use of any prior information on the type of object nor the size of the object. Our technique makes use of a single camera mounted on a quadrotor drone, which flies in a specific pattern relative to the object in order to remove the depth ambiguity associated with their relative motion. In our previous work, we showed that with three images, we can recover the location of an object moving parallel to the direction of motion of the camera. In this research, we find that with four images, we can recover the location of an object moving linearly in an arbitrary direction. We evaluated our algorithm on over 70 image sequences of objects moving in various directions, and the results showed a much smaller depth error rate (less than 8.0% typically) than other state-of-the-art algorithms.
12. Shamwell EJ, Lindgren K, Leung S, Nothwang WD. Unsupervised Deep Visual-Inertial Odometry with Online Error Correction for RGB-D Imagery. IEEE Trans Pattern Anal Mach Intell 2020; 42:2478-2493. PMID: 30990417; DOI: 10.1109/tpami.2019.2909895.
Abstract
While numerous deep approaches to the problem of vision-aided localization have been recently proposed, systems operating in the real world will undoubtedly experience novel sensory states previously unseen even under the most prodigious training regimens. We address the localization problem with online error correction (OEC) modules that are trained to correct a vision-aided localization network's mistakes. We demonstrate the generalizability of the OEC modules and describe our unsupervised deep neural network approach to the fusion of RGB-D imagery with inertial measurements for absolute trajectory estimation. Our network, dubbed the Visual-Inertial-Odometry Learner (VIOLearner), learns to perform visual-inertial odometry (VIO) without inertial measurement unit (IMU) intrinsic parameters or the extrinsic calibration between an IMU and camera. The network learns to integrate IMU measurements and generate hypothesis trajectories which are then corrected online according to the Jacobians of scaled image projection errors with respect to spatial grids of pixel coordinates. We evaluate our network against state-of-the-art (SoA) VIO, visual odometry (VO), and visual simultaneous localization and mapping (VSLAM) approaches on the KITTI Odometry dataset as well as a micro aerial vehicle (MAV) dataset that we collected in the AirSim simulation environment. We demonstrate better than SoA translational localization performance against comparable SoA approaches on our evaluation sequences.
13. VIMO: A Visual-Inertial-Magnetic Navigation System Based on Non-Linear Optimization. Sensors 2020; 20:4386. PMID: 32781582; PMCID: PMC7472289; DOI: 10.3390/s20164386.
Abstract
Visual-inertial navigation systems are credited with superiority over both pure visual approaches and filtering ones. In spite of the high precision many state-of-the-art schemes have attained, yaw remains unobservable in those systems all the same. More accurate yaw estimation not only means more accurate attitude calculation but also leads to better position estimation. This paper presents a novel scheme that combines visual and inertial measurements as well as magnetic information for suppressing deviation in yaw. A novel method for initializing visual-inertial-magnetic odometers, which recovers the directions of magnetic north and gravity, the visual scalar factor, inertial measurement unit (IMU) biases etc., has been conceived, implemented, and validated. Based on non-linear optimization, a magnetometer cost function is incorporated into the overall optimization objective function as a yawing constraint among others. We have done extensive research and collected several datasets recorded in large-scale outdoor environments to certify the proposed system’s viability, robustness, and performance. Cogent experiments and quantitative comparisons corroborate the merits of the proposed scheme and the desired effect of the involvement of magnetic information on the overall performance.
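As a rough illustration of how a magnetometer term can act as a yaw constraint in such an objective, the sketch below compares the magnetic direction predicted from the current orientation estimate with the normalized body-frame reading. This is a generic residual under simplifying assumptions (known world magnetic direction, calibrated sensor), not the cost function actually used in the paper.

```python
import numpy as np

def magnetometer_residual(R_wb, m_world, m_meas_body):
    """Residual between the magnetic direction predicted from the current
    body-to-world rotation R_wb and the normalized magnetometer reading.
    Its horizontal component effectively constrains yaw."""
    m_pred_body = R_wb.T @ (m_world / np.linalg.norm(m_world))
    return m_pred_body - m_meas_body / np.linalg.norm(m_meas_body)

# The weighted squared norm of this residual would be added to the visual and
# inertial terms of the overall non-linear least-squares objective.
```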
14. Huang W, Liu H, Wan W. An Online Initialization and Self-Calibration Method for Stereo Visual-Inertial Odometry. IEEE Trans Robot 2020. DOI: 10.1109/tro.2019.2959161.
15. Monocular Visual SLAM Based on a Cooperative UAV-Target System. Sensors 2020; 20:3531. PMID: 32580347; PMCID: PMC7378774; DOI: 10.3390/s20123531.
Abstract
To obtain autonomy in applications that involve Unmanned Aerial Vehicles (UAVs), the capacity for self-localization and perception of the operational environment is a fundamental requirement. To this effect, GPS represents the typical solution for determining the position of a UAV operating in outdoor and open environments. On the other hand, GPS is not a reliable solution for other kinds of environments, such as cluttered and indoor ones. In this scenario, monocular SLAM (Simultaneous Localization and Mapping) methods represent a good alternative. A monocular SLAM system allows a UAV to operate in an a priori unknown environment using an onboard camera, simultaneously building a map of its surroundings while localizing itself with respect to this map. So, given the problem of an aerial robot that must follow a free-moving cooperative target in a GPS-denied environment, this work presents a monocular-based SLAM approach for cooperative UAV-target systems that addresses the state estimation problem of (i) the UAV position and velocity, (ii) the target position and velocity, and (iii) the landmark positions (the map). The proposed monocular SLAM system incorporates altitude measurements obtained from an altimeter. In this case, an observability analysis is carried out to show that the observability properties of the system are improved by incorporating altitude measurements. Furthermore, a novel technique to estimate the approximate depth of new visual landmarks is proposed, which takes advantage of the cooperative target. Additionally, a control system is proposed for maintaining a stable flight formation of the UAV with respect to the target. In this case, the stability of the control laws is proved using Lyapunov theory. The experimental results obtained from real data, as well as the results obtained from computer simulations, show that the proposed scheme can provide good performance.
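The altimeter aiding described above amounts to fusing a scalar height measurement into the state estimator. The sketch below shows what such a scalar update looks like in a generic Kalman-filter setting; it is illustrative only, with a hypothetical state ordering in which the third component is height, and is not the paper's EKF-SLAM formulation.

```python
import numpy as np

def altitude_update(x, P, z_meas, sigma_alt=0.1):
    """Scalar Kalman update with an altimeter reading z_meas, assuming the
    third state component is the vehicle height."""
    H = np.zeros((1, x.size))
    H[0, 2] = 1.0                              # measurement model h(x) = z
    S = H @ P @ H.T + sigma_alt ** 2           # innovation covariance
    K = P @ H.T / S                            # Kalman gain
    x_new = x + (K * (z_meas - x[2])).ravel()  # state correction
    P_new = (np.eye(x.size) - K @ H) @ P       # covariance update
    return x_new, P_new
```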
16. Lee D, Yu HW, Kim S, Yoon J, Lee K, Chai YJ, Choi JY, Kong HJ, Lee KE, Cho HS, Kim HC. Vision-based tracking system for augmented reality to localize recurrent laryngeal nerve during robotic thyroid surgery. Sci Rep 2020; 10:8437. PMID: 32439970; PMCID: PMC7242458; DOI: 10.1038/s41598-020-65439-6.
Abstract
We adopted a vision-based tracking system for augmented reality (AR), and evaluated whether it helped surgeons to localize the recurrent laryngeal nerve (RLN) during robotic thyroid surgery. We constructed an AR image of the trachea, common carotid artery, and RLN using CT images. During surgery, an AR image of the trachea and common carotid artery were overlaid on the physical structures after they were exposed. The vision-based tracking system was activated so that the AR image of the RLN followed the camera movement. After identifying the RLN, the distance between the AR image of the RLN and the actual RLN was measured. Eleven RLNs (9 right, 4 left) were tested. The mean distance between the RLN AR image and the actual RLN was 1.9 ± 1.5 mm (range 0.5 to 3.7). RLN localization using AR and vision-based tracking system was successfully applied during robotic thyroidectomy. There were no cases of RLN palsy. This technique may allow surgeons to identify hidden anatomical structures during robotic surgery.
Affiliation(s)
- Dongheon Lee: Interdisciplinary Program, Bioengineering Major, Graduate School, Seoul National University, Seoul, Korea
- Hyeong Won Yu, Jin Yoon, Keunchul Lee: Department of Surgery, Seoul National University Bundang Hospital, Seongnam-si, South Korea
- Young Jun Chai: Department of Surgery, Seoul National University Boramae Medical Center, Seoul, South Korea
- June Young Choi: Department of Surgery, Seoul National University Bundang Hospital, Seongnam-si, South Korea
- Hyoun-Joong Kong: Department of Biomedical Engineering, Chungnam National University College of Medicine, Daejeon, Korea
- Kyu Eun Lee: Department of Surgery, Seoul National University Hospital and College of Medicine, Seoul, South Korea
- Hwan Seong Cho: Department of Orthopaedic Surgery, Seoul National University Bundang Hospital, Seongnam-si, South Korea
- Hee Chan Kim: Department of Biomedical Engineering, College of Medicine and Institute of Medical and Biological Engineering, Medical Research Center, Seoul National University, Seoul, Korea
17. Kong FH, Zhao J, Zhao L, Huang S. Analysis of Minima for Geodesic and Chordal Cost for a Minimal 2-D Pose-Graph SLAM Problem. IEEE Robot Autom Lett 2020. DOI: 10.1109/lra.2019.2958492.
18. Martz J, Al-Sabban W, Smith RN. Survey of unmanned subterranean exploration, navigation, and localisation. IET Cyber-Systems and Robotics 2020. DOI: 10.1049/iet-csr.2019.0043.
Affiliation(s)
- Jeffrey Martz: Physics and Engineering, Fort Lewis College, Durango, CO, USA
- Wesam Al-Sabban: Computer and Information Systems, Umm Al Qura University, Makkah, Saudi Arabia
- Ryan N. Smith: Physics and Engineering, Fort Lewis College, Durango, CO, USA
19. Isop WA, Gebhardt C, Nägeli T, Fraundorfer F, Hilliges O, Schmalstieg D. High-Level Teleoperation System for Aerial Exploration of Indoor Environments. Front Robot AI 2019; 6:95. PMID: 33501110; PMCID: PMC7805862; DOI: 10.3389/frobt.2019.00095.
Abstract
Exploration of challenging indoor environments is a demanding task. While automation with aerial robots seems a promising solution, fully autonomous systems still struggle with high-level cognitive tasks and intuitive decision making. To facilitate automation, we introduce a novel teleoperation system with an aerial telerobot that is capable of handling all demanding low-level tasks. Motivated by the typical structure of indoor environments, the system creates an interactive scene topology in real time that reduces scene details and supports affordances. Thus, difficult high-level tasks can be effectively supervised by a human operator. To evaluate the effectiveness of our system during a real-world exploration mission, we conducted a user study. Despite being limited by real-world constraints, the results indicate that our system supports operators in indoor exploration better than a baseline system with traditional joystick control.
Affiliation(s)
- Werner Alexander Isop: Institute of Computer Graphics and Vision, Graz University of Technology, Graz, Austria
- Tobias Nägeli: Advanced Interactive Technologies Lab, ETH Zürich, Zurich, Switzerland
- Friedrich Fraundorfer: Institute of Computer Graphics and Vision, Graz University of Technology, Graz, Austria
- Otmar Hilliges: Advanced Interactive Technologies Lab, ETH Zürich, Zurich, Switzerland
- Dieter Schmalstieg: Institute of Computer Graphics and Vision, Graz University of Technology, Graz, Austria; VRVis Research Center, Vienna, Austria
20. Three-Dimensional Reconstruction Based on Visual SLAM of Mobile Robot in Search and Rescue Disaster Scenarios. Robotica 2019. DOI: 10.1017/s0263574719000675.
Abstract
Conventional simultaneous localization and mapping (SLAM) has concentrated on two-dimensional (2D) map building. To adapt it to urgent search and rescue (SAR) environments, it is necessary to combine fast and simple global 2D SLAM with local sub-maps of three-dimensional (3D) objects of interest (OOIs). The main novelty of the present work is a method for 3D OOI reconstruction based on a 2D map, thereby retaining the fast performance of the latter. A theory adapted to SAR environments is established, covering object identification, exploration area coverage (AC), and loop-closure detection of revisited spots. Proposed for the first time are image optical-flow calculation with a 2D/3D fusion method and an RGB-D (red, green, blue plus depth) transformation based on Joblove–Greenberg mathematics and OpenCV processing. The mathematical theories of optical-flow calculation and wavelet transformation are used for the first time to solve the robotic SAR SLAM problem. The present contributions cover two aspects: (i) mobile robots depend on planar distance estimation to build 2D maps quickly and to provide SAR exploration AC; (ii) 3D OOIs are reconstructed using the proposed innovative methods of RGB-D iterative closest points (RGB-ICPs) and a 2D/3D wavelet-transformation principle. Different mobile robots are used to conduct indoor and outdoor SAR SLAM. Both the SLAM and the SAR OOI detection are implemented in simulations and ground-truth experiments, which provide strong evidence for the proposed 2D/3D reconstruction SAR SLAM approaches adapted to post-disaster environments.
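Since the abstract mentions optical-flow calculation with OpenCV, a minimal dense-flow call of the kind commonly used in such 2D/3D fusion pipelines is sketched below. This is generic OpenCV usage with placeholder file names, not the authors' code.

```python
import cv2

# Hypothetical consecutive grayscale frames (file names are placeholders)
prev_gray = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
next_gray = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Dense Farneback optical flow; positional args are (prev, next, flow, pyr_scale,
# levels, winsize, iterations, poly_n, poly_sigma, flags). The result contains
# one 2D displacement vector per pixel.
flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)
dx, dy = flow[..., 0], flow[..., 1]
```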
21. Dupeyroux J, Serres JR, Viollet S. AntBot: A six-legged walking robot able to home like desert ants in outdoor environments. Sci Robot 2019; 4(27):eaau0307. DOI: 10.1126/scirobotics.aau0307.
22. Pandya H, Gaud A, Kumar G, Krishna KM. Instance invariant visual servoing framework for part-aware autonomous vehicle inspection using MAVs. J Field Robot 2019. DOI: 10.1002/rob.21859.
Affiliation(s)
- Ayush Gaud: IIIT-Hyderabad, Hyderabad, Telangana, India
23. Baca T, Stepan P, Spurny V, Hert D, Penicka R, Saska M, Thomas J, Loianno G, Kumar V. Autonomous landing on a moving vehicle with an unmanned aerial vehicle. J Field Robot 2019. DOI: 10.1002/rob.21858.
Affiliation(s)
- Tomas Baca, Petr Stepan, Vojtech Spurny, Daniel Hert, Robert Penicka, Martin Saska: Department of Cybernetics, Faculty of Electrical Engineering, Czech Technical University in Prague, Prague, Czech Republic
- Justin Thomas: GRASP Laboratory, University of Pennsylvania, Philadelphia, Pennsylvania
- Giuseppe Loianno: Department of ECE and MAE, Tandon School of Engineering, New York University, New York City, New York
- Vijay Kumar: GRASP Laboratory, University of Pennsylvania, Philadelphia, Pennsylvania
24. Feldman D, Danial Jeryes J, Hutterer A. Position Estimation of Moving Objects: Practical Provable Approximation. IEEE Robot Autom Lett 2019. DOI: 10.1109/lra.2019.2899430.
25. Warren M, Greeff M, Patel B, Collier J, Schoellig AP, Barfoot TD. There's No Place Like Home: Visual Teach and Repeat for Emergency Return of Multirotor UAVs During GPS Failure. IEEE Robot Autom Lett 2019. DOI: 10.1109/lra.2018.2883408.
26. Visual-Based SLAM Configurations for Cooperative Multi-UAV Systems with a Lead Agent: An Observability-Based Approach. Sensors 2018; 18:4243. PMID: 30513949; PMCID: PMC6308766; DOI: 10.3390/s18124243.
Abstract
In this work, the problem of cooperative visual-based SLAM for the class of multi-UAV systems that integrate a lead agent is addressed. In these kinds of systems, a team of aerial robots flying in formation must follow a dynamic lead agent, which can be another aerial robot, a vehicle, or even a human. A fundamental problem that must be addressed for these kinds of systems has to do with the estimation of the states of the aerial robots as well as the state of the lead agent. In this work, the use of a cooperative visual-based SLAM approach is studied in order to solve the above problem. In this case, three different system configurations are proposed and investigated by means of an intensive nonlinear observability analysis. In addition, a high-level control scheme is proposed that allows the formation of the UAVs to be controlled with respect to the lead agent. Several theoretical results are obtained, together with an extensive set of computer simulations, which are presented in order to numerically validate the proposal and to show that it can perform well under different circumstances (e.g., GPS-challenged environments). That is, the proposed method is able to operate robustly under many conditions, providing good position estimates of both the aerial vehicles and the lead agent.
27. Towards the Internet of Flying Robots: A Survey. Sensors 2018; 18:4038. PMID: 30463270; PMCID: PMC6263391; DOI: 10.3390/s18114038.
Abstract
The Internet of Flying Robots (IoFR) has received much attention in recent years thanks to the mobility and flexibility of flying robots. Although a lot of research has been done, there is a lack of a comprehensive survey on this topic. This paper analyzes several typical problems in designing IoFR for real applications, including wireless communication support, monitoring targets of interest, serving a wireless sensor network, and collaborating with ground robots. In particular, an overview of the existing publications on the coverage problem, connectivity of flying robots, energy capacity limitation, target searching, path planning, flying robot navigation with collision avoidance, etc., is presented. Beyond the discussion of these available approaches, some shortcomings of them are indicated and some promising future research directions are pointed out.
28. Spurný V, Báča T, Saska M, Pěnička R, Krajník T, Thomas J, Thakur D, Loianno G, Kumar V. Cooperative autonomous search, grasping, and delivering in a treasure hunt scenario by a team of unmanned aerial vehicles. J Field Robot 2018. DOI: 10.1002/rob.21816.
Affiliation(s)
- Vojtěch Spurný, Tomáš Báča, Martin Saska, Robert Pěnička: Department of Cybernetics, Faculty of Electrical Engineering, Czech Technical University, Prague, Czech Republic
- Tomáš Krajník: Department of Computer Science, Faculty of Electrical Engineering, Czech Technical University, Prague, Czech Republic
- Justin Thomas, Dinesh Thakur: GRASP Laboratory, University of Pennsylvania, Philadelphia, Pennsylvania
- Giuseppe Loianno: Department of ECE and MAE, Tandon School of Engineering, New York University, New York City, New York
- Vijay Kumar: GRASP Laboratory, University of Pennsylvania, Philadelphia, Pennsylvania
29.
Abstract
Safe and accurate navigation for autonomous trajectory tracking of quadrotors using monocular vision is addressed in this paper. A second-order sliding mode (2-SM) control algorithm is used to track desired trajectories, providing robustness against model uncertainties and external perturbations. The time-scale separation of the translational and rotational dynamics allows position controllers to be designed by giving a desired reference in roll and pitch angles, which is suitable for practical validation on quadrotors equipped with an internal attitude controller. A Lyapunov-based analysis proves the closed-loop stability of the system despite the presence of unknown external perturbations. Monocular vision fused with inertial measurements is used to estimate the vehicle's pose with respect to unstructured scenes. In addition, the distance to potential collisions is detected and computed using the sparse depth map that also comes from the vision algorithm. The proposed strategy is successfully tested in real-time experiments using a low-cost commercial quadrotor.
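For readers unfamiliar with second-order sliding modes, the super-twisting algorithm is a widely used 2-SM law. The sketch below illustrates that generic law for a scalar sliding variable; it is an illustration under assumed gains k1, k2 and time step dt, not the specific controller derived in the paper.

```python
import numpy as np

def super_twisting_step(s, v, k1, k2, dt):
    """One integration step of the super-twisting (second-order sliding mode)
    law:  u = -k1*sqrt(|s|)*sign(s) + v,  with  dv/dt = -k2*sign(s)."""
    u = -k1 * np.sqrt(abs(s)) * np.sign(s) + v
    v_next = v + dt * (-k2 * np.sign(s))
    return u, v_next

# Example: drive a sliding variable s (e.g., tracking error dynamics) toward zero
s, v = 0.5, 0.0
for _ in range(100):
    u, v = super_twisting_step(s, v, k1=1.5, k2=1.1, dt=0.01)
    s += 0.01 * u   # toy first-order plant, for illustration only
```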
30. Sutoh M, Iijima Y, Sakakieda Y, Wakabayashi S. Motion Modeling and Localization of Skid-Steering Wheeled Rover on Loose Terrain. IEEE Robot Autom Lett 2018. DOI: 10.1109/lra.2018.2861427.
31. Ozaslan T, Loianno G, Keller J, Taylor CJ, Kumar V. Spatio-Temporally Smooth Local Mapping and State Estimation Inside Generalized Cylinders With Micro Aerial Vehicles. IEEE Robot Autom Lett 2018. DOI: 10.1109/lra.2018.2861888.
32. Loianno G, Mulgaonkar Y, Brunner C, Ahuja D, Ramanandan A, Chari M, Diaz S, Kumar V. Autonomous flight and cooperative control for reconstruction using aerial robots powered by smartphones. Int J Rob Res 2018. DOI: 10.1177/0278364918774136.
Abstract
Advances in consumer electronics products and the technology seen in personal computers, digital cameras, and smartphones have led to the price/performance ratio of sensors and processors falling dramatically over the last decade. In particular, many consumer products are packaged with small cameras, gyroscopes, and accelerometers, all sensors that are needed for autonomous robots in GPS-denied environments. The low mass and small form factor make them particularly well suited for autonomous flight with small flying robots. In this work, we present the first fully autonomous smartphone-based system for quadrotors. We show how multiple quadrotors can be stabilized and controlled to achieve autonomous flight in indoor buildings, with application to smart homes, search and rescue, monitoring construction projects, and developing models for architecture design. In our work, the computation for sensing and control runs on an off-the-shelf smartphone, with all the software functionality embedded in a smartphone app. No additional sensors or processors are required for autonomous flight. We are also able to use multiple, coordinated autonomous aerial vehicles to improve the efficiency of our mission. In our framework, multiple vehicles are able to plan safe trajectories avoiding inter-robot collisions while concurrently building, in a cooperative manner, a three-dimensional map of the environment. The work allows any consumer with any number of robots equipped with smartphones to autonomously operate a team of quadrotors, even without GPS, by downloading our app, and to cooperatively build three-dimensional maps.
Affiliation(s)
- Giuseppe Loianno: New York University, Tandon School of Engineering, 6 MetroTech Center, Brooklyn, NY 11201, USA
- Yash Mulgaonkar: GRASP Lab, University of Pennsylvania, Walnut Street, Philadelphia, PA 19103, USA
- Chris Brunner, Dheeraj Ahuja, Murali Chari, Serafin Diaz: Qualcomm Technologies, Inc., 5775 Morehouse Drive, San Diego, USA
- Vijay Kumar: GRASP Lab, University of Pennsylvania, Walnut Street, Philadelphia, PA 19103, USA
33.
Affiliation(s)
- S. Suzuki: Department of Mechanical Engineering and Robotics, Shinshu University, Ueda-shi, Nagano, Japan
34. Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows. Remote Sensing 2018. DOI: 10.3390/rs10071091.
35. Weinstein A, Cho A, Loianno G, Kumar V. Visual Inertial Odometry Swarm: An Autonomous Swarm of Vision-Based Quadrotors. IEEE Robot Autom Lett 2018. DOI: 10.1109/lra.2018.2800119.
36. Loianno G, Spurny V, Thomas J, Baca T, Thakur D, Hert D, Penicka R, Krajnik T, Zhou A, Cho A, Saska M, Kumar V. Localization, Grasping, and Transportation of Magnetic Objects by a Team of MAVs in Challenging Desert-Like Environments. IEEE Robot Autom Lett 2018. DOI: 10.1109/lra.2018.2800121.
37. Trujillo JC, Munguia R, Guerra E, Grau A. Cooperative Monocular-Based SLAM for Multi-UAV Systems in GPS-Denied Environments. Sensors 2018; 18:1351. PMID: 29701722; PMCID: PMC5981868; DOI: 10.3390/s18051351.
Abstract
This work presents a cooperative monocular-based SLAM approach for multi-UAV systems that can operate in GPS-denied environments. The main contribution of the work is to show that, using visual information obtained from monocular cameras mounted onboard aerial vehicles flying in formation, the observability properties of the whole system are improved. This improvement is especially noticeable when compared with other related visual SLAM configurations. In order to improve the observability properties, some measurements of the relative distance between the UAVs are included in the system. These relative distances are also obtained from visual information. The proposed approach is theoretically validated by means of a nonlinear observability analysis. Furthermore, an extensive set of computer simulations is presented in order to validate the proposed approach. The numerical simulation results show that the proposed system is able to provide good position and orientation estimates of the aerial vehicles flying in formation.
Affiliation(s)
- Juan-Carlos Trujillo, Rodrigo Munguia: Department of Computer Science, CUCEI, University of Guadalajara, Guadalajara 44430, Mexico
- Edmundo Guerra, Antoni Grau: Department of Automatic Control, Technical University of Catalonia UPC, 08034 Barcelona, Spain
38.
Affiliation(s)
- B. Deniz Ilhan: Electrical & Systems Engineering, University of Pennsylvania, Philadelphia, Pennsylvania 19104
- Aaron M. Johnson: Mechanical Engineering, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213
- D. E. Koditschek: Electrical & Systems Engineering, University of Pennsylvania, Philadelphia, Pennsylvania 19104
39. Monocular SLAM System for MAVs Aided with Altitude and Range Measurements: a GPS-free Approach. J Intell Robot Syst 2018. DOI: 10.1007/s10846-018-0775-y.
40.
41. Realtime Edge Based Visual Inertial Odometry for MAV Teleoperation in Indoor Environments. J Intell Robot Syst 2017. DOI: 10.1007/s10846-017-0670-y.
42. Perez-Grau FJ, Ragel R, Caballero F, Viguria A, Ollero A. An architecture for robust UAV navigation in GPS-denied areas. J Field Robot 2017. DOI: 10.1002/rob.21757.
Affiliation(s)
- Ricardo Ragel, Fernando Caballero: Department of System Engineering and Automation, University of Seville, Sevilla, Spain
- Antidio Viguria: CATEC, Center for Advanced Aerospace Technologies, Sevilla, Spain
- Anibal Ollero: Department of System Engineering and Automation, University of Seville, Sevilla, Spain
43. Recchiuto CT, Sgorbissa A. Post-disaster assessment with unmanned aerial vehicles: A survey on practical implementations and research approaches. J Field Robot 2017. DOI: 10.1002/rob.21756.
44. Perez-Grau FJ, Caballero F, Viguria A, Ollero A. Multi-sensor three-dimensional Monte Carlo localization for long-term aerial robot navigation. Int J Adv Robot Syst 2017. DOI: 10.1177/1729881417732757.
Abstract
This article presents an enhanced version of the Monte Carlo localization algorithm, commonly used for robot navigation in indoor environments, which is suitable for aerial robots moving in a three-dimensional environment and makes use of a combination of measurements from an RGB-D (red, green, blue plus depth) sensor, distances to several radio tags placed in the environment, and an inertial measurement unit. The approach is demonstrated with an unmanned aerial vehicle flying for 10 min indoors and validated with a very precise motion tracking system. The approach has been implemented using the Robot Operating System (ROS) framework and works smoothly on a regular i7 computer, leaving plenty of computational capacity for other navigation tasks such as motion planning or control.
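The radio-tag ranging part of such a fused measurement model can be illustrated with a standard particle-weighting step. The sketch below uses a Gaussian range-noise model and made-up tag positions; it is a generic Monte Carlo localization fragment, not the article's implementation.

```python
import numpy as np

def weight_particles(particles, tag_positions, measured_ranges, sigma=0.5):
    """Re-weight 3D particle positions by how well their distances to known
    radio tags match the measured ranges (Gaussian range-noise model)."""
    weights = np.ones(len(particles))
    for tag, r_meas in zip(tag_positions, measured_ranges):
        r_pred = np.linalg.norm(particles - tag, axis=1)
        weights *= np.exp(-0.5 * ((r_pred - r_meas) / sigma) ** 2)
    return weights / np.sum(weights)

# Example with two hypothetical tags and 500 random particles
particles = np.random.uniform(0, 10, size=(500, 3))
tags = [np.array([0.0, 0.0, 2.0]), np.array([10.0, 5.0, 2.0])]
ranges = [6.1, 4.8]
w = weight_particles(particles, tags, ranges)
```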
Affiliation(s)
- Fernando Caballero: Department of System Engineering and Automation, University of Seville, Seville, Spain
- Antidio Viguria: Center for Advanced Aerospace Technologies (CATEC), La Rinconada, Sevilla, Spain
- Anibal Ollero: Department of System Engineering and Automation, University of Seville, Seville, Spain
45. Jung S, Cho S, Lee D, Lee H, Shim DH. A direct visual servoing-based framework for the 2016 IROS Autonomous Drone Racing Challenge. J Field Robot 2017. DOI: 10.1002/rob.21743.
Affiliation(s)
- Sunggoo Jung, Sungwook Cho, Dasol Lee, Hanseob Lee, David Hyunchul Shim: Unmanned Systems Research Group, Department of Aerospace Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
46. Ozaslan T, Loianno G, Keller J, Taylor CJ, Kumar V, Wozencraft JM, Hood T. Autonomous Navigation and Mapping for Inspection of Penstocks and Tunnels With MAVs. IEEE Robot Autom Lett 2017. DOI: 10.1109/lra.2017.2699790.
47. Guerrero-Sánchez ME, Mercado-Ravell DA, Lozano R, García-Beltrán CD. Swing-attenuation for a quadrotor transporting a cable-suspended payload. ISA Trans 2017; 68:433-449. PMID: 28209426; DOI: 10.1016/j.isatra.2017.01.027.
Abstract
This paper presents the problem of safe and fast transportation of packages by a quadrotor-type Unmanned Aerial Vehicle (UAV). A mathematical model and a control strategy for a special class of underactuated mechanical systems, composed of a quadrotor transporting a cable-suspended payload, are proposed. The Euler-Lagrange formulation is used to obtain the dynamic model of the system, where the integrated dynamics of the quadrotor, cable, and payload are considered. An Interconnection and Damping Assignment Passivity-Based Control (IDA-PBC) scheme is chosen because of its inherent robustness against parametric uncertainty and unmodeled dynamics. Two cases are considered to obtain two different control laws: in the first case, the designed control law depends on the swing angle of the payload; in the second case, the control law does not depend on it. The control objective is to transport the payload from point to point, with swing reduction along the trajectory. Experimental results using monocular vision-based navigation are shown to evaluate the proposed control law.
Affiliation(s)
- M. Eusebia Guerrero-Sánchez: Sorbonne Universités, UTC CNRS UMR 7253 Heudiasyc, Compiègne, France; Centro Nacional de Investigación y Desarrollo Tecnológico, Interior Internado Palmira S/N, Col. Palmira, Cuernavaca, Morelos 62490, Mexico
- Rogelio Lozano: Sorbonne Universités, UTC CNRS UMR 7253 Heudiasyc, Compiègne, France
- C. Daniel García-Beltrán: Centro Nacional de Investigación y Desarrollo Tecnológico, Interior Internado Palmira S/N, Col. Palmira, Cuernavaca, Morelos 62490, Mexico
48. Loianno G, Brunner C, McGrath G, Kumar V. Estimation, Control, and Planning for Aggressive Flight With a Small Quadrotor With a Single Camera and IMU. IEEE Robot Autom Lett 2017. DOI: 10.1109/lra.2016.2633290.
49. Optical-Aided Aircraft Navigation using Decoupled Visual SLAM with Range Sensor Augmentation. J Intell Robot Syst 2017. DOI: 10.1007/s10846-016-0457-6.
50. Munguia R, Urzua S, Grau A. Delayed Monocular SLAM Approach Applied to Unmanned Aerial Vehicles. PLoS One 2016; 11:e0167197. PMID: 28033385; PMCID: PMC5198979; DOI: 10.1371/journal.pone.0167197.
Abstract
In recent years, many researchers have addressed the issue of making Unmanned Aerial Vehicles (UAVs) more and more autonomous. In this context, the state estimation of the vehicle position is a fundamental necessity for any application involving autonomy. However, the problem of position estimation cannot be solved in some scenarios, even when a GPS signal is available, for instance, in applications requiring precision manoeuvres in a complex environment. Therefore, some additional sensory information should be integrated into the system in order to improve accuracy and robustness. In this work, a novel vision-based simultaneous localization and mapping (SLAM) method with application to unmanned aerial vehicles is proposed. One of the contributions of this work is to design and develop a novel technique for estimating feature depth, which is based on a stochastic triangulation technique. In the proposed method the camera is mounted over a servo-controlled gimbal that counteracts the changes in attitude of the quadcopter. Under this assumption, the overall problem is simplified and focuses on the position estimation of the aerial vehicle. Also, the tracking of visual features is made easier due to the stabilized video. Another contribution of this work is to demonstrate that the integration of very noisy GPS measurements into the system for an initial short period of time is enough to initialize the metric scale. The performance of the proposed method is validated by means of experiments with real data carried out in unstructured outdoor environments. A comparative study shows that, when compared with related methods, the proposed approach performs better in terms of accuracy and computational time.
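The feature-depth idea can be illustrated with the usual two-ray triangulation that stochastic depth-initialization schemes build on. The sketch below is a simplified geometric illustration with made-up camera poses, not the paper's delayed-initialization method.

```python
import numpy as np

def triangulate_depth(c1, d1, c2, d2):
    """Least-squares depth of a landmark seen along unit bearing vectors d1, d2
    from camera centres c1, c2 (midpoint of the closest points on both rays)."""
    # Solve for ray parameters t1, t2 minimizing |(c1 + t1*d1) - (c2 + t2*d2)|
    A = np.stack([d1, -d2], axis=1)
    t1, t2 = np.linalg.lstsq(A, c2 - c1, rcond=None)[0]
    p1, p2 = c1 + t1 * d1, c2 + t2 * d2
    point = 0.5 * (p1 + p2)
    return np.linalg.norm(point - c1), point   # depth from the first camera

# Example: two views of the same landmark from camera centres 1 m apart
d2 = np.array([-0.1, 0.0, 1.0])
d2 /= np.linalg.norm(d2)
depth, X = triangulate_depth(np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]),
                             np.array([1.0, 0.0, 0.0]), d2)
```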
Affiliation(s)
- Rodrigo Munguia (corresponding author), Sarquis Urzua: Department of Computer Science, CUCEI, University of Guadalajara, Guadalajara, México
- Antoni Grau (corresponding author): Automatic Control Department, Technical University of Catalonia, 08034 Barcelona, Spain