1. Chen N, Kong F, Xu W, Cai Y, Li H, He D, Qin Y, Zhang F. A self-rotating, single-actuated UAV with extended sensor field of view for autonomous navigation. Sci Robot 2023;8:eade4538. PMID: 36921018. DOI: 10.1126/scirobotics.ade4538.
Abstract
Uncrewed aerial vehicles (UAVs) rely heavily on visual sensors to perceive obstacles and explore environments. Current UAVs are limited in both perception capability and task efficiency because of a small sensor field of view (FoV). One solution could be to leverage self-rotation in UAVs to extend the sensor FoV without consuming extra power. This natural mechanism, induced by the counter-torque of the UAV motor, has rarely been exploited by existing autonomous UAVs because of the difficulties in design and control due to highly coupled and nonlinear dynamics and the challenges in navigation brought by the high-rate self-rotation. Here, we present powered-flying ultra-underactuated LiDAR (light detection and ranging) sensing aerial robot (PULSAR), an agile and self-rotating UAV whose three-dimensional position is fully controlled by actuating only one motor to obtain the required thrust and moment. The use of a single actuator effectively reduces the energy loss in powered flights. Consequently, PULSAR consumes 26.7% less power than the benchmarked quadrotor with the same total propeller disk area and avionic payloads while retaining a good level of agility. Augmented by an onboard LiDAR sensor, PULSAR can perform autonomous navigation in unknown environments and detect both static and dynamic obstacles in panoramic views without any external instruments. We report the experiments of PULSAR in environment exploration and multidirectional dynamic obstacle avoidance with the extended FoV via self-rotation, which could lead to increased perception capability, task efficiency, and flight safety.
Affiliation(s)
- Nan Chen
- Department of Mechanical Engineering, University of Hong Kong, Pokfulam, Hong Kong, China
- Fanze Kong
- Department of Mechanical Engineering, University of Hong Kong, Pokfulam, Hong Kong, China
- Wei Xu
- Department of Mechanical Engineering, University of Hong Kong, Pokfulam, Hong Kong, China
- Yixi Cai
- Department of Mechanical Engineering, University of Hong Kong, Pokfulam, Hong Kong, China
- Haotian Li
- Department of Mechanical Engineering, University of Hong Kong, Pokfulam, Hong Kong, China
- Dongjiao He
- Department of Mechanical Engineering, University of Hong Kong, Pokfulam, Hong Kong, China
- Youming Qin
- Department of Mechanical Engineering, University of Hong Kong, Pokfulam, Hong Kong, China
- Fu Zhang
- Department of Mechanical Engineering, University of Hong Kong, Pokfulam, Hong Kong, China
2. Ochoa E, Gracias N, Istenič K, Bosch J, Cieślak P, García R. Collision Detection and Avoidance for Underwater Vehicles Using Omnidirectional Vision. Sensors (Basel) 2022;22:5354. PMID: 35891038. PMCID: PMC9315794. DOI: 10.3390/s22145354.
Abstract
Exploration of marine habitats is one of the key pillars of underwater science, and it often involves collecting images at close range. Because acquiring imagery close to the seabed involves multiple hazards, the safety of underwater vehicles, such as remotely operated vehicles (ROVs) and autonomous underwater vehicles (AUVs), is often compromised. Obstacle avoidance in underwater environments is commonly performed with acoustic sensors, which cannot be used reliably at very short distances, so a high level of attention is required from the operator to avoid damaging the robot. Therefore, developing capabilities such as advanced assisted mapping, spatial awareness and safety, and user immersion in confined environments is an important research area for human-operated underwater robotics. In this paper, we present a novel approach that provides an ROV with capabilities for navigation in complex environments. By leveraging the ability of omnidirectional multi-camera systems to provide a comprehensive view of the environment, we create a 360° real-time point cloud of nearby objects or structures within a visual SLAM framework. We also develop a strategy to assess the risk of nearby obstacles. We show that the system can use this risk information to generate warnings that allow the robot to perform evasive maneuvers when approaching dangerous obstacles in real-world scenarios. This system is a first step towards a comprehensive pilot-assistance system that will enable inexperienced pilots to operate vehicles in complex and cluttered environments.
3. Vargas M, Vivas C, Rubio FR, Ortega MG. Flying Chameleons: A New Concept for Minimum-Deployment, Multiple-Target Tracking Drones. Sensors 2022;22:2359. PMID: 35336530. PMCID: PMC8955232. DOI: 10.3390/s22062359.
Abstract
In this paper, we aim to open up new perspectives in the field of autonomous aerial surveillance and target tracking systems by exploring an alternative that, surprisingly, and to the best of the authors’ knowledge, has not yet been addressed in that context by the research community. It can be summarized by the following two questions. First, under the scope of such applications, what are the implications and possibilities offered by mounting several steerable cameras onboard each aerial agent? Second, how can optimization algorithms benefit from this new framework in their attempt to provide more efficient and cost-effective solutions in these areas? The paper presents the idea as an additional degree of freedom to be exploited, one that can enable more efficient alternatives in the deployment of such applications. As an initial approach, we address the problem of optimally positioning a single agent, equipped with several onboard tracking cameras with different or variable focal lengths, with respect to a set of targets. As a consequence of this allowed heterogeneity in focal lengths, the notion of distance needs to be adapted into a notion of optical range, since the agent can trade longer Euclidean distances for correspondingly longer focal lengths. Moreover, the proposed optimization indices try to balance, in an optimal way, the verticality of the viewpoints with the optical range to the targets. Under these premises, several positioning strategies are proposed and comparatively evaluated.
Affiliation(s)
- Manuel Vargas
- Department of Automation and Systems Engineering, University of Seville, Camino de los Descubrimientos s/n, 41092 Seville, Spain
- Correspondence: Tel. +34-954-486-036
- Carlos Vivas
- Department of Automation and Systems Engineering, University of Seville, Camino de los Descubrimientos s/n, 41092 Seville, Spain
- Francisco R. Rubio
- Department of Automation and Systems Engineering, University of Seville, Camino de los Descubrimientos s/n, 41092 Seville, Spain
- Laboratory of Engineering for Energy and Environmental Sustainability, University of Seville, Camino de los Descubrimientos s/n, 41092 Seville, Spain
- Manuel G. Ortega
- Department of Automation and Systems Engineering, University of Seville, Camino de los Descubrimientos s/n, 41092 Seville, Spain
4. Kangunde V, Jamisola RS, Theophilus EK. A review on drones controlled in real-time. Int J Dyn Control 2021;9:1832-1846. PMID: 33425650. PMCID: PMC7785038. DOI: 10.1007/s40435-020-00737-5.
Abstract
This paper presents a review of the literature on drones, or unmanned aerial vehicles, that are controlled in real time. Real-time control systems provide a more deterministic response, such that tasks are guaranteed to be completed within a specified time. This characteristic is highly desirable for drones, which are now required to perform increasingly sophisticated tasks. The reviewed materials were chosen to highlight drones that are controlled in real time and to cover technologies used in different drone applications. Progress has been made in the development of highly maneuverable drones for applications such as monitoring, aerial mapping, military combat, and agriculture. The control of such highly maneuverable vehicles presents challenges such as real-time response, workload management, and complex control. This paper discusses real-time aspects of drone control as well as possible implementations of real-time flight control systems to enhance drone performance.
5. Wang F, Zhang Z, Wang R, Zeng X, Yang X, Lv S, Zhang F, Xue D, Yan J, Zhang X. Distortion measurement of optical system using phase diffractive beam splitter. Opt Express 2019;27:29803-29816. PMID: 31684237. DOI: 10.1364/OE.27.029803.
Abstract
Traditional methods for measuring the distortion of large-aperture optical systems are time-consuming and ineffective because they require each field of view to be measured individually using a high-precision rotating platform. In this study, a new method that uses a phase diffractive beam splitter (DBS) is proposed to measure the distortion of optical systems, with great potential for application to large-aperture systems. The proposed method is highly accurate and extremely economical. A high-precision calibration method is proposed to measure the angular distribution of the DBS. An uncertainty analysis of the factors involved in the measurement process was performed to demonstrate the low level of error in the measurement methodology. Results show that high-precision measurements of focal length and distortion were achieved with high efficiency. The proposed method can be used for large-aperture wide-angle optical systems such as those used in aerial mapping applications.
6. Computer Vision in Autonomous Unmanned Aerial Vehicles—A Systematic Mapping Study. Appl Sci 2019;9:3196. DOI: 10.3390/app9153196.
Abstract
Personal assistant robots provide novel technological solutions for monitoring people’s activities and helping them in their daily lives. In this sense, unmanned aerial vehicles (UAVs) can also provide a present and future model for assistant robots. To develop aerial assistants, it is necessary to address the issue of autonomous navigation based on visual cues. Indeed, navigating autonomously is still a challenge in which computer vision technologies tend to play an outstanding role. Thus, the design of vision systems and algorithms for autonomous UAV navigation and flight control has become a prominent research field in the last few years. In this paper, a systematic mapping study is carried out in order to obtain a general view of this subject. The study provides an extensive analysis of papers that address computer vision as regards the following autonomous UAV vision-based tasks: (1) navigation, (2) control, (3) tracking or guidance, and (4) sense-and-avoid. The works considered in the mapping study—a total of 144 papers from an initial set of 2081—have been classified under the four categories above. Moreover, the type of UAV, the features of the vision systems employed, and the validation procedures are also analyzed. The results obtained make it possible to draw conclusions about the research focuses, which UAV platforms are mostly used in each category, which vision systems are most frequently employed, and which types of tests are usually performed to validate the proposed solutions. The results of this systematic mapping study demonstrate the scientific community’s growing interest in the development of vision-based solutions for autonomous UAVs. Moreover, they will make it possible to study the feasibility and characteristics of future UAVs taking the role of personal assistants.
7. Zhang C, Liu Y, Wang F, Xia Y, Zhang W. VINS-MKF: a tightly-coupled multi-keyframe visual-inertial odometry for accurate and robust state estimation. Sensors 2018;18:4036. PMID: 30463261. PMCID: PMC6263887. DOI: 10.3390/s18114036.
Abstract
State estimation is crucial for robot autonomy, and visual odometry (VO) has received significant attention in the robotics field because it can provide accurate state estimation. However, the accuracy and robustness of most existing VO methods degrade in complex conditions because of the limited field of view (FOV) of the camera used. In this paper, we present a novel tightly-coupled multi-keyframe visual-inertial odometry (called VINS-MKF) that provides accurate and robust state estimation for robots in indoor environments. We first extend monocular ORB-SLAM (Oriented FAST and Rotated BRIEF Simultaneous Localization and Mapping) to multiple fisheye cameras together with an inertial measurement unit (IMU) to provide large-FOV visual-inertial information. Then, a novel VO framework is proposed to ensure the efficiency of state estimation by adopting a GPU (Graphics Processing Unit)-based feature extraction method and parallelizing the feature extraction thread, which is separated from the tracking and mapping threads. Finally, a nonlinear optimization method is formulated for accurate state estimation, characterized as multi-keyframe, tightly-coupled, and visual-inertial. In addition, accurate initialization and a novel MultiCol-IMU camera model are incorporated to further improve the performance of VINS-MKF. To the best of our knowledge, this is the first tightly-coupled multi-keyframe visual-inertial odometry that fuses measurements from multiple fisheye cameras and an IMU. The performance of VINS-MKF was validated by extensive experiments on home-made datasets, showing improved accuracy and robustness over the state-of-the-art VINS-Mono.
Affiliation(s)
- Chaofan Zhang
- Institute of Applied Technology, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei 230031, China
- Science Island Branch of Graduate School, University of Science and Technology of China, Hefei 230026, China
- Yong Liu
- Institute of Applied Technology, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei 230031, China
- Fan Wang
- Institute of Applied Technology, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei 230031, China
- Science Island Branch of Graduate School, University of Science and Technology of China, Hefei 230026, China
- Yingwei Xia
- Institute of Applied Technology, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei 230031, China
- Wen Zhang
- Institute of Applied Technology, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei 230031, China
8. Labbé M, Michaud F. RTAB-Map as an open-source lidar and visual simultaneous localization and mapping library for large-scale and long-term online operation. J Field Robot 2018. DOI: 10.1002/rob.21831.
Affiliation(s)
- Mathieu Labbé
- Department of Electrical Engineering and Computer Engineering, Interdisciplinary Institute of Technological Innovation (3IT), Université de Sherbrooke, Sherbrooke, Québec, Canada
- François Michaud
- Department of Electrical Engineering and Computer Engineering, Interdisciplinary Institute of Technological Innovation (3IT), Université de Sherbrooke, Sherbrooke, Québec, Canada
9. Liu J, Hao K, Ding Y, Yang S, Gao L. Multi-State Self-Learning Template Library Updating Approach for Multi-Camera Human Tracking in Complex Scenes. Int J Pattern Recognit Artif Intell 2017. DOI: 10.1142/s0218001417550163.
Abstract
In multi-camera video tracking, the tracking scene and target appearance can become complex, and current tracking methods use entirely different databases and evaluation criteria. Here, for the first time to our knowledge, we present a universally applicable template library updating approach for multi-camera human tracking, called multi-state self-learning template library updating (RS-TLU), which can be applied in different multi-camera tracking algorithms. In RS-TLU, self-learning divides tracking results into three states, namely steady state, gradually changing state, and suddenly changing state, based on the similarity of objects to historical and instantaneous templates, because every state requires a different decision strategy. Subsequently, the tracking results for each state are judged and learned using motion and occlusion information. Finally, the correct template is chosen in the robust template library. We investigate the effectiveness of the proposed method using three databases and 42 test videos, counting the numbers of false positives, false matches, and missed targets. Experimental results for 15 complex scenes demonstrate that, in comparison with state-of-the-art algorithms, our RS-TLU approach effectively increases the number of correct target templates and reduces the number of similar and erroneous templates in the template library.
Affiliation(s)
- Jian Liu
- Engineering Research Center of Digitized Textile & Apparel Technology, Ministry of Education, College of Information Science and Technology, Donghua University, Shanghai 201620, P. R. China
- Kuangrong Hao
- Engineering Research Center of Digitized Textile & Apparel Technology, Ministry of Education, College of Information Science and Technology, Donghua University, Shanghai 201620, P. R. China
- Yongsheng Ding
- Engineering Research Center of Digitized Textile & Apparel Technology, Ministry of Education, College of Information Science and Technology, Donghua University, Shanghai 201620, P. R. China
- Shiyu Yang
- Engineering Research Center of Digitized Textile & Apparel Technology, Ministry of Education, College of Information Science and Technology, Donghua University, Shanghai 201620, P. R. China
- Lei Gao
- CSIRO, Private Mail Bag 2, Glen Osmond, SA 5064, Australia
10.
Abstract
This paper solves the classical problem of simultaneous localization and mapping (SLAM) in a fashion that avoids linearized approximations altogether. Based on the creation of virtual synthetic measurements, the algorithm uses a linear time-varying Kalman observer, bypassing errors and approximations brought by the linearization process in traditional extended Kalman filtering SLAM. Convergence rates of the algorithm are established using contraction analysis. Different combinations of sensor information can be exploited, such as bearing measurements, range measurements, optical flow, or time-to-contact. SLAM-DUNK, a more advanced version of the algorithm in global coordinates, exploits the conditional independence property of the SLAM problem, decoupling the covariance matrices between different landmarks and reducing computational complexity to O(n). As illustrated in simulations, the proposed algorithm can solve SLAM problems in both 2D and 3D scenarios with guaranteed convergence rates in a full nonlinear context.
Affiliation(s)
- Feng Tan
- Nonlinear Systems Laboratory, Massachusetts Institute of Technology, USA
- Winfried Lohmiller
- Nonlinear Systems Laboratory, Massachusetts Institute of Technology, USA
11. Liu J, Hao K, Ding Y, Yang S, Gao L. Moving human tracking across multi-camera based on artificial immune random forest and improved colour-texture feature fusion. Imaging Sci J 2017. DOI: 10.1080/13682199.2017.1319608.
12. Kanellakis C, Nikolakopoulos G. Survey on Computer Vision for UAVs: Current Developments and Trends. J Intell Robot Syst 2017. DOI: 10.1007/s10846-017-0483-z.
13. Tribou MJ, Harmat A, Wang DW, Sharf I, Waslander SL. Multi-camera parallel tracking and mapping with non-overlapping fields of view. Int J Rob Res 2015. DOI: 10.1177/0278364915571429.
Abstract
A novel real-time pose estimation system is presented for solving the visual simultaneous localization and mapping problem using a rigid set of central cameras arranged such that there is no overlap in their fields-of-view. A new parameterization for point feature position using a spherical coordinate update is formulated which isolates system parameters dependent on global scale, allowing the shape parameters of the system to converge despite the scale remaining uncertain. Furthermore, an initialization scheme is proposed from which the optimization will converge accurately using only the measurements from the cameras at the first time step. The algorithm is implemented and verified in experiments with a camera cluster constructed using multiple perspective cameras mounted on a multirotor aerial vehicle and augmented with tracking markers to collect high-precision ground-truth motion measurements from an optical indoor positioning system. The accuracy and performance of the proposed pose estimation system are confirmed for various motion profiles in both indoor and challenging outdoor environments, despite no overlap in the camera fields-of-view.
Affiliation(s)
- Michael J. Tribou
- Department of Mechanical and Mechatronics Engineering, University of Waterloo, Waterloo, ON, Canada
- Adam Harmat
- Department of Mechanical Engineering, McGill University, Montreal, QC, Canada
- David W.L. Wang
- Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, ON, Canada
- Inna Sharf
- Department of Mechanical Engineering, McGill University, Montreal, QC, Canada
- Steven L. Waslander
- Department of Mechanical and Mechatronics Engineering, University of Waterloo, Waterloo, ON, Canada