1. Peksa J, Mamchur D. A Review on the State of the Art in Copter Drones and Flight Control Systems. Sensors (Basel) 2024; 24:3349. PMID: 38894139; PMCID: PMC11174836; DOI: 10.3390/s24113349. Received 21 Apr 2024; revised 18 May 2024; accepted 20 May 2024.
Abstract
This paper presents an overview of the state of the art in copter drones and their components. It begins with an introduction to unmanned aerial vehicles in general and their main types, then focuses on multirotor drones as the most attractive for individual and research use. The paper analyzes various multirotor drone types, their construction, typical areas of application, and the technologies underlying their design. Finally, it examines current challenges and future directions in drone system development, emerging technologies, and future research topics in the area. The paper concludes by highlighting key challenges that must be addressed before drone technologies can be widely adopted in everyday life. By providing an up-to-date survey of the state of the art in copter drone technology, this paper offers valuable insights into where the field is heading in terms of progress and innovation.
Affiliation(s)
- Janis Peksa: Information Technologies Department, Turiba University, Graudu Street 68, LV-1058 Riga, Latvia; Institute of Information Technology, Riga Technical University, Kalku Street 1, LV-1658 Riga, Latvia
- Dmytro Mamchur: Information Technologies Department, Turiba University, Graudu Street 68, LV-1058 Riga, Latvia; Computer Engineering and Electronics Department, Kremenchuk Mykhailo Ostrohradskyi National University, Universitetska Street 20, 39600 Kremenchuk, Ukraine
2. Vera-Yanez D, Pereira A, Rodrigues N, Molina JP, García AS, Fernández-Caballero A. Vision-Based Flying Obstacle Detection for Avoiding Midair Collisions: A Systematic Review. J Imaging 2023; 9:194. PMID: 37888301; PMCID: PMC10607331; DOI: 10.3390/jimaging9100194. Received 31 Jul 2023; revised 11 Sep 2023; accepted 21 Sep 2023.
Abstract
This paper presents a systematic review of articles on computer-vision-based flying obstacle detection, with a focus on midair collision avoidance. Publications up to 2022 were searched in the Scopus, IEEE, ACM, MDPI, and Web of Science databases. Of the 647 publications initially retrieved, 85 were selected and examined. The results show increasing interest in this topic, especially in relation to object detection and tracking. Our study hypothesizes that widespread access to commercial drones, improvements in single-board computers, and their compatibility with computer vision libraries have contributed to the growing number of publications. The review also shows that the proposed algorithms are mainly tested using simulation software and flight simulators; only 26 papers report testing with physical flying vehicles. This systematic review highlights further gaps to be addressed in future work. Several identified challenges relate to increasing the success rate of threat detection and to testing solutions in complex scenarios.
Affiliation(s)
- Daniel Vera-Yanez: Albacete Research Institute of Informatics, Universidad de Castilla-La Mancha, 02071 Albacete, Spain
- António Pereira: Computer Science and Communications Research Centre, School of Technology and Management, Polytechnic Institute of Leiria, 2411-901 Leiria, Portugal; Institute of New Technologies—Leiria Office, INOV INESC INOVAÇÃO, Morro do Lena—Alto do Vieiro, 2411-901 Leiria, Portugal
- Nuno Rodrigues: Computer Science and Communications Research Centre, School of Technology and Management, Polytechnic Institute of Leiria, 2411-901 Leiria, Portugal
- José Pascual Molina: Albacete Research Institute of Informatics, Universidad de Castilla-La Mancha, 02071 Albacete, Spain; Departamento de Sistemas Informáticos, Universidad de Castilla-La Mancha, 02071 Albacete, Spain
- Arturo S. García: Albacete Research Institute of Informatics, Universidad de Castilla-La Mancha, 02071 Albacete, Spain; Departamento de Sistemas Informáticos, Universidad de Castilla-La Mancha, 02071 Albacete, Spain
- Antonio Fernández-Caballero: Albacete Research Institute of Informatics, Universidad de Castilla-La Mancha, 02071 Albacete, Spain; Departamento de Sistemas Informáticos, Universidad de Castilla-La Mancha, 02071 Albacete, Spain
3. Ottakath N, Al-Maadeed S. Vehicle Instance Segmentation Polygonal Dataset for a Private Surveillance System. Sensors (Basel) 2023; 23:3642. PMID: 37050701; PMCID: PMC10098633; DOI: 10.3390/s23073642. Received 31 Jan 2023; revised 19 Mar 2023; accepted 20 Mar 2023.
Abstract
Vehicle identification and re-identification are essential tools for traffic surveillance. However, with cameras at every street corner, there is a need for privacy-preserving surveillance. Automated surveillance can be achieved through computer vision tasks such as vehicle segmentation, classification of the vehicle's make and model, and license plate detection. To obtain a unique representation of every vehicle on the road from just the region of interest, instance segmentation is applied. With the frontal part of the vehicle segmented for privacy, the vehicle make is identified along with the license plate. To this end, a dataset is annotated with a polygonal bounding box of the vehicle's frontal region and license plate localization. The state-of-the-art Mask R-CNN method is used to identify the best-performing model. Further, data augmentation using multiple techniques is evaluated for better generalization of the dataset. The results showed improved classification as well as a high mAP on the dataset compared to previous approaches. A classification accuracy of 99.2% was obtained, and segmentation was achieved with a high mAP of 99.67%. Of the data augmentation approaches employed to balance and generalize the dataset, the mosaic-tiled approach produced the highest accuracy.
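The mosaic-tiled augmentation that this abstract credits with the highest accuracy can be sketched in simplified form. This is an illustrative reconstruction, not the authors' pipeline: the `mosaic_tile` helper and its parameters are hypothetical, and a real pipeline would also remap the polygon annotations into the mosaic's coordinate frame.

```python
import numpy as np

def mosaic_tile(images, out_size=256, seed=None):
    """Combine four images into one mosaic-tiled training sample.

    A random crop of each source image fills one quadrant of the
    output canvas, exposing the model to varied object scales and
    positions in a single sample.
    """
    rng = np.random.default_rng(seed)
    half = out_size // 2
    canvas = np.zeros((out_size, out_size, 3), dtype=np.uint8)
    # quadrant top-left corners: TL, TR, BL, BR
    corners = [(0, 0), (0, half), (half, 0), (half, half)]
    for img, (y, x) in zip(images, corners):
        h = min(half, img.shape[0])
        w = min(half, img.shape[1])
        # random crop offset inside the source image
        oy = rng.integers(0, img.shape[0] - h + 1)
        ox = rng.integers(0, img.shape[1] - w + 1)
        canvas[y:y + h, x:x + w] = img[oy:oy + h, ox:ox + w]
    return canvas
```

Balanced sampling of the four source images per mosaic is one way such an augmentation can also help even out class imbalance in the dataset.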
4. Density-based clustering with fully-convolutional networks for crowd flow detection from drones. Neurocomputing 2023. DOI: 10.1016/j.neucom.2023.01.059.
5. An optimal UAV height localization for maximum target coverage using improved deer hunting optimization algorithm. International Journal of Intelligent Robotics and Applications 2022. DOI: 10.1007/s41315-022-00261-z.
6. Feature fusion based on joint sparse representations and wavelets for multiview classification. Pattern Anal Appl 2022. DOI: 10.1007/s10044-022-01110-2.
Abstract
Feature-level fusion has attracted much interest. Generally, a dataset can be created from different views, features, or modalities. To improve the classification rate, local information is shared among different views by various fusion methods. However, almost all methods use the views without considering their common aspects. In this paper, the wavelet transform is used to extract the high and low frequencies of the views as common aspects to improve the classification rate. The fusion method for the decomposed parts is based on joint sparse representation, in which a number of scenarios can be considered. The presented approach is tested on three datasets. The results demonstrate performance competitive with the state of the art on these datasets.
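As an illustration of the wavelet step, a single-level Haar transform splits each view's feature vector into low- and high-frequency halves. The sketch below uses plain band-wise concatenation as a stand-in for the paper's joint-sparse-representation fusion (which instead solves a shared sparse coding problem over the bands); both function names are hypothetical.

```python
import numpy as np

def haar_split(x):
    """Single-level Haar transform of a 1-D feature vector
    (even length): returns (low, high) frequency halves."""
    even, odd = x[0::2], x[1::2]
    low = (even + odd) / np.sqrt(2.0)   # averages: low frequencies
    high = (even - odd) / np.sqrt(2.0)  # differences: high frequencies
    return low, high

def fuse_views(views):
    """Fuse multiple views by decomposing each into low/high bands
    and concatenating band-wise (low bands first, then high bands)."""
    lows, highs = zip(*(haar_split(v) for v in views))
    return np.concatenate(lows + highs)
```

Grouping all low bands together and all high bands together reflects the paper's idea that the frequency bands, not the raw views, carry the common aspects worth fusing.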
7. Pintér K, Nagy Z. Building a UAV Based System to Acquire High Spatial Resolution Thermal Imagery for Energy Balance Modelling. Sensors (Basel) 2022; 22:3251. PMID: 35590942; PMCID: PMC9101370; DOI: 10.3390/s22093251. Received 29 Mar 2022; revised 19 Apr 2022; accepted 21 Apr 2022.
Abstract
Canopy evapotranspiration (ET) maps with high spatial resolution and geolocation accuracy are well-suited tools for evaluating small-plot field trials. While creating such a map with an energy balance model is routine, acquiring the necessary imagery at suitable quality is still challenging. A UAV-based thermal/RGB integrated imaging system was built using the Raspberry Pi (RPi) microcomputer as the central unit. The imagery served as input to the two-source energy balance model pyTSEB to derive the ET map. The setup's flexibility and modularity rest on the multiple interfaces provided by the RPi and on the software development kit (SDK) supplied with the thermal camera. The SDK was installed on the RPi and used to trigger the cameras and to retrieve and store images and geolocation information from an onboard GNSS rover for PPK processing. The system acquires thermal imagery at 8 cm spatial resolution from a flight height of 60 m, and the mosaicked RGB imagery has a geolocation accuracy better than 7 cm. Modelled latent heat flux data were validated against latent heat fluxes measured by eddy covariance stations at two locations, with an RMSE of 75 W/m2 over a two-year study period.
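The validation figure quoted above (RMSE of 75 W/m2) is a plain root-mean-square error between modelled and eddy-covariance fluxes; for reference, it can be computed as:

```python
import numpy as np

def rmse(modelled, measured):
    """Root-mean-square error between modelled and measured
    latent heat fluxes (both in W/m^2)."""
    d = np.asarray(modelled, dtype=float) - np.asarray(measured, dtype=float)
    return float(np.sqrt(np.mean(d ** 2)))
```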
Affiliation(s)
- Krisztina Pintér: MTA-MATE Agroecology Research Group, Hungarian University for Agriculture and Life Sciences, Páter K. u. 1., H-2100 Gödöllő, Hungary
- Zoltán Nagy: Department of Plant Physiology and Plant Ecology, Institute of Agronomy, Hungarian University for Agriculture and Life Sciences, Páter K. u. 1., H-2100 Gödöllő, Hungary
8. Jacquet M, Kivits M, Das H, Franchi A. Motor-Level N-MPC for Cooperative Active Perception With Multiple Heterogeneous UAVs. IEEE Robot Autom Lett 2022. DOI: 10.1109/lra.2022.3143218.
9. Simultaneous Localization and Mapping (SLAM) and Data Fusion in Unmanned Aerial Vehicles: Recent Advances and Challenges. Drones 2022. DOI: 10.3390/drones6040085.
Abstract
This article presents a survey of simultaneous localization and mapping (SLAM) and data fusion techniques for object detection and environmental scene perception in unmanned aerial vehicles (UAVs). We critically evaluate current SLAM implementations in robotics and autonomous vehicles and their applicability and scalability to UAVs. SLAM is envisioned as a potential technique for object detection and scene perception that enables UAV navigation through continuous state estimation. In this article, we bridge the gap between SLAM and data fusion in UAVs while also comprehensively surveying related object detection techniques such as visual odometry and aerial photogrammetry. We begin with an introduction to applications where UAV localization is necessary, followed by an analysis of multimodal sensor data fusion for combining the information gathered by different sensors mounted on UAVs. We then discuss SLAM techniques such as Kalman filters and extended Kalman filters for scene perception, mapping, and localization in UAVs. The findings are summarized to relate current and emerging SLAM and data fusion approaches for UAV navigation, and avenues for further research are discussed.
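The Kalman-filter state estimation this survey discusses can be illustrated with a generic linear predict/update cycle. This is a textbook sketch, not an implementation from any of the surveyed works; the extended Kalman filter replaces F and H with Jacobians of nonlinear motion and measurement models.

```python
import numpy as np

def kf_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter.

    x: state estimate, P: state covariance, z: measurement,
    F: state transition, H: measurement model,
    Q: process noise covariance, R: measurement noise covariance.
    """
    # Predict: propagate state and covariance through the motion model
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: correct the prediction with the measurement
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

For example, a constant-velocity model (state = position and velocity, measurement = position only) uses F = [[1, dt], [0, 1]] and H = [[1, 0]]; fed noisy position fixes, the filter recovers both position and velocity.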
10. Elasri M, Elharrouss O, Al-Maadeed S, Tairi H. Image Generation: A Review. Neural Process Lett 2022. DOI: 10.1007/s11063-022-10777-x.