1. Zhang T, Zhang Z, Zhu X. Detection and Control Framework for Unpiloted Ground Support Equipment within the Aircraft Stand. Sensors (Basel, Switzerland). 2023;24:205. [PMID: 38203067] [PMCID: PMC10781360] [DOI: 10.3390/s24010205]
Abstract
The rapid advancement in Unpiloted Robotic Vehicle technology has significantly influenced ground support operations at airports, marking a critical shift towards future development. This study presents a novel Unpiloted Ground Support Equipment (GSE) detection and control framework, comprising virtual channel delineation, boundary line detection, object detection, and navigation and docking control, to facilitate automated aircraft docking within the aircraft stand. Firstly, we developed a bespoke virtual channel layout for Unpiloted GSE, aligning with operational regulations and accommodating a wide spectrum of aircraft types. This layout employs turning induction markers to define essential navigation points, thereby streamlining GSE movement. Secondly, we integrated cameras and Lidar sensors to enable rapid and precise pose adjustments during docking. The introduction of a boundary line detection system, along with an optimized, lightweight YOLO algorithm, ensures swift and accurate identification of boundaries, obstacles, and docking sites. Finally, we formulated a unique control algorithm for effective obstacle avoidance and docking in varied apron conditions, guaranteeing meticulous management of vehicle pose and speed. Our experimental findings reveal an 89% detection accuracy for the virtual channel boundary line, a 95% accuracy for guiding markers, and an F1-Score of 0.845 for the YOLO object detection algorithm. The GSE achieved an average docking error of less than 3 cm and an angular deviation under 5 degrees, corroborating the efficacy and advanced nature of our proposed approach in Unpiloted GSE detection and aircraft docking.
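The abstract reports a sub-3 cm docking error and sub-5-degree angular deviation but does not spell out the control law. As a minimal sketch only, the following hypothetical proportional controller shows one common way a docking phase can regulate pose and speed from a detected boundary line; the function name, gains, and speed schedule are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def docking_step(lateral_offset_m, heading_error_rad,
                 k_lat=0.8, k_head=1.5, v_max=0.5):
    """Hypothetical proportional docking controller (not the paper's method).

    lateral_offset_m  -- signed distance from the virtual channel centerline
    heading_error_rad -- signed angle between vehicle heading and the line
    Returns (forward_speed_mps, steering_rad).
    """
    # Steer against both the lateral and the heading error.
    steer = -(k_lat * lateral_offset_m + k_head * heading_error_rad)
    # Slow down as the pose error grows so the final docking error stays small.
    speed = v_max / (1.0 + 5.0 * abs(lateral_offset_m)
                     + 3.0 * abs(heading_error_rad))
    return speed, float(np.clip(steer, -0.6, 0.6))

# Example: 10 cm left of the centerline, 2 degrees off-heading.
print(docking_step(-0.10, np.deg2rad(2.0)))
```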
Affiliation(s)
- Xinping Zhu
- Air Traffic Management College, Civil Aviation Flight University of China, Deyang 618307, China
2. Yu X, Salimpour S, Queralta JP, Westerlund T. General-Purpose Deep Learning Detection and Segmentation Models for Images from a Lidar-Based Camera Sensor. Sensors (Basel, Switzerland). 2023;23:2936. [PMID: 36991648] [PMCID: PMC10058223] [DOI: 10.3390/s23062936]
Abstract
Over the last decade, robotic perception algorithms have significantly benefited from the rapid advances in deep learning (DL). Indeed, a significant part of the autonomy stack of commercial and research platforms relies on DL for situational awareness, especially with vision sensors. This work explored the potential of general-purpose DL perception algorithms, specifically detection and segmentation neural networks, for processing image-like outputs of advanced lidar sensors. To the best of our knowledge, this is the first work that, rather than processing three-dimensional point cloud data, focuses on low-resolution images with a 360° field of view obtained from lidar sensors by encoding depth, reflectivity, or near-infrared light in the image pixels. We showed that, with adequate preprocessing, general-purpose DL models can process these images, opening the door to their use in environmental conditions where vision sensors present inherent limitations. We provided both a qualitative and a quantitative analysis of the performance of a variety of neural network architectures. We believe that using DL models built for visual cameras offers significant advantages due to their much wider availability and maturity compared to point-cloud-based perception.
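As a minimal sketch of the kind of preprocessing the abstract alludes to, the snippet below stacks the three image-like lidar channels (depth, reflectivity, near-infrared) into an 8-bit, 3-channel array with the shape and dtype of an ordinary RGB frame, which is what lets off-the-shelf detectors consume it. The per-channel min-max normalization and the function name are assumptions; the paper's exact preprocessing is not given here.

```python
import numpy as np

def lidar_channels_to_image(depth, reflectivity, nir):
    """Stack single-channel lidar outputs into a 3-channel 8-bit image.

    Each input is an (H, W) float array, e.g. from a 360-degree
    panoramic lidar scan. Per-channel min-max scaling is one plausible
    choice, not necessarily the authors'.
    """
    def to_uint8(x):
        x = np.nan_to_num(x.astype(np.float32))
        lo, hi = float(x.min()), float(x.max())
        if hi <= lo:  # constant channel: avoid divide-by-zero
            return np.zeros_like(x, dtype=np.uint8)
        return ((x - lo) / (hi - lo) * 255.0).astype(np.uint8)

    # (H, W, 3) uint8, directly usable by COCO-pretrained models.
    return np.dstack([to_uint8(depth), to_uint8(reflectivity), to_uint8(nir)])
```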
3. Zhang X, Fan Z, Tan X, Liu Q, Shi Y. Spatiotemporal adaptive attention 3D multiobject tracking for autonomous driving. Knowl Based Syst. 2023. [DOI: 10.1016/j.knosys.2023.110442]
4. Zhu Y, Xu R, An H, Tao C, Lu K. Anti-Noise 3D Object Detection of Multimodal Feature Attention Fusion Based on PV-RCNN. Sensors (Basel, Switzerland). 2022;23:233. [PMID: 36616829] [PMCID: PMC9823336] [DOI: 10.3390/s23010233]
Abstract
3D object detection methods based on camera and LiDAR fusion are susceptible to environmental noise. Because the physical characteristics of the two sensors do not match, the feature vectors encoded by the feature layer lie in different feature spaces. This leads to the problem of feature information deviation, which degrades detection performance. To address this problem, a point-guided feature abstraction method is first presented to fuse the camera and LiDAR data. The extracted image features and point cloud features are aggregated at keypoints to enhance information redundancy. Second, the proposed multimodal feature attention (MFA) mechanism is used to achieve adaptive fusion of point cloud features and image features with information from multiple feature spaces. Finally, a projection-based farthest point sampling (P-FPS) is proposed to downsample the raw point cloud, which can project more keypoints onto close objects and improve the sampling rate of the point-guided image features. The 3D bounding boxes of objects are obtained by the region of interest (ROI) pooling layer and the fully connected layer. The proposed 3D object detection algorithm was evaluated on three different datasets and achieved better detection performance and robustness when the image and point cloud data contain rain noise. Test results on a physical test platform further validate the effectiveness of the algorithm.
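The P-FPS variant is specific to this paper, but it builds on standard farthest point sampling. For orientation, here is a plain-NumPy sketch of the classic algorithm the abstract's downsampling step extends; the projection-based biasing the authors add is not reproduced.

```python
import numpy as np

def farthest_point_sampling(points, n_samples):
    """Classic farthest point sampling over an (N, 3) point cloud.

    Iteratively picks the point farthest from all points chosen so far,
    giving a subset with good spatial coverage.
    """
    n = points.shape[0]
    chosen = np.zeros(n_samples, dtype=np.int64)
    dist = np.full(n, np.inf)      # distance to the nearest chosen point
    chosen[0] = 0                  # arbitrary seed point
    for i in range(1, n_samples):
        d = np.linalg.norm(points - points[chosen[i - 1]], axis=1)
        dist = np.minimum(dist, d)
        chosen[i] = int(np.argmax(dist))
    return points[chosen]

# Example: keep 512 of 20,000 random points.
cloud = np.random.rand(20000, 3)
print(farthest_point_sampling(cloud, 512).shape)
```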
Affiliation(s)
- Yuan Zhu
- School of Automotive Studies, Tongji University, Shanghai 201800, China
- Ruidong Xu
- School of Automotive Studies, Tongji University, Shanghai 201800, China
- Hao An
- School of Automotive Studies, Tongji University, Shanghai 201800, China
- Chongben Tao
- Suzhou Automotive Research Institute, Tsinghua University, Suzhou 215200, China
- Ke Lu
- School of Automotive Studies, Tongji University, Shanghai 201800, China
5. Lopac N, Jurdana I, Brnelić A, Krljan T. Application of Laser Systems for Detection and Ranging in the Modern Road Transportation and Maritime Sector. Sensors (Basel, Switzerland). 2022;22:5946. [PMID: 36015703] [PMCID: PMC9415075] [DOI: 10.3390/s22165946]
Abstract
The development of light detection and ranging (lidar) technology began in the 1960s, following the invention of the laser, the central component of this system; a modern lidar system integrates laser scanning with an inertial measurement unit (IMU) and the Global Positioning System (GPS). Lidar technology is spreading to many different areas of application, from road detection and object recognition in autonomous vehicles to the maritime sector, including object detection for autonomous navigation, monitoring ocean ecosystems, mapping coastal areas, and other diverse applications. This paper presents lidar system technology and reviews its application in the modern road transportation and maritime sectors. Some of the better-known lidar systems for practical applications, on which current commercial models are based, are presented, and their advantages and disadvantages are described and analyzed. Moreover, current challenges and future trends of application are discussed. This paper also provides a systematic review of recent scientific research on the application of lidar system technology and the corresponding computational algorithms for data analysis, mainly focusing on deep learning algorithms, in the modern road transportation and maritime sector, based on an extensive analysis of the available scientific literature.
Affiliation(s)
- Nikola Lopac
- Faculty of Maritime Studies, University of Rijeka, 51000 Rijeka, Croatia
- Center for Artificial Intelligence and Cybersecurity, University of Rijeka, 51000 Rijeka, Croatia
- Irena Jurdana
- Faculty of Maritime Studies, University of Rijeka, 51000 Rijeka, Croatia
- Adrian Brnelić
- Faculty of Maritime Studies, University of Rijeka, 51000 Rijeka, Croatia
- Tomislav Krljan
- Faculty of Maritime Studies, University of Rijeka, 51000 Rijeka, Croatia
6. Motion Estimation Using Region-Level Segmentation and Extended Kalman Filter for Autonomous Driving. Remote Sensing. 2021. [DOI: 10.3390/rs13091828]
Abstract
Motion estimation is crucial for predicting where other traffic participants will be at a given time and, accordingly, for planning the route of the ego-vehicle. This paper presents a novel approach to estimating the motion state by using region-level instance segmentation and an extended Kalman filter (EKF). Motion estimation involves three stages: object detection, tracking, and parameter estimation. We first use region-level segmentation to accurately locate the object region for the latter two stages. The region-level segmentation combines color, temporal (optical flow), and spatial (depth) information as the basis for segmentation by using super-pixels and a Conditional Random Field. Optical flow is then employed to track the feature points within the object area. In the parameter estimation stage, we develop a relative motion model of the ego-vehicle and the object and accordingly establish an EKF model for point tracking and parameter estimation. The EKF model integrates the ego-motion, optical flow, and disparity to generate optimized motion parameters. During tracking and parameter estimation, we apply an edge point constraint and a consistency constraint to eliminate outliers among the tracked points, so that the feature points used for tracking are ensured to lie within the object body and the parameter estimates are refined by the inlier points. Experiments have been conducted on the KITTI dataset, and the results demonstrate that our method performs excellently and outperforms other state-of-the-art methods in both object segmentation and parameter estimation.
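The paper's EKF folds in ego-motion, optical flow, and disparity; as a minimal illustration of the predict/update cycle at its core, the sketch below implements the linear constant-velocity special case. The state layout, noise levels, and function name are assumptions for the example only, not the authors' model.

```python
import numpy as np

def kf_step(x, P, z, dt, q=1e-2, r=1e-1):
    """One predict/update cycle of a constant-velocity Kalman filter.

    x: state [px, py, vx, vy]; P: 4x4 covariance; z: measured [px, py].
    The EKF generalizes this by linearizing a nonlinear motion and
    measurement model at each step.
    """
    F = np.eye(4)
    F[0, 2] = F[1, 3] = dt                               # state transition
    H = np.array([[1., 0., 0., 0.], [0., 1., 0., 0.]])   # measure position
    Q, R = q * np.eye(4), r * np.eye(2)
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                       # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Example: track a point measured near (1.0, 2.0) after one 0.1 s step.
x, P = kf_step(np.zeros(4), np.eye(4), np.array([1.0, 2.0]), dt=0.1)
print(x)
```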
7. Andersson O, Doherty P, Lager M, Lindh JO, Persson L, Topp EA, Tordenlid J, Wahlberg B. WARA-PS: a research arena for public safety demonstrations and autonomous collaborative rescue robotics experimentation. Autonomous Intelligent Systems. 2021;1:9. [PMCID: PMC8593105] [DOI: 10.1007/s43684-021-00009-9]
Abstract
A research arena (WARA-PS) for sensing, data fusion, user interaction, planning, and control of collaborative autonomous aerial and surface vehicles in public safety applications is presented. The objective is to demonstrate scientific discoveries and to generate new directions for future research on autonomous systems for societal challenges. The enabler is a computational infrastructure with a core system architecture for industrial and academic collaboration. This includes a command and control system together with a framework for planning and executing tasks for unmanned surface vehicles and aerial vehicles. The motivating application for the demonstration is marine search and rescue operations. A state-of-the-art delegation framework for mission planning, together with three specific applications, is also presented. The first concerns model predictive control for cooperative rendezvous of autonomous unmanned aerial and surface vehicles. The second project is about learning to make safe real-time decisions under uncertainty for autonomous vehicles, and the third is on robust terrain-aided navigation through sensor fusion and virtual-reality tele-operation to support a GPS-free positioning system in marine environments. The research results have been experimentally evaluated and demonstrated to industry and public-sector audiences at a marine test facility; it would be most difficult to conduct experiments on this scale without the WARA-PS research arena. Furthermore, these demonstrator activities have resulted in effective research dissemination with high public visibility, business impact, and new research collaborations between academia and industry.
Affiliation(s)
- Olov Andersson
- Department of Computer and Information Science, Linköping University, Linköping, Sweden
- Patrick Doherty
- Department of Computer and Information Science, Linköping University, Linköping, Sweden
- Linnea Persson
- Division of Decision and Control Systems, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden
- Elin A. Topp
- Department of Computer Science, Lund University, Lund, Sweden
- Bo Wahlberg
- Division of Decision and Control Systems, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden
- Division of Automatic Control, Department of Electrical Engineering, Linköping University, Linköping, Sweden