1
Saldivar-Carranza ED, Desai J, Thompson A, Taylor M, Sturdevant J, Bullock DM. Vehicle and Pedestrian Traffic Signal Performance Measures Using LiDAR-Derived Trajectory Data. Sensors (Basel, Switzerland) 2024; 24:6410. [PMID: 39409450 PMCID: PMC11479351 DOI: 10.3390/s24196410] [Received: 08/06/2024] [Revised: 09/18/2024] [Accepted: 09/30/2024] [Indexed: 10/20/2024]
Abstract
Light Detection and Ranging (LiDAR) sensors at signalized intersections can accurately track the movement of virtually all objects passing through at high sampling rates. This study presents methodologies to estimate vehicle and pedestrian traffic signal performance measures using LiDAR trajectory data. Over 15,000,000 vehicle and 170,000 pedestrian waypoints detected during a 24 h period at an intersection in Utah are analyzed to describe the proposed techniques. Sampled trajectories are linearly referenced to generate Purdue Probe Diagrams (PPDs). Vehicle-based PPDs are used to estimate movement-level turning counts, 85th percentile queue lengths (85QL), arrivals on green (AOG), Highway Capacity Manual (HCM) level of service (LOS), split failures (SF), and downstream blockage (DSB) by time of day (TOD). Pedestrian-based PPDs are used to estimate wait times and the proportion of people who traverse multiple crosswalks. Although vehicle signal performance can be estimated from several days of aggregated connected vehicle (CV) data, LiDAR data provide the ability to measure performance in real time. Furthermore, LiDAR can measure pedestrian speeds. At the studied location, the 15th percentile pedestrian walking speed was estimated to be 3.9 ft/s. The ability to directly measure these pedestrian speeds allows agencies to consider crossing times other than those suggested by the Manual on Uniform Traffic Control Devices (MUTCD).
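The 15th percentile walking speed reported above is an order statistic over per-pedestrian crossing speeds derived from linearly referenced waypoints. A minimal sketch of that computation (with hypothetical waypoint data and a simple nearest-rank percentile, not the paper's actual pipeline) might look like:

```python
def crossing_speed(waypoints):
    """Average walking speed (ft/s) over a linearly referenced pedestrian
    trajectory: total distance traversed divided by elapsed time."""
    (t0, s0), (t1, s1) = waypoints[0], waypoints[-1]
    return abs(s1 - s0) / (t1 - t0)

def percentile(values, p):
    """Nearest-rank percentile (0 < p <= 100) of a list of values."""
    ordered = sorted(values)
    k = max(0, int(round(p / 100 * len(ordered))) - 1)
    return ordered[k]

# Hypothetical trajectories: (timestamp_s, linear_position_ft) pairs
# for a 70 ft crosswalk.
peds = [
    [(0.0, 0.0), (20.0, 70.0)],   # 3.5 ft/s
    [(0.0, 0.0), (14.0, 70.0)],   # 5.0 ft/s
    [(0.0, 0.0), (17.5, 70.0)],   # 4.0 ft/s
]
speeds = [crossing_speed(w) for w in peds]
p15 = percentile(speeds, 15)
```

With real LiDAR data the speed would be integrated along the full waypoint sequence rather than taken endpoint-to-endpoint, but the percentile step is the same.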
Affiliation(s)
- Enrique D. Saldivar-Carranza
- Joint Transportation Research Program, Lyles School of Civil and Construction Engineering, Purdue University, West Lafayette, IN 47907, USA
- Jairaj Desai
- Joint Transportation Research Program, Lyles School of Civil and Construction Engineering, Purdue University, West Lafayette, IN 47907, USA
- Andrew Thompson
- Joint Transportation Research Program, Lyles School of Civil and Construction Engineering, Purdue University, West Lafayette, IN 47907, USA
- Mark Taylor
- Utah Department of Transportation, Traffic Operations Center, 2060 S 2760 W, Salt Lake City, UT 84104, USA
- James Sturdevant
- Indiana Department of Transportation, Traffic Management Center, 8620 East 21st St., Indianapolis, IN 46219, USA
- Darcy M. Bullock
- Joint Transportation Research Program, Lyles School of Civil and Construction Engineering, Purdue University, West Lafayette, IN 47907, USA

2
Lei X, Tang C, Tang X. High-precision docking of wheelchair/beds through LIDAR and visual information. Front Bioeng Biotechnol 2024; 12:1446512. [PMID: 39295848 PMCID: PMC11408198 DOI: 10.3389/fbioe.2024.1446512] [Received: 06/10/2024] [Accepted: 08/23/2024] [Indexed: 09/21/2024]
Abstract
To address the low docking accuracy of existing robotic wheelchair/beds, this study proposes an automatic docking framework integrating light detection and ranging (LIDAR), visual positioning, and laser ranging. First, a mobile chassis was designed for an intelligent wheelchair/bed with independent four-wheel steering. In the remote guidance phase, the simultaneous localization and mapping (SLAM) algorithm was employed to construct an environment map, achieving remote guidance and obstacle avoidance through the integration of LIDAR, inertial measurement unit (IMU), and an improved A* algorithm. In the mid-range pose determination and positioning phase, the IMU module and vision system on the wheelchair/bed collected coordinate and path information marked by quick response (QR) code labels to adjust the relative pose between the wheelchair/bed and bed frame. Finally, in the short-range precise docking phase, laser triangulation ranging was utilized to achieve precise automatic docking between the wheelchair/bed and the bed frame. The results of multiple experiments show that the proposed method significantly improves the docking accuracy of the intelligent wheelchair/bed.
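The short-range stage relies on laser triangulation ranging. The underlying geometry reduces to similar triangles: a laser spot imaged at lateral offset x on the sensor corresponds to range z = f·b / x for baseline b and focal length f. A sketch with an idealized pinhole model and hypothetical numbers (not the authors' calibration):

```python
def triangulation_range(baseline_mm, focal_mm, offset_mm):
    """Classic laser triangulation: the laser emitter sits a baseline b
    from the camera; the spot's lateral image offset x gives range
    z = f * b / x by similar triangles."""
    if offset_mm <= 0:
        raise ValueError("spot offset must be positive")
    return focal_mm * baseline_mm / offset_mm

# Hypothetical sensor geometry: 50 mm baseline, 8 mm focal length,
# laser spot imaged 0.8 mm off the optical axis.
z = triangulation_range(baseline_mm=50.0, focal_mm=8.0, offset_mm=0.8)
```

Because z varies inversely with the image offset, the method is most precise exactly at short range, which is why it suits the final docking phase.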
Affiliation(s)
- Xiangxiao Lei
- School of Electronic Information Engineering, Changsha Social Work College, Changsha, China
- Chunxia Tang
- School of Electronic Information Engineering, Changsha Social Work College, Changsha, China
- Xiaomei Tang
- Hunan Victor Petrotech Service Co., Ltd., Changsha, China

3
Tolba MA, Kamal HA. SDC-Net++: End-to-End Crash Detection and Action Control for Self-Driving Car Deep-IoT-Based System. Sensors (Basel, Switzerland) 2024; 24:3805. [PMID: 38931589 PMCID: PMC11207780 DOI: 10.3390/s24123805] [Received: 05/03/2024] [Revised: 06/03/2024] [Accepted: 06/08/2024] [Indexed: 06/28/2024]
Abstract
Few prior works have studied self-driving cars that combine deep learning with IoT collaboration. SDC-Net, an end-to-end multitask self-driving car camera-cocoon IoT-based system, is one of the research efforts that tackles this direction. However, by design, SDC-Net cannot identify accident locations; it only classifies whether a scene is a crash scene or not. In this work, we introduce an enhanced design for the SDC-Net system by (1) replacing the classification network with a detection one, (2) adapting our benchmark dataset labels built on the CARLA simulator to include the vehicles' bounding boxes while keeping the same training, validation, and testing samples, and (3) modifying the information shared via IoT to include the accident location. We keep the same path planning and automatic emergency braking network, the digital automation platform, and the input representations to formulate the comparative study. The SDC-Net++ system is proposed to (1) output the relevant control actions, especially in case of accidents: accelerate, decelerate, maneuver, and brake, and (2) share the most critical information, especially accident locations, with the connected vehicles via IoT. A comparative study is also conducted between SDC-Net and SDC-Net++ with the same input representations (front camera only, panorama, and bird's eye views) and with single-task networks (crash avoidance only) and multitask networks. The multitask network with a BEV input representation outperforms the nearest representation in precision, recall, F1-score, and accuracy by more than 15.134%, 12.046%, 13.593%, and 5%, respectively. The SDC-Net++ multitask network with BEV outperforms the SDC-Net multitask network with BEV in precision, recall, F1-score, accuracy, and average MSE by more than 2.201%, 2.8%, 2.505%, 2%, and 18.677%, respectively.
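The precision/recall/F1 comparisons quoted above follow the standard detection-metric definitions. For reference, the generic formulas (with hypothetical true/false positive counts, not numbers from the paper):

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall, and F1 from true positives, false positives,
    and false negatives of a detector."""
    precision = tp / (tp + fp)          # fraction of detections that are correct
    recall = tp / (tp + fn)             # fraction of ground truth that is found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

# Hypothetical counts for a crash detector on a toy evaluation set.
p, r, f1 = detection_metrics(tp=80, fp=20, fn=20)
```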
Affiliation(s)
- Mohammed Abdou Tolba
- Department of Electronics and Communications Engineering, Faculty of Engineering, Cairo University, Cairo 12613, Egypt
- Hanan Ahmed Kamal
- Department of Electronics and Communications Engineering, Faculty of Engineering, Cairo University, Cairo 12613, Egypt

4
Gyrichidi N, Romanov AM, Trofimov OV, Eroshenko SA, Matrenin PV, Khalyasmaa AI. GNSS-Based Narrow-Angle UV Camera Targeting: Case Study of a Low-Cost MAD Robot. Sensors (Basel, Switzerland) 2024; 24:3494. [PMID: 38894285 PMCID: PMC11175354 DOI: 10.3390/s24113494] [Received: 04/04/2024] [Revised: 05/21/2024] [Accepted: 05/27/2024] [Indexed: 06/21/2024]
Abstract
One of the key challenges in Multi-Spectral Automatic Diagnostic (MAD) robot design is the precise targeting of narrow-angle cameras on a specific part of the equipment. The paper shows that a low-cost MAD robot, whose navigation system is based on open-source ArduRover firmware and a pair of low-cost Ublox F9P GNSS receivers, can perform inspections with an 8 × 4 degree ultraviolet camera while bounding the targeting error within 0.5 degrees. To achieve this result, we propose a new targeting procedure that can be implemented without any modifications to the ArduRover firmware and outperforms more expensive solutions based on LiDAR SLAM and UWB. This paper will interest developers of robotic systems for power equipment inspection because it proposes a simple and effective solution for MAD robots' camera targeting and provides the first quantitative analysis of GNSS reception conditions during power equipment inspection. This analysis is based on experimental results collected during inspections of overhead power transmission lines and of equipment on the open switchgear of different power plants. Moreover, it includes not only satellite, dilution of precision, and positioning/heading estimation accuracy but also direct measurements of the angular errors that could be achieved on operating power plants using GNSS-only camera targeting.
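GNSS-only camera targeting of the kind evaluated above reduces to computing a forward azimuth from the platform's position to the target and comparing it against the platform heading obtained from the dual-receiver baseline. A generic sketch using the textbook forward-azimuth formula (hypothetical coordinates; not the ArduRover implementation):

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees
    clockwise from true north (standard forward-azimuth formula)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360.0

# Hypothetical pair: robot position and inspected equipment directly north.
az = bearing_deg(55.0, 37.0, 55.001, 37.0)
heading = 0.0  # hypothetical platform heading from the GNSS pair
# Wrap-safe angular error between commanded azimuth and heading.
err = abs((az - heading + 180.0) % 360.0 - 180.0)
```

The wrap-safe error expression is what would be checked against the 0.5 degree budget of a narrow-angle camera.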
Affiliation(s)
- Ntmitrii Gyrichidi
- Institute of Artificial Intelligence, MIREA—Russian Technological University (RTU MIREA), 119454 Moscow, Russia
- Alexey M. Romanov
- Institute of Artificial Intelligence, MIREA—Russian Technological University (RTU MIREA), 119454 Moscow, Russia
- Oleg V. Trofimov
- Institute of Artificial Intelligence, MIREA—Russian Technological University (RTU MIREA), 119454 Moscow, Russia
- Stanislav A. Eroshenko
- Ural Power Engineering Institute, Ural Federal University Named after the First President of Russia B.N. Yeltsin, 620002 Ekaterinburg, Russia
- Pavel V. Matrenin
- Ural Power Engineering Institute, Ural Federal University Named after the First President of Russia B.N. Yeltsin, 620002 Ekaterinburg, Russia
- Power Supply Systems Department, Novosibirsk State Technical University, 630073 Novosibirsk, Russia
- Alexandra I. Khalyasmaa
- Ural Power Engineering Institute, Ural Federal University Named after the First President of Russia B.N. Yeltsin, 620002 Ekaterinburg, Russia

5
Naich AY, Carrión JR. LiDAR-Based Intensity-Aware Outdoor 3D Object Detection. Sensors (Basel, Switzerland) 2024; 24:2942. [PMID: 38733047 PMCID: PMC11086319 DOI: 10.3390/s24092942] [Received: 04/05/2024] [Revised: 04/28/2024] [Accepted: 05/01/2024] [Indexed: 05/13/2024]
Abstract
LiDAR-based 3D object detection and localization are crucial components of autonomous navigation systems, including autonomous vehicles and mobile robots. Most existing LiDAR-based 3D object detection and localization approaches primarily use geometric or structural feature abstractions from LiDAR point clouds. However, these approaches can be susceptible to environmental noise due to adverse weather conditions or the presence of highly scattering media. In this work, we propose an intensity-aware voxel encoder for robust 3D object detection. The proposed voxel encoder generates an intensity histogram that describes the distribution of point intensities within a voxel and is used to enhance the voxel feature set. We integrate this intensity-aware encoder into an efficient single-stage voxel-based detector for 3D object detection. Experimental results obtained using the KITTI dataset show that our method achieves results comparable to the state-of-the-art method for car objects in both 3D and bird's-eye-view detection, and superior results for pedestrian and cyclist objects. Furthermore, our model can achieve a detection rate of 40.7 FPS during inference, which is higher than that of the state-of-the-art methods while incurring a lower computational cost.
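The core idea of the encoder described above, augmenting each voxel's features with a histogram of point intensities, can be sketched as follows (a toy stand-alone implementation with made-up points, not the paper's encoder):

```python
from collections import defaultdict

def voxel_intensity_histograms(points, voxel_size=0.5, bins=4):
    """Bucket (x, y, z, intensity) points into voxels and build a
    normalized intensity histogram per voxel (intensity in [0, 1])."""
    hists = defaultdict(lambda: [0] * bins)
    for x, y, z, i in points:
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        b = min(int(i * bins), bins - 1)  # clamp i == 1.0 into the top bin
        hists[key][b] += 1
    out = {}
    for key, h in hists.items():
        total = sum(h)
        out[key] = [c / total for c in h]  # normalize so bins sum to 1
    return out

# Toy cloud: four returns in one voxel, two dark and two bright.
pts = [(0.1, 0.1, 0.1, 0.05), (0.2, 0.3, 0.1, 0.10),
       (0.3, 0.2, 0.2, 0.90), (0.4, 0.4, 0.3, 0.95)]
hists = voxel_intensity_histograms(pts)
```

In the detector, such a histogram would be concatenated with the usual geometric voxel features before the backbone network.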
Affiliation(s)
- Ammar Yasir Naich
- School of Electronic Engineering and Computer Science, Queen Mary University of London, London E1 4NS, UK
- Jesús Requena Carrión
- School of Electronic Engineering and Computer Science, Queen Mary University of London, London E1 4NS, UK

6
Cong P, Li J, Liu J, Xiao Y, Zhang X. SEG-SLAM: Dynamic Indoor RGB-D Visual SLAM Integrating Geometric and YOLOv5-Based Semantic Information. Sensors (Basel, Switzerland) 2024; 24:2102. [PMID: 38610313 PMCID: PMC11014023 DOI: 10.3390/s24072102] [Received: 02/22/2024] [Revised: 03/21/2024] [Accepted: 03/22/2024] [Indexed: 04/14/2024]
Abstract
Simultaneous localisation and mapping (SLAM) is crucial in mobile robotics. Most visual SLAM systems assume that the environment is static. However, in real life, there are many dynamic objects, which affect the accuracy and robustness of these systems. To improve the performance of visual SLAM systems, this study proposes a dynamic visual SLAM (SEG-SLAM) system based on the oriented FAST and rotated BRIEF (ORB)-SLAM3 framework and you only look once (YOLO)v5 deep-learning method. First, based on the ORB-SLAM3 framework, the YOLOv5 deep-learning method is used to construct a fusion module for target detection and semantic segmentation. This module can effectively identify and extract prior information for obviously and potentially dynamic objects. Second, differentiated dynamic feature point rejection strategies are developed for different dynamic objects using the prior information, depth information, and epipolar geometry method. Thus, the localisation and mapping accuracy of the SEG-SLAM system is improved. Finally, the rejection results are fused with the depth information, and a static dense 3D mapping without dynamic objects is constructed using the Point Cloud Library. The SEG-SLAM system is evaluated using public TUM datasets and real-world scenarios. The proposed method is more accurate and robust than current dynamic visual SLAM algorithms.
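The epipolar-geometry check used in dynamic-point rejection measures how far a matched feature falls from the epipolar line predicted by the fundamental matrix: static points land near their line, dynamic ones do not. A minimal sketch (an idealized F for pure sideways camera translation and hypothetical pixel matches, not SEG-SLAM's estimated matrix):

```python
def epipolar_residual(F, p1, p2):
    """Distance from p2 to the epipolar line l' = F @ p1 of its match.
    Points are homogeneous pixel coordinates (x, y, 1); a static scene
    point yields a residual near zero, a dynamic one a large residual."""
    a, b, c = (sum(F[r][k] * p1[k] for k in range(3)) for r in range(3))
    return abs(a * p2[0] + b * p2[1] + c) / (a * a + b * b) ** 0.5

# Idealized F for a camera translating sideways: epipolar lines are
# horizontal, so a static match must stay on the same image row.
F = [[0, 0, 0], [0, 0, -1], [0, 1, 0]]
static_r = epipolar_residual(F, (100, 50, 1), (90, 50, 1))   # same row
dynamic_r = epipolar_residual(F, (100, 50, 1), (95, 80, 1))  # moved off row
```

Thresholding this residual (combined with depth cues, as the abstract notes) decides whether a feature point is rejected.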
Affiliation(s)
- Peichao Cong
- School of Mechanical and Automotive Engineering, Guangxi University of Science and Technology, Liuzhou 545006, China
- Jiaxing Li
- School of Mechanical and Automotive Engineering, Guangxi University of Science and Technology, Liuzhou 545006, China
- Junjie Liu
- School of Mechanical and Automotive Engineering, Guangxi University of Science and Technology, Liuzhou 545006, China
- Yixuan Xiao
- School of Mechanical and Automotive Engineering, Guangxi University of Science and Technology, Liuzhou 545006, China
- Xin Zhang
- School of Mechanical and Automotive Engineering, Guangxi University of Science and Technology, Liuzhou 545006, China

7
Xu A, Gao J, Sui X, Wang C, Shi Z. LiDAR Dynamic Target Detection Based on Multidimensional Features. Sensors (Basel, Switzerland) 2024; 24:1369. [PMID: 38474905 DOI: 10.3390/s24051369] [Received: 01/14/2024] [Revised: 02/17/2024] [Accepted: 02/18/2024] [Indexed: 03/14/2024]
Abstract
To address the limitations of LiDAR dynamic target detection methods, which often require heuristic thresholding, indirect computational assistance, supplementary sensor data, or postdetection processing, we propose an innovative method based on multidimensional features. Using the differences between the positions and geometric structures of point cloud clusters scanned from the same target in adjacent frames, the motion states of the point cloud clusters are comprehensively evaluated. To enable the automatic, precise pairing of point cloud clusters of the same target across adjacent frames, a double registration algorithm is proposed for point cloud cluster centroids. The iterative closest point (ICP) algorithm is employed for approximate interframe pose estimation during coarse registration. The random sample consensus (RANSAC) and four-parameter transformation algorithms are employed to obtain precise interframe pose relations during fine registration. These processes standardize the coordinate systems of adjacent point clouds and facilitate the association of point cloud clusters from the same target. Based on the paired point cloud clusters, a classification feature system is used to construct an XGBoost decision tree. To enhance XGBoost training efficiency, a Spearman's rank correlation coefficient-based bidirectional search dimensionality reduction algorithm is proposed to expedite construction of the optimal classification feature subset. After preliminary outcomes are generated by XGBoost, a double Boyer-Moore voting-sliding window algorithm is proposed to refine the final LiDAR dynamic target detection accuracy. To validate the efficacy and efficiency of our method, an experimental platform is established, real-world data are collected, and pertinent experiments are designed. The experimental results illustrate the soundness of our method: the dynamic target correct detection rate is 92.41%, the static target error detection rate is 1.43%, and the detection time is 0.0299 s. Our method exhibits notable advantages over open-source comparative methods, achieving highly efficient and precise LiDAR dynamic target detection.
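The Spearman-based screening above ranks candidate features by rank correlation before the bidirectional subset search. Spearman's rho itself is just the Pearson correlation of the rank vectors; a generic implementation (not the paper's search procedure) is:

```python
def rank(values):
    """1-based average ranks; tied values share the mean of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        r = (i + j) / 2 + 1          # mean rank of the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = r
        i = j + 1
    return ranks

def spearman(xs, ys):
    """Spearman's rho: Pearson correlation of the rank vectors."""
    rx, ry = rank(xs), rank(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)
```

Features whose values correlate strongly (in rank) with the dynamic/static label, and weakly with already-selected features, are natural candidates for the reduced subset.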
Affiliation(s)
- Aigong Xu
- School of Geomatics, Liaoning Technical University, Fuxin 123000, China
- Jiaxin Gao
- School of Geomatics, Liaoning Technical University, Fuxin 123000, China
- Xin Sui
- School of Geomatics, Liaoning Technical University, Fuxin 123000, China
- Changqiang Wang
- School of Geomatics, Liaoning Technical University, Fuxin 123000, China
- Zhengxu Shi
- School of Geomatics, Liaoning Technical University, Fuxin 123000, China

8
Peng H, Zhao Z, Wang L. A Review of Dynamic Object Filtering in SLAM Based on 3D LiDAR. Sensors (Basel, Switzerland) 2024; 24:645. [PMID: 38276337 PMCID: PMC10821332 DOI: 10.3390/s24020645] [Received: 12/14/2023] [Revised: 01/17/2024] [Accepted: 01/17/2024] [Indexed: 01/27/2024]
Abstract
SLAM (Simultaneous Localization and Mapping) based on 3D LiDAR (Light Detection and Ranging) is an expanding field of research with numerous applications in autonomous driving, mobile robotics, and UAVs (Unmanned Aerial Vehicles). However, in most real-world scenarios, dynamic objects can negatively impact the accuracy and robustness of SLAM. In recent years, the challenge of achieving optimal SLAM performance in dynamic environments has led to a variety of research efforts, but relatively few reviews cover them. This work delves into the development process and current state of SLAM based on 3D LiDAR in dynamic environments. After analyzing the necessity and importance of filtering dynamic objects in SLAM, this paper proceeds along two dimensions. At the solution-oriented level, mainstream methods of filtering dynamic targets in 3D point clouds are introduced in detail, such as the ray-tracing-based approach, the visibility-based approach, the segmentation-based approach, and others. Then, at the problem-oriented level, this paper classifies dynamic objects and summarizes the corresponding processing strategies for different categories in the SLAM framework, such as online real-time filtering, post-processing after mapping, and long-term SLAM. Finally, the development trends and research directions of dynamic object filtering in SLAM based on 3D LiDAR are discussed and predicted.
Affiliation(s)
- Hongrui Peng
- School of Resources and Safety Engineering, Central South University, Changsha 410083, China
- Ziyu Zhao
- School of Resources and Safety Engineering, Central South University, Changsha 410083, China
- Liguan Wang
- School of Resources and Safety Engineering, Central South University, Changsha 410083, China
- Changsha Digital Mine Co., Ltd., Changsha 410221, China

9
Zhang J, Chen S, Xue Q, Yang J, Ren G, Zhang W, Li F. LeGO-LOAM-FN: An Improved Simultaneous Localization and Mapping Method Fusing LeGO-LOAM, Faster_GICP and NDT in Complex Orchard Environments. Sensors (Basel, Switzerland) 2024; 24:551. [PMID: 38257644 PMCID: PMC11154502 DOI: 10.3390/s24020551] [Received: 12/05/2023] [Revised: 01/01/2024] [Accepted: 01/13/2024] [Indexed: 01/24/2024]
Abstract
To solve the problem of cumulative errors when robots build maps in complex orchard environments, which have large scene sizes, similar features, and unstable motion, this study proposes a loopback registration algorithm based on the fusion of Faster Generalized Iterative Closest Point (Faster_GICP) and the Normal Distributions Transform (NDT). First, the algorithm creates a K-Dimensional tree (KD-Tree) structure to eliminate dynamic obstacle point clouds. Then, the method uses a two-step point filter to reduce the number of feature points of the current frame used for matching and the amount of data used for optimization. It also calculates the matching degree of normal distribution probability by meshing the point cloud, and optimizes the precision registration using the Hessian matrix method. In a complex orchard environment with multiple loopback events, the root mean square error and standard deviation of the trajectory of the LeGO-LOAM-FN algorithm are 0.45 m and 0.26 m, improvements of 67% and 73% over the loopback registration algorithm in Lightweight and Ground-Optimized LiDAR Odometry and Mapping on Variable Terrain (LeGO-LOAM), respectively. The study proves that this method effectively reduces the influence of cumulative error and provides technical support for intelligent operation in orchard environments.
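The KD-Tree step above culls dynamic-obstacle points by neighborhood queries: a scan point with no map neighbor within some radius is treated as dynamic. The underlying test can be sketched as follows (a brute-force stand-in for the KD-tree query, with toy points; a KD-tree only accelerates the same lookup):

```python
def flag_dynamic(scan, ref_map, radius=0.3):
    """Flag scan points that have no reference-map neighbor within
    `radius` (metres). Brute force for clarity; a KD-tree would make
    each neighbor query O(log n) instead of O(n)."""
    r2 = radius * radius
    flags = []
    for p in scan:
        near = any(sum((a - b) ** 2 for a, b in zip(p, q)) <= r2
                   for q in ref_map)
        flags.append(not near)  # True = likely dynamic
    return flags

# Toy map of two static points and a scan with one match and one stray.
ref = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
scan = [(0.05, 0.0, 0.0), (5.0, 5.0, 0.0)]
flags = flag_dynamic(scan, ref)
```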
Affiliation(s)
- Fuzhong Li
- School of Software Technology, Shanxi Agricultural University, Jinzhong 030801, China

10
Wang J, Bi S, Liu W, Zhou L, Li T, Macleod I, Leach R. Stitching Locally Fitted T-Splines for Fast Fitting of Large-Scale Freeform Point Clouds. Sensors (Basel, Switzerland) 2023; 23:9816. [PMID: 38139662 PMCID: PMC10748178 DOI: 10.3390/s23249816] [Received: 10/31/2023] [Revised: 12/08/2023] [Accepted: 12/12/2023] [Indexed: 12/24/2023]
Abstract
Parametric splines are popular tools for the precision optical metrology of complex freeform surfaces. However, as a promising topologically unconstrained solution, existing T-spline fitting techniques, such as improved global fitting, local fitting, and split-connect algorithms, still suffer from low computational efficiency, especially at large data scales and high accuracy requirements. This paper proposes a speed-improved algorithm for fast, large-scale freeform point cloud fitting by stitching locally fitted T-splines through three steps of localized operations. Experiments show that the proposed algorithm produces a three-to-eightfold efficiency improvement over the global and local fitting algorithms, and a two-to-fourfold improvement over the latest split-connect algorithm, in high-accuracy and large-scale fitting scenarios. A classical Lena image study showed that the algorithm is at least twice as fast as the split-connect algorithm while using fewer than 80% of the latter's control points.
Affiliation(s)
- Jian Wang
- State Key Laboratory of Digital Manufacturing Equipment and Technology, Huazhong University of Science and Technology, Wuhan 430074, China
- Sheng Bi
- State Key Laboratory of Digital Manufacturing Equipment and Technology, Huazhong University of Science and Technology, Wuhan 430074, China
- Wenkang Liu
- State Key Laboratory of Digital Manufacturing Equipment and Technology, Huazhong University of Science and Technology, Wuhan 430074, China
- Liping Zhou
- State Key Laboratory of Digital Manufacturing Equipment and Technology, Huazhong University of Science and Technology, Wuhan 430074, China
- Tukun Li
- Centre for Precision Technologies, University of Huddersfield, Huddersfield HD1 3DH, UK
- Iain Macleod
- IMA Ltd., 29 Clay Lane, Hale, Cheshire WA15 8PJ, UK
- Richard Leach
- Faculty of Engineering, University of Nottingham, Nottingham NG8 1BB, UK

11
Lomas-Barrie V, Suarez-Espinoza M, Hernandez-Chavez G, Neme A. A New Method for Classifying Scenes for Simultaneous Localization and Mapping Using the Boundary Object Function Descriptor on RGB-D Points. Sensors (Basel, Switzerland) 2023; 23:8836. [PMID: 37960535 PMCID: PMC10648618 DOI: 10.3390/s23218836] [Received: 10/03/2023] [Revised: 10/19/2023] [Accepted: 10/23/2023] [Indexed: 11/15/2023]
Abstract
Scene classification in autonomous navigation is a highly complex task due to variations in the inspected scenes, such as light conditions and dynamic objects; it is also a challenge for small-form-factor computers to run modern, highly demanding algorithms. In this contribution, we introduce a novel method for classifying scenes in simultaneous localization and mapping (SLAM) using the boundary object function (BOF) descriptor on RGB-D points. Our method aims to reduce complexity with almost no performance cost. All the BOF-based descriptors from each object in a scene are combined to define the scene class. Instead of traditional image classification methods such as ORB or SIFT, we use the BOF descriptor to classify scenes. Through an RGB-D camera, we capture points and project them onto layers that are perpendicular to the camera plane. From each plane, we extract the boundaries of objects such as furniture, ceilings, walls, or doors. The extracted features compose a bag of visual words that is classified by a support vector machine. The proposed method achieves almost the same accuracy in scene classification as a SIFT-based algorithm and is 2.38× faster. The experimental results demonstrate the effectiveness of the proposed method in terms of accuracy and robustness on the 7-Scenes and SUN RGB-D datasets.
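Before the SVM sees anything, the per-object descriptors are pooled into a bag-of-visual-words histogram: each descriptor is assigned to its nearest codebook word and the word frequencies form the scene's feature vector. The pooling step can be sketched as follows (toy 2-D descriptors and a two-word codebook, not the BOF pipeline itself):

```python
def bow_histogram(descriptors, codebook):
    """Assign each descriptor to its nearest codebook word (squared
    Euclidean distance) and return the normalized word-frequency histogram."""
    hist = [0] * len(codebook)
    for d in descriptors:
        dists = [sum((a - b) ** 2 for a, b in zip(d, w)) for w in codebook]
        hist[dists.index(min(dists))] += 1
    n = len(descriptors)
    return [c / n for c in hist]

codebook = [(0.0, 0.0), (1.0, 1.0)]                      # toy 2-word vocabulary
descs = [(0.1, 0.0), (0.9, 1.1), (1.0, 0.9), (0.0, 0.2)]  # toy descriptors
h = bow_histogram(descs, codebook)
```

The resulting fixed-length histogram is what gets fed to the support vector machine, regardless of how many objects the scene contains.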
Affiliation(s)
- Victor Lomas-Barrie
- Instituto de Investigaciones en Matematicas Aplicadas y en Sistemas, Universidad Nacional Autonoma de Mexico, Mexico City 04510, Mexico
- Mario Suarez-Espinoza
- Facultad de Ingeniería, Universidad Nacional Autonoma de Mexico, Mexico City 04510, Mexico
- Antonio Neme
- Instituto de Investigaciones en Matematicas Aplicadas y en Sistemas, Universidad Nacional Autonoma de Mexico, Mexico City 04510, Mexico

12
Ahmed MF, Masood K, Fremont V, Fantoni I. Active SLAM: A Review on Last Decade. Sensors (Basel, Switzerland) 2023; 23:8097. [PMID: 37836928 PMCID: PMC10575033 DOI: 10.3390/s23198097] [Received: 08/17/2023] [Revised: 09/18/2023] [Accepted: 09/21/2023] [Indexed: 10/15/2023]
Abstract
This article presents a comprehensive review of the Active Simultaneous Localization and Mapping (A-SLAM) research conducted over the past decade. It explores the formulation, applications, and methodologies employed in A-SLAM, particularly in trajectory generation and control-action selection, drawing on concepts from Information Theory (IT) and the Theory of Optimal Experimental Design (TOED). This review includes both qualitative and quantitative analyses of various approaches, deployment scenarios, configurations, path-planning methods, and utility functions within A-SLAM research. Furthermore, this article introduces a novel analysis of Active Collaborative SLAM (AC-SLAM), focusing on collaborative aspects within SLAM systems. It includes a thorough examination of collaborative parameters and approaches, supported by both qualitative and statistical assessments. This study also identifies limitations in the existing literature and suggests potential avenues for future research. This survey serves as a valuable resource for researchers seeking insights into A-SLAM methods and techniques, offering a current overview of A-SLAM formulation.
Affiliation(s)
- Muhammad Farhan Ahmed
- Laboratoire des Sciences du Numérique de Nantes (LS2N), CNRS, Ecole Centrale de Nantes, 1 Rue de la Noë, 44300 Nantes, France
- Khayyam Masood
- Capgemini Engineering, 4 Avenue Didier Daurat, 31700 Blagnac, France
- Vincent Fremont
- Laboratoire des Sciences du Numérique de Nantes (LS2N), CNRS, Ecole Centrale de Nantes, 1 Rue de la Noë, 44300 Nantes, France
- Isabelle Fantoni
- Laboratoire des Sciences du Numérique de Nantes (LS2N), CNRS, Ecole Centrale de Nantes, 1 Rue de la Noë, 44300 Nantes, France

13
Wang X, Sun Y, Xie Y, Bin J, Xiao J. Deep reinforcement learning-aided autonomous navigation with landmark generators. Front Neurorobot 2023; 17:1200214. [PMID: 37674856 PMCID: PMC10477440 DOI: 10.3389/fnbot.2023.1200214] [Received: 04/04/2023] [Accepted: 08/08/2023] [Indexed: 09/08/2023]
Abstract
Mobile robots are playing an increasingly significant role in social life and industrial production, for example as search-and-rescue robots or autonomously exploring sweeping robots. Improving the accuracy of autonomous navigation for mobile robots is a pressing open problem. Because traditional navigation methods cannot achieve collision-free navigation in environments with dynamic obstacles, more and more researchers are replacing these overly conservative methods with autonomous navigation based on deep reinforcement learning (DRL). On the other hand, DRL training times are long, and the lack of long-term memory easily leads the robot into dead ends, which makes its application in real scenes more difficult. To shorten training time and prevent mobile robots from getting stuck and spinning in place, we design a new robot autonomous navigation framework that combines traditional global planning with DRL-based local planning. The entire navigation process is thus transformed into three steps: first, a traditional navigation algorithm finds the global path; next, several high-value landmarks are selected along that path; finally, the DRL algorithm moves the mobile robot toward each designated landmark in turn to complete the navigation, which greatly reduces training difficulty. Furthermore, to compensate for DRL's lack of long-term memory, we design a feature extraction network containing memory modules to preserve long-term dependencies in the input features.
Comparisons of our method with traditional navigation methods and end-to-end DRL navigation methods show that, when dynamic obstacles are numerous and fast-moving, our proposed method is on average 20% better than the second-ranked method in navigation efficiency (navigation time and path length), 34% better in safety (number of collisions), and 26.6% higher in success rate, while showing strong robustness.
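The landmark-selection step described in this abstract can be sketched as a simple spacing heuristic. The function name, the even-spacing rule, and the 2D waypoints below are illustrative assumptions, not the authors' implementation:

```python
import math

def select_landmarks(path, spacing):
    """Pick waypoints roughly every `spacing` meters along a global path.

    The DRL local planner is then driven toward each landmark in turn,
    instead of toward the distant final goal, which reduces training
    difficulty. (Illustrative sketch only, not the authors' code.)
    """
    landmarks = [path[0]]
    travelled = 0.0
    for a, b in zip(path, path[1:]):
        travelled += math.dist(a, b)
        if travelled >= spacing:
            landmarks.append(b)  # high-value waypoint for the local planner
            travelled = 0.0
    if landmarks[-1] != path[-1]:
        landmarks.append(path[-1])  # always keep the final goal
    return landmarks

print(select_landmarks([(0, 0), (1, 0), (2, 0), (3, 0), (4, 0)], 2.0))
```

In practice the paper scores landmarks by value rather than spacing them evenly; this sketch only shows how a global path is decomposed into intermediate DRL goals.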
Affiliation(s)
- Jian Xiao
- Department of Integrated Circuit Science and Engineering, Nanjing University of Posts and Telecommunications, Nanjing, China
14
Luo F, Liu Z, Zou F, Liu M, Cheng Y, Li X. Robust Localization of Industrial Park UGV and Prior Map Maintenance. SENSORS (BASEL, SWITZERLAND) 2023; 23:6987. [PMID: 37571770 PMCID: PMC10422659 DOI: 10.3390/s23156987] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/09/2023] [Revised: 07/27/2023] [Accepted: 07/29/2023] [Indexed: 08/13/2023]
Abstract
The precise localization of unmanned ground vehicles (UGVs) in industrial parks without prior GPS measurements presents a significant challenge. Simultaneous localization and mapping (SLAM) techniques can address this challenge by capturing environmental features, using sensors for real-time UGV localization. To increase the real-time localization accuracy and efficiency of UGVs and to improve the robustness of their odometry within industrial parks, thereby addressing motion control discontinuity and odometry drift, this paper proposes a tightly coupled LiDAR-IMU odometry method based on FAST-LIO2, integrating ground constraints and a novel feature extraction method. A novel prior map maintenance method is also proposed. The front-end module acquires the prior pose of the UGV by combining relocation detection and correction with point cloud registration. The proposed maintenance method then hierarchically partitions the prior maps and maintains them in real time. At the back end, real-time localization is achieved by the proposed tightly coupled LiDAR-IMU odometry incorporating ground constraints. Furthermore, a feature extraction method based on a bidirectional-projection plane slope difference filter is proposed, enabling efficient and accurate extraction of edge, planar and ground points. Finally, the proposed method is evaluated using self-collected datasets from industrial parks and the KITTI dataset. Our experimental results demonstrate that, compared to FAST-LIO2 and to FAST-LIO2 with the curvature feature extraction method, the proposed method improved odometry accuracy on the KITTI dataset by 30.19% and 48.24%, respectively, and odometry efficiency by 56.72% and 40.06%. When leveraging prior maps, the UGV achieved centimeter-level localization accuracy.
On the self-collected datasets, localization accuracy improved by 46.367% compared to FAST-LIO2, and localization efficiency improved by 32.33%. The z-axis localization accuracy of the proposed method reached millimeter level. The proposed prior map maintenance method reduced RAM usage by 64% compared to traditional methods.
Affiliation(s)
- Fanrui Luo
- School of Information Science and Engineering, Shenyang University of Technology, Shenyang 110870, China; (F.L.); (Y.C.); (X.L.)
- Zhenyu Liu
- School of Information Science and Engineering, Shenyang University of Technology, Shenyang 110870, China; (F.L.); (Y.C.); (X.L.)
- Fengshan Zou
- SIASUN Robot & Automation Co., Ltd., Shenyang 110169, China; (F.Z.); (M.L.)
- Mingmin Liu
- SIASUN Robot & Automation Co., Ltd., Shenyang 110169, China; (F.Z.); (M.L.)
- Yang Cheng
- School of Information Science and Engineering, Shenyang University of Technology, Shenyang 110870, China; (F.L.); (Y.C.); (X.L.)
- Xiaoyu Li
- School of Information Science and Engineering, Shenyang University of Technology, Shenyang 110870, China; (F.L.); (Y.C.); (X.L.)
15
Dong T, Zhang Y, Xiao Q, Huang Y. The Control Method of Autonomous Flight Avoidance Barriers of UAVs in Confined Environments. SENSORS (BASEL, SWITZERLAND) 2023; 23:5896. [PMID: 37447745 DOI: 10.3390/s23135896] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/11/2023] [Revised: 06/19/2023] [Accepted: 06/19/2023] [Indexed: 07/15/2023]
Abstract
This paper proposes an improved 3D Vector Field Histogram (3D-VFH) algorithm for the autonomous flight and local obstacle avoidance of multi-rotor unmanned aerial vehicles (UAVs) in confined environments. Firstly, since long-range point cloud information has no effect on local obstacle avoidance, the method converts the point cloud data into a polar coordinate system centered on the target point; this enables the UAV to use obstacle information effectively and improves the real-time performance of the algorithm. Secondly, a sliding window algorithm estimates the optimal flight path of the UAV and implements obstacle avoidance control, maintaining the attitude stability of the UAV during the avoidance maneuver. Finally, experimental analysis shows that the UAV maintains good attitude stability during obstacle avoidance flight, autonomously follows the expected trajectory, and avoids dynamic obstacles precisely.
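The first step, discarding long-range returns and converting the remainder to polar form about the vehicle, might look like the following sketch. The function name and the 2D simplification are assumptions for illustration, not the paper's algorithm:

```python
import math

def to_local_polar(points, uav_pos, max_range):
    """Convert 2D obstacle points to (range, bearing) about the UAV,
    discarding returns beyond max_range, which (as the abstract notes)
    do not affect local obstacle avoidance. Illustrative sketch only."""
    polar = []
    for x, y in points:
        dx, dy = x - uav_pos[0], y - uav_pos[1]
        r = math.hypot(dx, dy)
        if r <= max_range:
            polar.append((r, math.atan2(dy, dx)))
    return polar

# A nearby point ahead, a nearby point to the left, and a far point to drop.
print(to_local_polar([(1.0, 0.0), (0.0, 2.0), (50.0, 50.0)], (0.0, 0.0), 10.0))
```

A real 3D-VFH implementation would additionally bin these (range, bearing) pairs into a histogram of sectors before selecting a steering direction.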
Affiliation(s)
- Tiantian Dong
- School of Electronics and Information Engineering, Nanjing University of Information Science and Technology, Nanjing 210044, China
- School of Microelectronics, Jiangsu Vocational College of Information Technology, Wuxi 214153, China
- Yonghong Zhang
- School of Electronics and Information Engineering, Nanjing University of Information Science and Technology, Nanjing 210044, China
- Qianyu Xiao
- School of Applied Technology, Changzhou University, Changzhou 213164, China
- Yi Huang
- School of Applied Technology, Changzhou University, Changzhou 213164, China
16
A Novel Method for Fast Generation of 3D Objects from Multiple Depth Sensors. JOURNAL OF ARTIFICIAL INTELLIGENCE AND SOFT COMPUTING RESEARCH 2023. [DOI: 10.2478/jaiscr-2023-0009] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/18/2023] Open
Abstract
Scanning real 3D objects faces many technical challenges. Stationary solutions allow for accurate scanning but usually require special and expensive equipment. Competing mobile solutions (handheld scanners, LiDARs on vehicles, etc.) do not allow for accurate and fast mapping of the scanned object's surface. The article proposes an end-to-end automated solution that enables the use of widely available mobile and stationary scanners. The system generates a full 3D model of the object from multiple depth sensors. For this purpose, the scanned object is tagged with markers, whose types and positions are automatically detected and mapped to a template mesh. A reference template is automatically selected for the scanned object and then transformed to fit the scanner data with a non-rigid transformation. The solution allows for the fast scanning of complex objects of varied sizes, producing training data for 3D scene segmentation and classification systems. Its main advantages are its efficiency, which enables real-time scanning, and its ability to generate a mesh with a regular structure, both of which are critical for machine learning training data. The source code is available at https://github.com/SATOffice/improved_scanner3D.
17
Trybała P, Szrek J, Dębogórski B, Ziętek B, Blachowski J, Wodecki J, Zimroz R. Analysis of Lidar Actuator System Influence on the Quality of Dense 3D Point Cloud Obtained with SLAM. SENSORS (BASEL, SWITZERLAND) 2023; 23:721. [PMID: 36679518 PMCID: PMC9865594 DOI: 10.3390/s23020721] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/12/2022] [Revised: 01/02/2023] [Accepted: 01/04/2023] [Indexed: 06/17/2023]
Abstract
Mobile mapping technologies, based on techniques such as simultaneous localization and mapping (SLAM) and structure-from-motion (SfM), are being vigorously developed both in the scientific community and in industry. They are crucial concepts for automated 3D surveying and autonomous vehicles. For various applications, rotating multiline scanners, manufactured, for example, by Velodyne and Ouster, are utilized as the main sensor of the mapping hardware system. However, their principle of operation has a substantial drawback, as their scanning pattern creates natural gaps between the scanning lines. In some models, the vertical lidar field of view can also be severely limited. To overcome these issues, more sensors could be employed, which would significantly increase the cost of the mapping system. Instead, some investigators have added a tilting or rotating motor to the lidar. Although the effectiveness of such a solution is usually clearly visible, its impact on the quality of the acquired 3D data has not yet been investigated. This paper presents an adjustable mapping system, which allows for switching between a stable, tilting or fully rotating lidar position. A simple experiment in a building corridor was performed, simulating the conditions of a mobile robot passing through a narrow tunnel: a common setting for applications, such as mining surveying or industrial facility inspection. A SLAM algorithm is utilized to create a coherent 3D point cloud of the mapped corridor for three settings of the sensor movement. The extent of improvement in the 3D data quality when using the tilting and rotating lidar, compared to keeping a stable position, is quantified. Different metrics are proposed to account for different aspects of the 3D data quality, such as completeness, density and geometry coherence. The ability of SLAM algorithms to faithfully represent selected objects appearing in the mapped scene is also examined.
The results show that the fully rotating solution is optimal in terms of most of the metrics analyzed. However, the improvement observed from a horizontally mounted sensor to a tilting sensor was the most significant.
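One of the quality aspects mentioned above, point cloud density, can be proxied by the mean number of points per occupied voxel. The voxel size and function below are illustrative assumptions, not the paper's exact metric:

```python
from collections import Counter

def mean_voxel_density(points, voxel):
    """Average number of points per occupied voxel: a simple proxy for
    the 'density' aspect of 3D data quality (illustrative sketch)."""
    cells = Counter(
        (int(x // voxel), int(y // voxel), int(z // voxel))
        for x, y, z in points
    )
    return sum(cells.values()) / len(cells)

pts = [(0.1, 0.1, 0.1), (0.2, 0.2, 0.2), (1.5, 0.1, 0.1)]
print(mean_voxel_density(pts, 1.0))  # two points share one voxel, one is alone
```

Completeness could be measured analogously as the fraction of reference voxels that are occupied at all, which is why a rotating lidar that fills scan-line gaps scores better on such metrics.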
Affiliation(s)
- Paweł Trybała
- Faculty of Geoengineering, Mining and Geology, Wrocław University of Science and Technology, Na Grobli 15, 50-421 Wroclaw, Poland
- Jarosław Szrek
- Faculty of Mechanical Engineering, Wroclaw University of Science and Technology, Łukasiewicza 5, 50-371 Wroclaw, Poland
- Błażej Dębogórski
- Faculty of Geoengineering, Mining and Geology, Wrocław University of Science and Technology, Na Grobli 15, 50-421 Wroclaw, Poland
- Bartłomiej Ziętek
- Faculty of Geoengineering, Mining and Geology, Wrocław University of Science and Technology, Na Grobli 15, 50-421 Wroclaw, Poland
- Jan Blachowski
- Faculty of Geoengineering, Mining and Geology, Wrocław University of Science and Technology, Na Grobli 15, 50-421 Wroclaw, Poland
- Jacek Wodecki
- Faculty of Geoengineering, Mining and Geology, Wrocław University of Science and Technology, Na Grobli 15, 50-421 Wroclaw, Poland
- Radosław Zimroz
- Faculty of Geoengineering, Mining and Geology, Wrocław University of Science and Technology, Na Grobli 15, 50-421 Wroclaw, Poland
18
Lewis J, Lima PU, Basiri M. Collaborative 3D Scene Reconstruction in Large Outdoor Environments Using a Fleet of Mobile Ground Robots. SENSORS (BASEL, SWITZERLAND) 2022; 23:375. [PMID: 36616973 PMCID: PMC9824876 DOI: 10.3390/s23010375] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/10/2022] [Revised: 12/23/2022] [Accepted: 12/24/2022] [Indexed: 06/17/2023]
Abstract
Teams of mobile robots can be employed in many outdoor applications, such as precision agriculture, search and rescue, and industrial inspection, allowing an efficient and robust exploration of large areas and enhancing the operators' situational awareness. In this context, this paper describes an active and decentralized framework for the collaborative 3D mapping of large outdoor areas using a team of mobile ground robots under limited communication range and bandwidth. A real-time method is proposed that allows the sharing and registration of individual local maps, obtained from 3D LiDAR measurements, to build a global representation of the environment. A conditional peer-to-peer communication strategy is used to share information over long-range and short-range distances while considering the bandwidth constraints. Results from both real-world and simulated experiments, executed in an actual solar power plant and in its digital twin representation, demonstrate the reliability and efficiency of the proposed decentralized framework for such large outdoor operations.
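The conditional peer-to-peer strategy can be caricatured as a simple sharing rule. All names, thresholds and bandwidth figures below are assumptions for illustration, not the paper's protocol:

```python
def should_share(dist_m, map_bytes, short_range_m=30.0,
                 long_range_bw_bps=250_000, window_s=5.0):
    """Share a local map freely over the short-range link; over the
    long-range link, share only if the map fits within the available
    bandwidth window. (Illustrative sketch of a conditional rule.)"""
    if dist_m <= short_range_m:
        return True
    return map_bytes * 8 <= long_range_bw_bps * window_s

print(should_share(10.0, 5_000_000))   # short range: always share
print(should_share(100.0, 5_000_000))  # long range: map too large for window
```

The actual framework registers the shared local maps into a global representation; this sketch covers only the bandwidth-aware decision of when to transmit.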
19
Abdou M, Kamal HA. SDC-Net: End-to-End Multitask Self-Driving Car Camera Cocoon IoT-Based System. SENSORS (BASEL, SWITZERLAND) 2022; 22:9108. [PMID: 36501817 PMCID: PMC9739968 DOI: 10.3390/s22239108] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/27/2022] [Revised: 11/14/2022] [Accepted: 11/17/2022] [Indexed: 06/17/2023]
Abstract
Currently, deep learning and IoT are increasingly being combined in automotive applications, especially for assistance functionalities in autonomous driving. Crash avoidance, path planning, and automatic emergency braking are essential functionalities for autonomous driving. Trigger-action-based IoT platforms are widely used due to their simplicity and their ability to perform receptive tasks accurately. In this work, we propose the SDC-Net system: an end-to-end deep learning IoT hybrid system in which a multitask neural network is trained on different input representations from a camera-cocoon setup installed in the CARLA simulator. We build a benchmark dataset covering the different scenarios and corner cases that the vehicle may be exposed to while navigating safely and robustly during testing. The proposed system outputs the relevant control actions for crash avoidance, path planning and automatic emergency braking. Multitask learning with a bird's-eye-view input representation outperforms the next best representation in precision, recall, F1-score, accuracy, and average MSE by more than 11.62%, 9.43%, 10.53%, 6%, and 25.84%, respectively.
Affiliation(s)
- Hanan Ahmed Kamal
- Department of Electronics and Communications Engineering, Faculty of Engineering, Cairo University, Giza 12613, Egypt
20
Zhao Z, Zhang Y, Shi J, Long L, Lu Z. Robust Lidar-Inertial Odometry with Ground Condition Perception and Optimization Algorithm for UGV. SENSORS (BASEL, SWITZERLAND) 2022; 22:7424. [PMID: 36236522 PMCID: PMC9572049 DOI: 10.3390/s22197424] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/01/2022] [Revised: 09/23/2022] [Accepted: 09/26/2022] [Indexed: 06/16/2023]
Abstract
Unmanned ground vehicles (UGVs) have made considerable progress in recent years across many application scenarios, such as exploring unknown wild terrain, working in precision agriculture and serving in emergency rescue. Due to the complex ground conditions and changeable surroundings of these unstructured environments, it is challenging for UGVs to obtain robust and accurate state estimations from sensor fusion odometry without scenario-specific perception and optimization. In this paper, based on an error-state Kalman filter (ESKF) fusion model, we propose a robust lidar-inertial odometry with a novel ground condition perception and optimization algorithm designed specifically for UGVs. The probability distribution obtained from raw inertial measurement unit (IMU) measurements over a time window, together with the ESKF state estimate, is used to evaluate the flatness of the ground in real time; then, by analyzing the relationship between the current ground condition and the accuracy of the state estimation, the tightly coupled lidar-inertial odometry is dynamically optimized by adjusting the parameters of the lidar point processing algorithm, yielding robust and accurate ego-motion state estimates for UGVs. The method was validated in various environments with changeable ground conditions; its robustness and accuracy are demonstrated by consistently accurate state estimation across ground conditions in comparison with state-of-the-art lidar-inertial odometry systems.
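The ground-flatness cue, a statistic of raw IMU data over a time window, can be sketched as a simple variance check. The window, axis choice and threshold below are illustrative assumptions, not the paper's algorithm:

```python
from statistics import pvariance

def ground_is_rough(az_window, threshold):
    """Flag rough ground when the variance of vertical acceleration
    over a sliding window exceeds a threshold, signalling the odometry
    to retune its lidar point processing. (Illustrative sketch.)"""
    return pvariance(az_window) > threshold

smooth = [9.80, 9.81, 9.79, 9.80, 9.81]  # near-constant gravity reading
rough = [8.5, 11.2, 9.0, 10.8, 9.4]      # strong vertical excitation
print(ground_is_rough(smooth, 0.05), ground_is_rough(rough, 0.05))
```

The paper combines such a distribution-based cue with the ESKF state estimate itself; this sketch shows only the IMU half of that decision.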
Affiliation(s)
- Zixu Zhao
- Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Yucheng Zhang
- Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
- Jinglin Shi
- Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
- Long Long
- Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
- Zaiwang Lu
- Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
- University of Chinese Academy of Sciences, Beijing 100049, China
21
Căilean AM, Beguni C, Avătămăniței SA, Dimian M, Popa V. Design, Implementation and Experimental Investigation of a Pedestrian Street Crossing Assistance System Based on Visible Light Communications. SENSORS (BASEL, SWITZERLAND) 2022; 22:5481. [PMID: 35897984 PMCID: PMC9331235 DOI: 10.3390/s22155481] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/29/2022] [Revised: 07/15/2022] [Accepted: 07/20/2022] [Indexed: 05/14/2023]
Abstract
In urban areas, pedestrians are the category of road users most exposed to road accident fatalities. In this context, the present article proposes a completely new architecture aimed at increasing the safety of pedestrians on the crosswalk. The first component of the design is a pedestrian detection system, which identifies the user's presence in the region of the crosswalk and determines whether a street crossing is imminent or already underway. The second component is the visible light communications (VLC) part, which transmits this information to approaching vehicles. The proposed architecture has been implemented at a regular scale and experimentally evaluated in outdoor conditions. The experimental results showed a 100% overall pedestrian detection rate. The VLC system achieved a communication distance of between 5 and 40 m when using a standard LED crosswalk sign as a VLC emitter, while maintaining a bit error ratio between 10⁻⁷ and 10⁻⁵. These results demonstrate that VLC technology is now ready for real applications, making the transition from a high-potential technology to a confirmed one. As far as we know, this is the first article presenting such a pedestrian street crossing assistance system.
Affiliation(s)
- Alin-Mihai Căilean
- Integrated Center for Research, Development and Innovation in Advanced Materials, Nanotechnologies, and Distributed Systems for Fabrication and Control, Stefan cel Mare University of Suceava, 720229 Suceava, Romania; (C.B.); (S.-A.A.); (M.D.)
- Department of Computers, Electronics and Automation, Stefan cel Mare University of Suceava, 720229 Suceava, Romania;
- Laboratoire D’ingénierie des Systèmes de Versailles (LISV), Paris-Saclay University, 78140 Velizy-Villacoublay, France
- Correspondence:
- Cătălin Beguni
- Integrated Center for Research, Development and Innovation in Advanced Materials, Nanotechnologies, and Distributed Systems for Fabrication and Control, Stefan cel Mare University of Suceava, 720229 Suceava, Romania; (C.B.); (S.-A.A.); (M.D.)
- Department of Computers, Electronics and Automation, Stefan cel Mare University of Suceava, 720229 Suceava, Romania
- Sebastian-Andrei Avătămăniței
- Integrated Center for Research, Development and Innovation in Advanced Materials, Nanotechnologies, and Distributed Systems for Fabrication and Control, Stefan cel Mare University of Suceava, 720229 Suceava, Romania; (C.B.); (S.-A.A.); (M.D.)
- Department of Computers, Electronics and Automation, Stefan cel Mare University of Suceava, 720229 Suceava, Romania
- Mihai Dimian
- Integrated Center for Research, Development and Innovation in Advanced Materials, Nanotechnologies, and Distributed Systems for Fabrication and Control, Stefan cel Mare University of Suceava, 720229 Suceava, Romania; (C.B.); (S.-A.A.); (M.D.)
- Department of Computers, Electronics and Automation, Stefan cel Mare University of Suceava, 720229 Suceava, Romania
- Valentin Popa
- Department of Computers, Electronics and Automation, Stefan cel Mare University of Suceava, 720229 Suceava, Romania