1. Towards a Multi-Perspective Time of Flight Laser Ranging Device Based on Mirrors and Prisms. Appl Sci (Basel) 2022. DOI: 10.3390/app12147121
Abstract
This paper investigates the feasibility of redirecting the field of view (FOV) of a light-based time-of-flight (ToF) ranging device, commonly known as a pulsed lidar, using fixed mirrors and prisms for possible future use in robotics. The emphasis is on configurations where the FOV redirection element is positioned beyond the ranging device’s dead zone. A custom-made direct ToF ranging device with time-over-threshold (TOT)-based walk error compensation was used to evaluate the effects of the FOV redirecting optics on range measurement accuracy and precision. The tests include redirecting the FOV with a clean prism with anti-reflective (AR) coating on its legs, as well as with a regular and a first-surface mirror in both a clean and a dusted state. The study finds the prism to be unsuitable due to parasitic reflections, which ruin the ranging data. The clean mirrors were found to have no noticeable effect on ranging accuracy. When dusty, however, mirrors introduce a negative measurement error. This effect is most pronounced when a mirror is positioned toward the end of the partial dead zone of the ToF rangefinder, and loses influence as the mirror is moved farther away. The error is attributed to the parasitic reflection off the dust on the mirror, which shortens the measured time of detection of the pulse reflected off the real target and interferes with the walk error compensation by widening the detected pulse.
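The TOT-based walk error compensation mentioned in the abstract can be illustrated with a small sketch: a strong echo crosses the detection threshold earlier than a weak one, and the pulse's time over threshold serves as an amplitude proxy that indexes a timing correction. The function below is a hypothetical illustration, not the authors' implementation; the calibration table and all names are invented.

```python
# Hypothetical sketch of time-over-threshold (TOT) walk error compensation.
# A strong echo triggers the comparator earlier than a weak one (walk error);
# the pulse width above threshold (TOT) tracks amplitude, so a calibration
# table maps TOT to a timing correction.

def compensate_walk_error(t_rise, t_fall, calibration):
    """Correct the threshold-crossing timestamp of an echo.

    t_rise, t_fall -- times (ns) at which the echo crosses the threshold
    calibration    -- sorted (tot_ns, correction_ns) pairs measured against
                      a target at a known distance
    """
    tot = t_fall - t_rise
    # Clamp outside the calibrated range, interpolate linearly inside it.
    if tot <= calibration[0][0]:
        corr = calibration[0][1]
    elif tot >= calibration[-1][0]:
        corr = calibration[-1][1]
    else:
        for (x0, y0), (x1, y1) in zip(calibration, calibration[1:]):
            if x0 <= tot <= x1:
                corr = y0 + (y1 - y0) * (tot - x0) / (x1 - x0)
                break
    return t_rise - corr  # corrected time of arrival

# Wider pulse -> stronger echo -> earlier trigger -> larger correction.
cal = [(2.0, 0.1), (6.0, 0.5), (10.0, 0.8)]
print(compensate_walk_error(12.0, 20.0, cal))
```

A parasitic reflection (e.g., off dust) that widens the detected pulse inflates the TOT, pulling the corrected timestamp earlier, which is consistent with the negative error the study reports.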
2. Wang H, Zhang L, Kong Q, Zhu W, Zheng J, Zhuang L, Xu X. Motion planning in complex urban environments: An industrial application on autonomous last-mile delivery vehicles. J Field Robot 2022. DOI: 10.1002/rob.22107
Affiliation(s)
- Haiming Wang, Autonomous Driving Division, JD.com American Technologies Corporation, Mountain View, California, USA
- Liangliang Zhang, Autonomous Driving Division, JD.com American Technologies Corporation, Mountain View, California, USA
- Qi Kong, Autonomous Driving Division, JD.com American Technologies Corporation, Mountain View, California, USA
- Weicheng Zhu, Autonomous Driving Division, JD.com American Technologies Corporation, Mountain View, California, USA
- Jie Zheng, Autonomous Driving Division, JD.com, Beijing, China
- Li Zhuang, Autonomous Driving Division, JD.com American Technologies Corporation, Mountain View, California, USA
- Xin Xu, Autonomous Driving Division, JD.com, Beijing, China
3. Automatic Labeling to Generate Training Data for Online LiDAR-Based Moving Object Segmentation. IEEE Robot Autom Lett 2022. DOI: 10.1109/lra.2022.3166544
4. Zurn J, Burgard W. Self-Supervised Moving Vehicle Detection From Audio-Visual Cues. IEEE Robot Autom Lett 2022. DOI: 10.1109/lra.2022.3183931
Affiliation(s)
- Jannik Zurn, Department of Computer Science, University of Freiburg, Freiburg im Breisgau, Germany
- Wolfram Burgard, Department of Computer Science and Engineering, University of Nuremberg, Erlangen, Germany
5. Reyes-Muñoz A, Guerrero-Ibáñez J. Vulnerable Road Users and Connected Autonomous Vehicles Interaction: A Survey. Sensors 2022; 22:4614. PMID: 35746397; PMCID: PMC9229412; DOI: 10.3390/s22124614
Abstract
Within the vehicular traffic ecosystem there is a group of users known as Vulnerable Road Users (VRUs), which includes pedestrians, cyclists, and motorcyclists, among others. Connected autonomous vehicles (CAVs), in turn, combine communication technologies that keep the vehicle ubiquitously connected with automated technologies that assist or replace the human driver. Autonomous vehicles are envisioned as a viable way to reduce road accidents, providing a safe environment for all road users, especially the most vulnerable. One of the challenges facing autonomous vehicles is to develop mechanisms that facilitate their integration, safely and efficiently, not only into the mobility environment but also into the road society. In this paper, we analyze and discuss how this integration can take place, reviewing the work developed in recent years at each stage of vehicle-human interaction, analyzing the challenges posed by vulnerable users, and proposing solutions that contribute to addressing these challenges.
Affiliation(s)
- Angélica Reyes-Muñoz, Computer Architecture Department, Polytechnic University of Catalonia, 08860 Barcelona, Spain (correspondence)
6. Sorokin M, Tan J, Liu CK, Ha S. Learning to Navigate Sidewalks in Outdoor Environments. IEEE Robot Autom Lett 2022. DOI: 10.1109/lra.2022.3145947
7. Lluvia I, Lazkano E, Ansuategi A. Active Mapping and Robot Exploration: A Survey. Sensors (Basel) 2021; 21:2445. PMID: 33918107; PMCID: PMC8037480; DOI: 10.3390/s21072445
Abstract
Simultaneous localization and mapping addresses the problem of building a map of the environment, without any prior information, from the data obtained by one or more sensors. In most situations the robot is driven by a human operator, but some systems are capable of navigating autonomously while mapping; this is called active simultaneous localization and mapping. This strategy focuses on actively computing trajectories that explore the environment while building a map with minimum error. In this paper, a comprehensive review of the research developed in this field is provided, targeting the most relevant contributions in indoor mobile robotics.
Affiliation(s)
- Iker Lluvia
- Autonomous and Intelligent Systems Unit, Fundación Tekniker, 20600 Eibar, Gipuzkoa, Spain;
| | - Elena Lazkano
- Robotics and Autonomous Systems Group (RSAIT), Computer Science and Artificial Intelligence Department, Faculty of Informatics, University of the Basque Country (UPV/EHU), 20018 Donostia, Gipuzkoa, Spain;
| | - Ander Ansuategi
- Autonomous and Intelligent Systems Unit, Fundación Tekniker, 20600 Eibar, Gipuzkoa, Spain;
| |
8. Zurn J, Burgard W, Valada A. Self-Supervised Visual Terrain Classification From Unsupervised Acoustic Feature Learning. IEEE Trans Robot 2021. DOI: 10.1109/tro.2020.3031214
9. Liu T, Luo W, Ma L, Huang JJ, Stathaki T, Dai T. Coupled Network for Robust Pedestrian Detection With Gated Multi-Layer Feature Extraction and Deformable Occlusion Handling. IEEE Trans Image Process 2020; 30:754-766. PMID: 33237856; DOI: 10.1109/tip.2020.3038371
Abstract
Pedestrian detection methods have been significantly improved with the development of deep convolutional neural networks. Nevertheless, detecting small-scale pedestrians and occluded pedestrians remains a challenging problem. In this paper, we propose a pedestrian detection method with a coupled network to simultaneously address these two issues. One of the sub-networks, the gated multi-layer feature extraction sub-network, aims to adaptively generate discriminative features for pedestrian candidates in order to robustly detect pedestrians with large variations in scale. The second sub-network handles the occlusion problem of pedestrian detection by using deformable region-of-interest (RoI) pooling. We investigate two different gate units for the gated sub-network, namely the channel-wise gate unit and the spatio-wise gate unit, which enhance the representation ability of the regional convolutional features along the channel dimension or across the spatial domain, respectively. Ablation studies have validated the effectiveness of both the proposed gated multi-layer feature extraction sub-network and the deformable occlusion handling sub-network. With the coupled framework, our proposed pedestrian detector achieves promising results on two pedestrian datasets, especially on detecting small or occluded pedestrians. On the CityPersons dataset, the proposed detector achieves the lowest miss rates (40.78% and 34.60%) on detecting small and occluded pedestrians, surpassing the second-best method by 6.0% and 5.87%, respectively.
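As a rough illustration of what a channel-wise gate does, each channel's feature map can be rescaled by a sigmoid gate computed from its globally pooled response, so informative channels are emphasized and weak ones suppressed. This is a sketch in the spirit of the description above, not the paper's architecture; the gate parameters and toy feature maps are invented.

```python
import math

# Toy channel-wise gate: pool each channel globally, squash the pooled value
# through a sigmoid, and rescale the channel by the resulting gate in (0, 1).

def channel_gate(features, w, b):
    """features: {channel_name: 2-D list}; w, b: scalar gate parameters."""
    gated = {}
    for name, fmap in features.items():
        flat = [v for row in fmap for v in row]
        avg = sum(flat) / len(flat)                  # global average pooling
        g = 1.0 / (1.0 + math.exp(-(w * avg + b)))   # sigmoid gate
        gated[name] = [[g * v for v in row] for row in fmap]
    return gated

feats = {"weak": [[0.0, 0.0]], "strong": [[4.0, 4.0]]}
out = channel_gate(feats, 1.0, 0.0)
print(out)
```

In the real network the gate would be a small learned sub-layer per channel rather than the shared scalar pair used here.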
10. Radwan N, Burgard W, Valada A. Multimodal interaction-aware motion prediction for autonomous street crossing. Int J Rob Res 2020. DOI: 10.1177/0278364920961809
Abstract
For mobile robots navigating on sidewalks, the ability to safely cross street intersections is essential. Most existing approaches rely on the recognition of the traffic light signal to make an informed crossing decision. Although these approaches have been crucial enablers for urban navigation, the capabilities of robots employing them are still limited to navigating only on streets that contain signalized intersections. In this article, we address this challenge and propose a multimodal convolutional neural network framework to predict the safety of a street intersection for crossing. Our architecture consists of two subnetworks: an interaction-aware trajectory estimation stream, the interaction-aware temporal convolutional neural network (IA-TCNN), which predicts the future states of all observed traffic participants in the scene; and a traffic light recognition stream, AtteNet. Our IA-TCNN utilizes dilated causal convolutions to model the behavior of all the observable dynamic agents in the scene without explicitly assigning priorities to the interactions among them, whereas AtteNet utilizes squeeze-excitation blocks to learn a content-aware mechanism for selecting the relevant features from the data, thereby improving noise robustness. Learned representations from the traffic light recognition stream are fused with the estimated trajectories from the motion prediction stream to learn the crossing decision. Incorporating the uncertainty information from both modules enables our architecture to learn a likelihood function that is robust to noise and mispredictions from either subnetwork. Simultaneously, by learning to estimate the motion trajectories of the surrounding traffic participants and incorporating knowledge of the traffic light signal, our network learns a robust crossing procedure that is invariant to the type of street intersection.
Furthermore, we extend our previously introduced Freiburg Street Crossing dataset with sequences captured at multiple intersections of varying types, demonstrating complex interactions among the traffic participants as well as various lighting and weather conditions. We perform comprehensive experimental evaluations on public datasets as well as our Freiburg Street Crossing dataset, which demonstrate that our network achieves state-of-the-art performance for each of the subtasks, as well as for the crossing safety prediction. Moreover, we deploy the proposed architectural framework on a robotic platform and conduct real-world experiments that demonstrate the suitability of the approach for real-time deployment and robustness to various environments.
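The dilated causal convolutions that IA-TCNN is said to build on can be sketched in a few lines: each output depends only on present and past samples (causality), and the dilation spaces the filter taps so stacked layers cover long histories cheaply. This is a generic illustration with made-up weights, not the paper's network.

```python
# Minimal dilated causal 1-D convolution over a scalar time series.
# y[t] = sum_k w[k] * x[t - k*dilation], zero-padded on the left,
# so y[t] never looks at future samples.

def dilated_causal_conv1d(x, w, dilation):
    y = []
    for t in range(len(x)):
        acc = 0.0
        for k, wk in enumerate(w):
            i = t - k * dilation
            if i >= 0:          # taps that would reach before t=0 read zeros
                acc += wk * x[i]
        y.append(acc)
    return y

x = [1.0, 2.0, 3.0, 4.0, 5.0]
# With w = [1, -1] and dilation 2 this computes x[t] - x[t-2].
print(dilated_causal_conv1d(x, [1.0, -1.0], dilation=2))  # -> [1.0, 2.0, 2.0, 2.0, 2.0]
```

Stacking such layers with dilations 1, 2, 4, ... grows the receptive field exponentially with depth, which is what makes temporal convolutional networks practical for trajectory histories.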
Affiliation(s)
- Noha Radwan, Department of Computer Science, University of Freiburg, Germany
- Wolfram Burgard, Department of Computer Science, University of Freiburg, Germany
- Abhinav Valada, Department of Computer Science, University of Freiburg, Germany
11. Farid A, Matsumaru T. Path Planning in Outdoor Pedestrian Settings Using 2D Digital Maps. J Robot Mechatron 2019. DOI: 10.20965/jrm.2019.p0464
Abstract
In this article, a framework for planning sidewalk-wise paths in data-limited pedestrian environments is presented. It visually recognizes city blocks in 2D digital maps (e.g., Google Maps and OpenStreetMap) using contour detection, and then applies graph theory to infer a pedestrian path from start to finish. Two main problems have been identified: first, several locations worldwide (e.g., suburban and rural areas) lack recorded data on street crossings and pedestrian walkways; second, the continuous process of recording maps (i.e., digital cartography) is, to our current knowledge, manual and has not yet been fully automated in practice. Both issues contribute to a scaling problem, in which continuously monitoring and recording such data at a global scale becomes time- and effort-consuming. The purpose of this framework is therefore to produce path plans that do not depend on pre-recorded (e.g., using simultaneous localization and mapping (SLAM)) or data-rich pedestrian maps, thus facilitating navigation for mobile robots and people with visual impairment. Assuming that all roads are crossable, the framework was able to produce pedestrian paths, at 75% accuracy on our test set, for most locations where data on sidewalks and street crossings were indeed limited, but certain challenges remain to attain higher accuracy and to match real-world settings. Additionally, we review works in the literature that describe how to utilize such path plans effectively.
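The graph-search stage of such a framework can be sketched as follows. Here a hand-coded occupancy grid stands in for the contour-detected city blocks, and breadth-first search plays the role of the graph-theoretic path inference; all data are invented for illustration.

```python
from collections import deque

# Toy stand-in for the pipeline described above: city blocks become blocked
# cells (1), the space between them is walkable (0), and BFS finds a
# shortest pedestrian route from start to goal.

def pedestrian_path(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}          # visited set + back-pointers in one dict
    q = deque([start])
    while q:
        cell = q.popleft()
        if cell == goal:          # reconstruct the route by walking back
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = (r, c)
                q.append((nr, nc))
    return None                   # no walkable route exists

grid = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(pedestrian_path(grid, (0, 0), (0, 2)))
```

In the actual framework the graph would come from contours extracted from map imagery rather than a hand-written grid, but the inference step is the same shape.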
12. Robot–City Interaction: Mapping the Research Landscape—A Survey of the Interactions Between Robots and Modern Cities. Int J Soc Robot 2019. DOI: 10.1007/s12369-019-00534-x
13. Luo Y, Cai P, Bera A, Hsu D, Lee WS, Manocha D. PORCA: Modeling and Planning for Autonomous Driving Among Many Pedestrians. IEEE Robot Autom Lett 2018. DOI: 10.1109/lra.2018.2852793
14. Radwan N, Valada A, Burgard W. VLocNet++: Deep Multitask Learning for Semantic Visual Localization and Odometry. IEEE Robot Autom Lett 2018. DOI: 10.1109/lra.2018.2869640
15. Hosoda Y, Sawahashi R, Machinaka N, Yamazaki R, Sadakuni Y, Onda K, Kusakari R, Kimba M, Oishi T, Kuroda Y. Robust Road-Following Navigation System with a Simple Map. J Robot Mechatron 2018. DOI: 10.20965/jrm.2018.p0552
Abstract
This paper presents a novel autonomous navigation system. Our proposed system is based on a simple map, an Edge-Node Graph created from an electronic map. The system consists of “Localization,” which estimates which edge of the Edge-Node Graph the robot is on, “Environmental Recognition,” which recognizes the environment around the robot, and “Path Planning,” which avoids objects. Since the robot travels using the Edge-Node Graph, there is no need to prepare an environmental map in advance. In addition, the system is quite robust, since it relies less on prior information. To show the effectiveness of our system, we conducted experiments on each elemental technology as well as several traveling tests.
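A minimal sketch of what routing on such an Edge-Node Graph might look like, assuming nodes are intersections taken from an electronic map and edges are road segments with lengths: Dijkstra's algorithm returns the node sequence the robot would then follow edge by edge. The graph, names, and distances below are invented, not taken from the paper.

```python
import heapq

# Dijkstra over an Edge-Node Graph given as {node: [(neighbor, length_m), ...]}.
# Returns (total_length, node_sequence) for the shortest route.

def route(graph, start, goal):
    pq = [(0.0, start, [start])]   # (cost so far, node, path so far)
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node in seen:
            continue
        seen.add(node)
        if node == goal:
            return cost, path
        for nxt, length in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(pq, (cost + length, nxt, path + [nxt]))
    return float("inf"), []        # goal unreachable

g = {"A": [("B", 40.0), ("C", 90.0)],
     "B": [("C", 30.0)],
     "C": []}
print(route(g, "A", "C"))  # -> (70.0, ['A', 'B', 'C'])
```

Because the map is just this sparse graph, no dense environmental map needs to be built or stored in advance, which is the point the abstract makes.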
16. Sujiwo A, Takeuchi E, Morales LY, Akai N, Darweesh H, Ninomiya Y, Edahiro M. Robust and Accurate Monocular Vision-Based Localization in Outdoor Environments of Real-World Robot Challenge. J Robot Mechatron 2017. DOI: 10.20965/jrm.2017.p0685
Abstract
This paper describes our approach to performing robust monocular camera metric localization in the dynamic environments of Tsukuba Challenge 2016. We address two issues related to vision-based navigation. First, we improved coverage by building a custom vocabulary out of the scene and improving the place recognition routine, which is key for global localization. Second, we established the possibility of lifelong localization by using the previous year’s map. Experimental results show that localization coverage was higher than 90% for six different data sets taken in different years, while average localization errors were under 0.2 m. Finally, the average coverage for data sets tested with maps taken in different years was 75%.
17. Darweesh H, Takeuchi E, Takeda K, Ninomiya Y, Sujiwo A, Morales LY, Akai N, Tomizawa T, Kato S. Open Source Integrated Planner for Autonomous Navigation in Highly Dynamic Environments. J Robot Mechatron 2017. DOI: 10.20965/jrm.2017.p0668
Abstract
Planning is one of the cornerstones of autonomous robot navigation. In this paper we introduce an open source planner for mobile robot navigation called “OpenPlanner,” composed of a global path planner, a behavior state generator, and a local planner. OpenPlanner requires a map and a goal position to compute a global path and execute it while avoiding obstacles. It can also trigger behaviors, such as stopping at traffic lights. The global planner generates smooth global paths to be used as a reference, after considering traffic costs annotated in the map. The local planner generates smooth, obstacle-free local trajectories which are used by a trajectory tracker to achieve low-level control. The behavior state generator handles situations such as path tracking, object following, obstacle avoidance, emergency stopping, stopping at stop signs, and traffic light negotiation. OpenPlanner is evaluated in simulation and field experimentation using a non-holonomic Ackermann-steering mobile robot. Results from simulation and field experimentation indicate that OpenPlanner can generate global and local paths dynamically, navigate smoothly through highly dynamic environments, and operate reliably in real time. OpenPlanner has been implemented on the Robot Operating System (ROS) within the Autoware open source autonomous driving framework.
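The behavior state generator described above can be caricatured as a small priority-ordered state machine that picks exactly one behavior per planning cycle. The condition names and priority ordering below are invented for illustration and are not OpenPlanner's actual logic.

```python
# Toy behavior state generator: evaluate conditions in priority order and
# return the single behavior the local planner should execute this cycle.
# Behavior names follow the situations listed in the abstract.

def next_behavior(sensor):
    """sensor: dict of boolean conditions observed this planning cycle."""
    if sensor.get("path_blocked"):
        return "EMERGENCY_STOP"
    if sensor.get("red_light"):
        return "TRAFFIC_LIGHT_STOP"
    if sensor.get("obstacle_in_lane"):
        return "OBSTACLE_AVOIDANCE"
    if sensor.get("slow_object_ahead"):
        return "OBJECT_FOLLOWING"
    return "PATH_TRACKING"        # default: follow the global path

print(next_behavior({"red_light": True}))  # -> TRAFFIC_LIGHT_STOP
```

A real behavior generator also carries state between cycles (e.g., how long it has waited at a stop sign), which this stateless sketch omits.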
18. Aotani Y, Ienaga T, Machinaka N, Sadakuni Y, Yamazaki R, Hosoda Y, Sawahashi R, Kuroda Y. Development of Autonomous Navigation System Using 3D Map with Geometric and Semantic Information. J Robot Mechatron 2017. DOI: 10.20965/jrm.2017.p0639
Abstract
This paper presents an autonomous navigation system based on an accurate 3D map, which includes “geometric information” (e.g., curbs, walls, street trees) and “semantic information” (e.g., sidewalks, roadways, crosswalks) extracted by environmental recognition. Using the semantic map, we can identify suitable traversable areas and keep away from undesired places. Furthermore, by comparing the map with real-time 3D geometric information from LIDAR, we obtain the robot position. To show the effectiveness of our system, we conducted a 3D semantic map construction experiment and a driving test. The results show that the proposed system enables accurate, highly reproducible localization and stable autonomous mobility.
19. Speck D, Dornhege C, Burgard W. Shakey 2016 - How Much Does it Take to Redo Shakey the Robot? IEEE Robot Autom Lett 2017. DOI: 10.1109/lra.2017.2665694
20. Sprunk C, Lau B, Pfaff P, Burgard W. An accurate and efficient navigation system for omnidirectional robots in industrial environments. Auton Robots 2016. DOI: 10.1007/s10514-016-9557-1