1
Ziraldo E, Govers ME, Oliver M. Enhancing Autonomous Vehicle Decision-Making at Intersections in Mixed-Autonomy Traffic: A Comparative Study Using an Explainable Classifier. Sensors (Basel) 2024; 24:3859. [PMID: 38931644] [PMCID: PMC11207970] [DOI: 10.3390/s24123859]
Abstract
The transition to fully autonomous roadways will include a long period of mixed-autonomy traffic. Mixed-autonomy roadways pose a challenge for autonomous vehicles (AVs), which use conservative driving behaviours to safely negotiate complex scenarios. This can lead to congestion and collisions with human drivers who are accustomed to more confident driving styles. In this work, an explainable multi-variate time series classifier, Time Series Forest (TSF), is compared to two state-of-the-art models in a priority-taking classification task. Responses to left-turning hazards at signalized and stop-sign-controlled intersections were collected using a full-vehicle driving simulator. The dataset comprised a combination of AV sensor-collected and V2V (vehicle-to-vehicle) transmitted features. Each scenario forced participants to either take ("go") or yield ("no go") priority at the intersection. TSF performed comparably for both the signalized and sign-controlled datasets, although all classifiers performed better on the signalized dataset. The inclusion of V2V data led to a slight increase in accuracy for all models and a substantial increase in the true positive rate of the stop-sign-controlled models. Additionally, incorporating the V2V data resulted in fewer chosen features, thereby decreasing the model complexity while maintaining accuracy. Including the selected features in an AV planning model is hypothesized to reduce the need for conservative AV driving behaviour without increasing the risk of collision.
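Time Series Forest, the explainable classifier compared in this entry, classifies a series from simple summary statistics (mean, standard deviation, least-squares slope) computed over random intervals and fed to a decision-tree ensemble. A minimal pure-Python sketch of that interval-feature step (the function name and the fixed example interval are illustrative, not taken from the paper):

```python
import statistics

def interval_features(series, start, end):
    """Mean, standard deviation, and least-squares slope of series[start:end]."""
    window = series[start:end]
    n = len(window)
    mean = sum(window) / n
    std = statistics.pstdev(window)
    # Least-squares slope of the values against index positions 0..n-1.
    xbar = (n - 1) / 2
    denom = sum((i - xbar) ** 2 for i in range(n))
    slope = sum((i - xbar) * (v - mean) for i, v in enumerate(window)) / denom
    return mean, std, slope

# A linearly increasing series has slope 1.0 on any interval.
feats = interval_features([0.0, 1.0, 2.0, 3.0, 4.0], 0, 5)
```

In the full algorithm, each tree draws many random intervals and trains on the resulting feature vectors; because the features are ordinary statistics over named time windows, the learned splits remain inspectable, which is what makes the method explainable.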
Affiliation(s)
- Michele Oliver
- School of Engineering, University of Guelph, Guelph, ON N1G 2W1, Canada
2
Kadav P, Sharma S, Fanas Rojas J, Patil P, Wang C(R), Ekti AR, Meyer RT, Asher ZD. Automated Lane Centering: An Off-the-Shelf Computer Vision Product vs. Infrastructure-Based Chip-Enabled Raised Pavement Markers. Sensors (Basel) 2024; 24:2327. [PMID: 38610538] [PMCID: PMC11014404] [DOI: 10.3390/s24072327]
Abstract
Safe autonomous vehicle (AV) operations depend on an accurate perception of the driving environment, which necessitates the use of a variety of sensors. Computational algorithms must then process all of this sensor data, which typically results in a high on-vehicle computational load. For example, existing lane markings are designed for human drivers, can fade over time, and can be contradictory in construction zones, all of which requires specialized sensing and computational processing in an AV. However, this standard process can be avoided if the lane information is simply transmitted directly to the AV. High-definition maps and roadside units (RSUs) can be used for direct data transmission to the AV, but can be prohibitively expensive to establish and maintain. Additionally, to ensure robust and safe AV operations, more redundancy is beneficial, and a cost-effective, passive solution is essential to address this need. In this research, we propose a new infrastructure information source (IIS), chip-enabled raised pavement markers (CERPMs), which provide environmental data to the AV while also decreasing the AV compute load and the associated vehicle energy use. CERPMs are installed in place of traditional ubiquitous raised pavement markers along road lane lines and transmit geospatial information, along with the speed limit, directly to nearby vehicles using the long-range wide-area network (LoRaWAN) protocol. This approach is then compared to the commercial off-the-shelf Mobileye system, which uses computer vision processing of lane markings. Our perception subsystem processes the raw data from both CERPMs and Mobileye to generate a viable path required for a lane centering (LC) application. To evaluate the detection performance of both systems, we consider three test routes with varying conditions. Our results show that the Mobileye system failed to detect lane markings when the road curvature exceeded ±0.016 m⁻¹. For the steep curvature test scenario, it could only detect lane markings on both sides of the road for just 6.7% of the given test route. On the other hand, the CERPMs transmit the programmed geospatial information to the perception subsystem on the vehicle to generate a reference trajectory required for vehicle control, and they successfully generated this reference trajectory in all test scenarios. Moreover, the CERPMs can be detected up to 340 m from the vehicle's position. Our overall conclusion is that CERPM technology is viable and that it has the potential to address the operational robustness and energy efficiency concerns plaguing the current generation of AVs.
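The ±0.016 m⁻¹ figure in this entry is a curvature, i.e. the reciprocal of the local turn radius (roughly a 62 m radius). One common way to estimate curvature from three consecutive map waypoints is the Menger curvature of the circle through them; the sketch below is illustrative, not the authors' implementation, and only the threshold constant comes from the abstract:

```python
import math

def menger_curvature(p1, p2, p3):
    """Curvature (1/m) of the circle through three (x, y) waypoints."""
    a = math.dist(p1, p2)
    b = math.dist(p2, p3)
    c = math.dist(p1, p3)
    # Twice the triangle area via the 2D cross product.
    cross = (p2[0] - p1[0]) * (p3[1] - p1[1]) - (p2[1] - p1[1]) * (p3[0] - p1[0])
    if a * b * c == 0.0:
        return 0.0  # degenerate (coincident) points
    return 2.0 * abs(cross) / (a * b * c)

# Threshold from the abstract: curvature beyond which Mobileye lost the lanes.
LANE_DETECTION_LIMIT = 0.016  # 1/m

# Three points on a circle of radius 100 m give curvature 0.01 1/m,
# which is below the reported detection limit.
kappa = menger_curvature((0.0, 0.0), (100.0, 100.0), (200.0, 0.0))
```

Collinear waypoints yield zero curvature (a straight road), and the value rises as the turn tightens, so a planner could flag road segments where the curvature estimate exceeds the camera system's limit.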
Affiliation(s)
- Parth Kadav
- Department of Mechanical and Aerospace Engineering, Western Michigan University, 4601 Campus Dr, Kalamazoo, MI 49008, USA
- Sachin Sharma
- Department of Mechanical and Aerospace Engineering, Western Michigan University, 4601 Campus Dr, Kalamazoo, MI 49008, USA
- Johan Fanas Rojas
- Revision Autonomy Inc., 4717 Campus Drive, Kalamazoo, MI 49008, USA
- Pritesh Patil
- Department of Mechanical and Aerospace Engineering, Western Michigan University, 4601 Campus Dr, Kalamazoo, MI 49008, USA
- Ali Riza Ekti
- Oak Ridge National Laboratory, Oak Ridge, TN 37831, USA
- Richard T. Meyer
- Department of Mechanical and Aerospace Engineering, Western Michigan University, 4601 Campus Dr, Kalamazoo, MI 49008, USA
- Zachary D. Asher
- Department of Mechanical and Aerospace Engineering, Western Michigan University, 4601 Campus Dr, Kalamazoo, MI 49008, USA
- Revision Autonomy Inc., 4717 Campus Drive, Kalamazoo, MI 49008, USA
3
Contreras M, Jain A, Bhatt NP, Banerjee A, Hashemi E. A survey on 3D object detection in real time for autonomous driving. Front Robot AI 2024; 11:1212070. [PMID: 38510560] [PMCID: PMC10950960] [DOI: 10.3389/frobt.2024.1212070]
Abstract
This survey reviews advances in 3D object detection approaches for autonomous driving. A brief introduction to 2D object detection is first given, and the drawbacks of existing methodologies in highly dynamic environments are identified. Subsequently, this paper reviews state-of-the-art 3D object detection techniques that utilize monocular and stereo vision for reliable detection in urban settings. Based on depth inference, learning schemes, and internal representation, this work presents a taxonomy of three classes: model-based and geometrically constrained approaches, end-to-end learning methodologies, and hybrid methods. A dedicated segment highlights the current trend toward multi-view detectors as end-to-end methods, owing to their improved robustness. Detectors from the latter two classes were specifically selected to exploit the autonomous driving context in terms of geometry, scene content, and instance distribution. To assess the effectiveness of each method, 3D object detection datasets for autonomous vehicles are described with their unique features, e.g., varying weather conditions, multi-modality, and multi-camera perspectives, and their respective metrics associated with different difficulty categories. In addition, we include multi-modal visual datasets, i.e., V2X, that may tackle the problem of single-view occlusion. Finally, current research trends in object detection are summarized, followed by a discussion of possible directions for future research in this domain.
Affiliation(s)
- Aayush Jain
- Indian Institute of Technology Kharagpur, Kharagpur, West Bengal, India
4
Hannoun S. Editorial for "A Survey of Publicly Available MRI Datasets for Potential Use in Artificial Intelligence Research". J Magn Reson Imaging 2024; 59:481-482. [PMID: 37889102] [DOI: 10.1002/jmri.29100]
Affiliation(s)
- Salem Hannoun
- Medical Imaging Sciences Program, Division of Health Professions, Faculty of Health Sciences, American University of Beirut, Beirut, Lebanon
5
Aldibaja M, Yanase R, Suganuma N. Waypoint Transfer Module between Autonomous Driving Maps Based on LiDAR Directional Sub-Images. Sensors (Basel) 2024; 24:875. [PMID: 38339592] [PMCID: PMC10857431] [DOI: 10.3390/s24030875]
Abstract
Lane graphs are very important for describing road semantics and enabling safe autonomous maneuvers using the localization and path-planning modules. These graphs are considered long-lived details because road structures rarely change. On the other hand, the global position of the corresponding topological maps might change due to the necessity of updating or extending the maps using different positioning systems such as GNSS/INS-RTK (GIR), Dead-Reckoning (DR), or SLAM technologies. Therefore, the lane graphs should be transferred between maps accurately to describe the same semantics of lanes and landmarks. This paper proposes a unique transfer framework in the image domain based on LiDAR intensity road surfaces, considering the challenging requirements of its implementation in critical road structures. The road surfaces in a target map are decomposed into directional sub-images with X, Y, and Yaw IDs in the global coordinate system. The XY IDs are used to detect the common areas with a reference map, whereas the Yaw IDs are utilized to reconstruct the vehicle trajectory in the reference map and determine the associated lane graphs. The directional sub-images are then matched to the reference sub-images, and the graphs are safely transferred accordingly. The experimental results have verified the robustness and reliability of the proposed framework in transferring lane graphs safely and accurately between maps, regardless of the complexity of road structures, driving scenarios, map generation methods, and map global accuracies.
Affiliation(s)
- Mohammad Aldibaja
- The Advanced Mobility Research Institute, Kanazawa University, Kanazawa 920-1192, Japan
6
Shi J, Li K, Piao C, Gao J, Chen L. Model-Based Predictive Control and Reinforcement Learning for Planning Vehicle-Parking Trajectories for Vertical Parking Spaces. Sensors (Basel) 2023; 23:7124. [PMID: 37631658] [PMCID: PMC10458430] [DOI: 10.3390/s23167124]
Abstract
This paper proposes a vehicle-parking trajectory planning method that addresses the issues of a long trajectory planning time and difficult training convergence during automatic parking. The process involves two stages: finding a parking space and parking planning. The first stage uses model predictive control (MPC) for trajectory tracking from the initial position of the vehicle to the starting point of the parking operation. The second stage employs the proximal policy optimization (PPO) algorithm to transform the parking behavior into a reinforcement learning process. A four-dimensional reward function is set to evaluate the strategy based on a formal reward, guiding the adjustment of neural network parameters and reducing the exploration of invalid actions. Finally, a simulation environment is built for the parking scene, and a network framework is designed. The proposed method is compared with the deep deterministic policy gradient (DDPG) and twin-delayed deep deterministic policy gradient (TD3) algorithms in the same scene. Results confirm that the MPC controller accurately performs trajectory-tracking control with minimal steering wheel angle changes and smooth, continuous movement. The PPO-based reinforcement learning method achieves shorter learning times, totaling only 30% and 37.5% of those of DDPG and TD3, respectively, and with the introduction of the four-dimensional evaluation metrics, the number of iterations for the PPO algorithm to reach convergence is 75% and 68% lower than for the DDPG and TD3 algorithms, respectively. This study demonstrates the effectiveness of the proposed method in addressing slow convergence and long training times in parking trajectory planning, improving parking timeliness.
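The abstract does not specify the four reward terms, so everything in the sketch below (term choice, names, and weights) is hypothetical; it only illustrates the general shape of a multi-term scalar reward that a PPO agent could maximize during parking:

```python
def parking_reward(dist_to_goal, heading_error, steering_change, collided,
                   weights=(1.0, 0.5, 0.1, 100.0)):
    """Hypothetical four-term reward: distance, heading, smoothness, collision.

    All terms are penalties, so a perfectly parked, collision-free,
    smoothly driven state scores 0 and every error drives the reward negative.
    """
    w_dist, w_head, w_smooth, w_crash = weights
    reward = -w_dist * dist_to_goal            # approach the parking spot
    reward -= w_head * abs(heading_error)      # align with the space
    reward -= w_smooth * abs(steering_change)  # discourage jerky steering
    if collided:
        reward -= w_crash                      # large terminal penalty
    return reward
```

Summing weighted penalties into one scalar is the standard way to hand a multi-objective task to a policy-gradient learner; the abstract's point is that shaping the reward along several axes at once prunes invalid actions and speeds up convergence.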
Affiliation(s)
- Junren Shi
- School of Automation, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Kexin Li
- School of Automation, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Changhao Piao
- School of Automation, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Jun Gao
- School of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Lizhi Chen
- School of Automation, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
7
Neves FS, Claro RM, Pinto AM. End-to-End Detection of a Landing Platform for Offshore UAVs Based on a Multimodal Early Fusion Approach. Sensors (Basel) 2023; 23:2434. [PMID: 36904644] [PMCID: PMC10006912] [DOI: 10.3390/s23052434]
Abstract
A perception module is a vital component of a modern robotic system. Vision, radar, thermal, and LiDAR are the most common choices of sensors for environmental awareness. Relying on a single source of information is prone to failure under specific environmental conditions (e.g., visual cameras are affected by glare or darkness). Thus, relying on different sensors is an essential step toward robustness against various environmental conditions, and a perception system with sensor fusion capabilities produces the redundant and reliable awareness critical for real-world systems. This paper proposes a novel early fusion module that is reliable against individual cases of sensor failure when detecting an offshore maritime platform for UAV landing. The model explores the early fusion of a previously unexplored combination of visual, infrared, and LiDAR modalities. The contribution is a simple methodology that facilitates the training and inference of a lightweight, state-of-the-art object detector. The early-fusion-based detector achieves solid detection recalls of up to 99% for all cases of sensor failure and extreme weather conditions such as glare, darkness, and fog, with real-time inference times below 6 ms.
Affiliation(s)
- Francisco Soares Neves
- Faculty of Engineering, University of Porto (FEUP), 4200-465 Porto, Portugal
- Centre for Robotics and Autonomous Systems—INESC TEC, 4200-465 Porto, Portugal
- Rafael Marques Claro
- Faculty of Engineering, University of Porto (FEUP), 4200-465 Porto, Portugal
- Centre for Robotics and Autonomous Systems—INESC TEC, 4200-465 Porto, Portugal
- Andry Maykol Pinto
- Faculty of Engineering, University of Porto (FEUP), 4200-465 Porto, Portugal
- Centre for Robotics and Autonomous Systems—INESC TEC, 4200-465 Porto, Portugal
8
Multiple vehicle cooperation and collision avoidance in automated vehicles: survey and an AI-enabled conceptual framework. Sci Rep 2023; 13:603. [PMID: 36635336] [PMCID: PMC9837199] [DOI: 10.1038/s41598-022-27026-9]
Abstract
Prospective customers are becoming more concerned about safety and comfort as the automobile industry swings toward automated vehicles (AVs). A comprehensive evaluation of recent AV collision data indicates that modern automated driving systems are prone to rear-end collisions, which usually lead to multiple-vehicle collisions. Moreover, most investigations into severe traffic conditions are confined to single-vehicle collisions. This work reviewed diverse techniques from the existing literature to provide planning procedures for multiple vehicle cooperation and collision avoidance (MVCCA) strategies in AVs, while also considering their performance and social impact. Firstly, we investigate and tabulate the existing MVCCA techniques associated with single-vehicle collision avoidance perspectives. Then, current achievements are extensively evaluated, challenges and flaws are identified, and remedies are organized into a taxonomy. This paper also aims to give readers an AI-enabled conceptual framework and a decision-making model with a concrete structure of the training network settings to bridge the gaps between current investigations. These findings are intended to shed insight into the benefits of more efficient AV set-ups for academics and policymakers. Lastly, the open research issues discussed in this survey will pave the way for the actual implementation of driverless automated traffic systems.
9
Sakaguchi Y, Bakibillah ASM, Kamal MAS, Yamada K. A Cyber-Physical Framework for Optimal Coordination of Connected and Automated Vehicles on Multi-Lane Freeways. Sensors (Basel) 2023; 23:611. [PMID: 36679409] [PMCID: PMC9862362] [DOI: 10.3390/s23020611]
Abstract
Uncoordinated driving behavior is one of the main reasons for bottlenecks on freeways. This paper presents a novel cyber-physical framework for optimal coordination of connected and automated vehicles (CAVs) on multi-lane freeways. We consider that all vehicles are connected to a cloud-based computing framework, where a traffic coordination system optimizes the target trajectories of individual vehicles for smooth and safe lane changing or merging. In the proposed framework, the vehicles are coordinated into groups or platoons, and their trajectories are successively optimized in a receding horizon control (RHC) approach. The traffic coordination system's optimization aims to provide sufficient gaps when a lane change is necessary while minimizing the speed deviation and acceleration of all vehicles. The coordination information is then provided to individual vehicles equipped with local controllers, and each vehicle decides its control acceleration to follow the target trajectories while ensuring a safe distance. Our proposed method guarantees fast optimization and can be used in real time. The proposed coordination system was evaluated using microscopic traffic simulations and benchmarked against traditional (human-based) driving. The results show significant improvement in fuel economy, average velocity, and travel time for various traffic volumes.
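The receding horizon control pattern this entry relies on can be summarized generically: optimize a short trajectory over a fixed horizon, apply only its first step, then re-plan from the new state. The toy "planner" below simply closes half the gap to a target speed at each step and is purely illustrative, not the paper's trajectory optimizer:

```python
def plan(speed, target, horizon):
    """Toy planner: a horizon of speeds, each closing half the remaining gap."""
    trajectory = []
    for _ in range(horizon):
        speed = speed + 0.5 * (target - speed)
        trajectory.append(speed)
    return trajectory

def receding_horizon(speed, target, steps, horizon=5):
    """Re-plan every step but execute only the first planned value (RHC)."""
    history = []
    for _ in range(steps):
        trajectory = plan(speed, target, horizon)
        speed = trajectory[0]  # apply the first element, discard the rest
        history.append(speed)
    return history

# Executed speeds converge toward the 25 m/s target: 12.5, 18.75, 21.875, ...
speeds = receding_horizon(0.0, 25.0, steps=8)
```

Discarding all but the first planned step looks wasteful, but it is what lets the controller absorb disturbances (a neighbour braking, a merging vehicle) at every step while still reasoning several steps ahead.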
Affiliation(s)
- Yuta Sakaguchi
- Graduate School of Science and Technology, Gunma University, Kiryu 376-8515, Japan
- A. S. M. Bakibillah
- Department of Systems and Control Engineering, School of Engineering, Tokyo Institute of Technology, Tokyo 152-8552, Japan
- Md Abdus Samad Kamal
- Graduate School of Science and Technology, Gunma University, Kiryu 376-8515, Japan
- Kou Yamada
- Graduate School of Science and Technology, Gunma University, Kiryu 376-8515, Japan
10
Malik S, Khan MA, El-Sayed H, Khan J, Ullah O. How Do Autonomous Vehicles Decide? Sensors (Basel) 2022; 23:317. [PMID: 36616915] [PMCID: PMC9823427] [DOI: 10.3390/s23010317]
Abstract
The advancement in sensor technologies, mobile network technologies, and artificial intelligence has pushed the boundaries of different verticals, e.g., eHealth and autonomous driving. Statistics show that more than one million people are killed in traffic accidents yearly, with the vast majority caused by human negligence. Higher-level autonomous driving has great potential to enhance road safety and traffic efficiency. One of the most crucial links in building an autonomous system is the task of decision-making. The ability of a vehicle to make robust decisions on its own by anticipating and evaluating future outcomes is what makes it intelligent. Planning and decision-making technology in autonomous driving becomes even more challenging due to the diversity of the dynamic environments the vehicle operates in, the uncertainty in the sensor information, and the complex interaction with other road participants. A significant amount of research has been carried out toward deploying autonomous vehicles to solve a wide range of issues; however, high-level decision-making in complex, uncertain, urban environments is a comparatively less explored area. This paper provides an analysis of decision-making solution approaches for autonomous driving. Various categories of approaches are analyzed and compared with classical decision-making approaches. Subsequently, a range of crucial research gaps and open challenges is highlighted that need to be addressed before higher-level autonomous vehicles hit the roads. We believe this survey will contribute to future research on decision-making methods for autonomous vehicles by equipping researchers with an overview of decision-making technology, its potential solution approaches, and its challenges.
Affiliation(s)
- Sumbal Malik
- College of Information Technology, United Arab Emirates University, Abu Dhabi 15551, United Arab Emirates
- Emirates Center for Mobility Research (ECMR), United Arab Emirates University, Abu Dhabi 15551, United Arab Emirates
- Manzoor Ahmed Khan
- College of Information Technology, United Arab Emirates University, Abu Dhabi 15551, United Arab Emirates
- Emirates Center for Mobility Research (ECMR), United Arab Emirates University, Abu Dhabi 15551, United Arab Emirates
- Hesham El-Sayed
- College of Information Technology, United Arab Emirates University, Abu Dhabi 15551, United Arab Emirates
- Emirates Center for Mobility Research (ECMR), United Arab Emirates University, Abu Dhabi 15551, United Arab Emirates
- Jalal Khan
- College of Information Technology, United Arab Emirates University, Abu Dhabi 15551, United Arab Emirates
- Obaid Ullah
- College of Information Technology, United Arab Emirates University, Abu Dhabi 15551, United Arab Emirates
11
Artificial Intelligence (AI) in Breast Imaging: A Scientometric Umbrella Review. Diagnostics (Basel) 2022; 12:3111. [PMID: 36553119] [PMCID: PMC9777253] [DOI: 10.3390/diagnostics12123111]
Abstract
Artificial intelligence (AI), a rapidly advancing technology disrupting a wide spectrum of applications, has continued to gain momentum over the past decades. Within breast imaging, AI, especially machine learning and deep learning honed with extensive cross-data/case referencing, has found great utility encompassing four facets: screening and detection, diagnosis, disease monitoring, and data management as a whole. Over the years, breast cancer has topped the cumulative cancer risk ranking for women across the six continents, existing in variegated forms and offering a complicated context for medical decisions. Realizing the ever-increasing demand for quality healthcare, contemporary AI has been envisioned to make great strides in clinical data management and perception, with the capability to detect findings of indeterminate significance, predict prognosis, and correlate available data into a meaningful clinical endpoint. Here, the authors captured the review works of the past decades focusing on AI in breast imaging and systematized them into one usable document, termed an umbrella review, with the aim of providing a panoramic view of how AI is poised to enhance breast imaging procedures. Evidence-based scientometric analysis was performed in accordance with the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guideline, resulting in 71 included review works. This study synthesizes, collates, and correlates the included review works, thereby identifying their patterns, trends, quality, and types as captured by the structured search strategy. The present study is intended to serve as a "one-stop center" synthesis and provide a holistic bird's-eye view on the topic of interest to readers ranging from newcomers to existing researchers and relevant stakeholders.
12
Da Lio M, Cherubini A, Papini GPR, Plebe A. Complex self-driving behaviours emerging from affordance competition in layered control architectures. Cogn Syst Res 2022. [DOI: 10.1016/j.cogsys.2022.12.007]
13
Muacevic A, Adler JR, Jones RH, Collins HR, Kabakus IM, McBee MP. COVID-19 Diagnosis on Chest Radiograph Using Artificial Intelligence. Cureus 2022; 14:e31897. [PMID: 36579217] [PMCID: PMC9792347] [DOI: 10.7759/cureus.31897]
Abstract
BACKGROUND The coronavirus disease 2019 (COVID-19) pandemic has disrupted the world since 2019, causing significant morbidity and mortality in developed and developing countries alike. Although substantial resources have been diverted to developing diagnostic, preventative, and treatment measures, the availability and efficacy of these tools vary across countries. We seek to assess the ability of commercial artificial intelligence (AI) technology to diagnose COVID-19 by analyzing chest radiographs. MATERIALS AND METHODS Chest radiographs taken from symptomatic patients within two days of polymerase chain reaction (PCR) tests were assessed for COVID-19 infection by board-certified radiologists and commercially available AI software. Sixty patients with negative and 60 with positive COVID reverse transcription-polymerase chain reaction (RT-PCR) tests were chosen. Results were compared against the PCR test results for accuracy and statistically analyzed by receiver operating characteristic (ROC) curves along with area under the curve (AUC) values. RESULTS A total of 120 chest radiographs (60 with positive and 60 with negative RT-PCR tests) were analyzed. The AI software performed significantly better than chance (p = 0.001) and did not differ significantly from the radiologist ROC curve (p = 0.78). CONCLUSION Commercially available AI software was noninferior to trained radiologists in accurately identifying COVID-19 cases from radiographs. While RT-PCR testing remains the standard, current advances in AI help correctly analyze chest radiographs to diagnose COVID-19 infection.
14
Schulte-Tigges J, Förster M, Nikolovski G, Reke M, Ferrein A, Kaszner D, Matheis D, Walter T. Benchmarking of Various LiDAR Sensors for Use in Self-Driving Vehicles in Real-World Environments. Sensors (Basel) 2022; 22:7146. [PMID: 36236247] [PMCID: PMC9572247] [DOI: 10.3390/s22197146]
Abstract
In this paper, we report our benchmark results for the LiDAR sensors Livox Horizon, Robosense M1, Blickfeld Cube, Blickfeld Cube Range, Velodyne Velarray H800, and Innoviz Pro. The idea was to test the sensors in different typical scenarios defined with real-world use cases in mind, in order to find a sensor that meets the requirements of self-driving vehicles. For this, we defined static and dynamic benchmark scenarios. In the static scenarios, neither the LiDAR nor the detection target moves during the measurement. In the dynamic scenarios, the LiDAR sensor was mounted on a vehicle driving toward the detection target. We tested all of the aforementioned LiDAR sensors in both scenario types, present the results regarding the detection accuracy of the targets, and discuss their usefulness for deployment in self-driving cars.
Affiliation(s)
- Joschua Schulte-Tigges
- Mobile Autonomous Systems and Cognitive Robotics Institute, FH Aachen—Aachen University of Applied Sciences, 52066 Aachen, Germany
- Marco Förster
- Mobile Autonomous Systems and Cognitive Robotics Institute, FH Aachen—Aachen University of Applied Sciences, 52066 Aachen, Germany
- Gjorgji Nikolovski
- Mobile Autonomous Systems and Cognitive Robotics Institute, FH Aachen—Aachen University of Applied Sciences, 52066 Aachen, Germany
- Michael Reke
- Mobile Autonomous Systems and Cognitive Robotics Institute, FH Aachen—Aachen University of Applied Sciences, 52066 Aachen, Germany
- Alexander Ferrein
- Mobile Autonomous Systems and Cognitive Robotics Institute, FH Aachen—Aachen University of Applied Sciences, 52066 Aachen, Germany
- Daniel Kaszner
- Hyundai Motor Europe Technical Center GmbH, 65428 Rüsselsheim am Main, Germany
- Dominik Matheis
- Hyundai Motor Europe Technical Center GmbH, 65428 Rüsselsheim am Main, Germany
- Thomas Walter
- Hyundai Motor Europe Technical Center GmbH, 65428 Rüsselsheim am Main, Germany
15
New Paradigm of Sustainable Urban Mobility: Electric and Autonomous Vehicles—A Review and Bibliometric Analysis. SUSTAINABILITY 2022. [DOI: 10.3390/su14159525] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/18/2023]
Abstract
The growing relevance of sustainability, as well as the necessity to replace traditional forms of transportation with sustainable ones, has made sustainable urban mobility an imperative. In order to respond to the ever-increasing need to develop sustainable modes of transport, the importance of electric, autonomous, and electric autonomous vehicles is increasingly emphasized. Moreover, as growth and development trends in electric autonomous vehicle technology accelerate, one question that has emerged is whether autonomous electric vehicles represent one of the mechanisms that will be used to increase the sustainability of urban mobility. With this in mind, a systematic analysis of existing research in the WOS and Scopus databases, using the keywords “urban mobility”, “electric vehicles”, and “autonomous vehicles”, was carried out to identify research trends in the use of autonomous electric vehicles in urban areas. The research showed that authors focus on the advantages and disadvantages of autonomous electric vehicles and their usage in the urban mobility system, but an insufficient number of authors consider and define the need to plan the transition towards incorporating autonomous electric vehicles into the urban system. The results of this research also indicate an insufficient number of papers that research and describe the application of autonomous electric vehicles in distribution logistics. This paper provides an overview of existing research related to autonomous electric vehicles and the challenges of transition in the context of infrastructure and the development of a culture of sustainability among urban residents.
16
Khan MA, El Sayed H, Malik S, Zia MT, Alkaabi N, Khan J. A journey towards fully autonomous driving - fueled by a smart communication system. VEHICULAR COMMUNICATIONS 2022; 36:100476. [DOI: 10.1016/j.vehcom.2022.100476] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/01/2023]
17
A Robust Gaussian Process-Based LiDAR Ground Segmentation Algorithm for Autonomous Driving. MACHINES 2022. [DOI: 10.3390/machines10070507] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Robust and precise vehicle detection is a prerequisite for decision-making and motion planning in autonomous driving. Vehicle detection algorithms follow three steps: ground segmentation, obstacle clustering, and bounding box fitting. The ground segmentation result directly affects the input of the subsequent obstacle clustering algorithms. Aiming at the problems of over-segmentation and under-segmentation in traditional ground segmentation algorithms, a ground segmentation algorithm based on a Gaussian process is proposed in this paper. To ensure an accurate search for real ground candidate points as training data for the Gaussian process, the proposed algorithm introduces height and slope criteria, which is more reasonable than using a fixed height threshold for the search. A sparse covariance function is then introduced as the kernel function of the Gaussian process; this function is more suitable for the ground segmentation task than the radial basis function (RBF). The proposed algorithm is tested on our autonomous driving experimental platform and the public autonomous driving dataset KITTI, and compared with the widely used RANSAC algorithm and the ray ground filter algorithm. Experimental results show that the proposed algorithm avoids obvious over-segmentation and under-segmentation. In addition, compared with the RBF, the introduction of the sparse covariance function reduces the computation time by 37.26%.
18
Combining Event-Based Maneuver Selection and MPC Based Trajectory Generation in Autonomous Driving. ELECTRONICS 2022. [DOI: 10.3390/electronics11101518] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Maneuver planning, which plays a key role in selecting desired lanes and speeds, is an essential element of autonomous driving. Generally, for a vehicle driving on a multilane road, there are several potential maneuvers in both longitudinal and lateral directions. Selecting the best maneuver from the various options represents a significant challenge. In this paper, we propose a maneuver selection algorithm and combine it with a trajectory generation algorithm, which is based on model predictive control (MPC). The maneuver selection method is a higher-level planner, which selects only one maneuver from all possible maneuvers based on the current situation and delivers it to a lower-level MPC-based trajectory tracking controller. The effectiveness of the proposed algorithm is validated by simulating an overtaking scenario on a multilane highway.
19
Could Technology and Intelligent Transport Systems Help Improve Mobility in an Emerging Country? Challenges, Opportunities, Gaps and Other Evidence from the Caribbean. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12094759] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/31/2022]
Abstract
Apart from constituting a topic of high relevance for transport planners and policymakers, support technologies for traffic have the potential to bring significant benefits to mobility. In addition, there are groups of “high potential” users, such as young adults, who constitute an essential part of the current market. Nevertheless, especially in low- and middle-income countries (LMICs), their knowledge and acceptance remain understudied. This study aimed to assess the appraisal of intelligent transport systems (ITS) and other technological developments applicable to mobility among Dominican young adults. Methods: In this study, we used the data gathered from 1414 Dominicans aged between 18 and 40, responding to the National Survey on Mobility in 2018 and 2019. Results: Overall, and although there is a relatively high acceptance, attributed value, and attitudinal predisposition towards both intelligent transportation systems and various support technologies applicable to mobility, the actual usage rates remain considerably low, probably exacerbated by the low- and middle-income status of the country. Conclusions: The findings of this study suggest the need to strengthen information and communication flows over emerging mobility-related technologies and develop further awareness of the potential benefits of technological developments for everyday transport dynamics.
20
Local Path Planning for Autonomous Vehicles Based on the Natural Behavior of the Biological Action-Perception Motion. ENERGIES 2022. [DOI: 10.3390/en15051769] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
Local path planning is a key task for the motion planners of autonomous vehicles since it commands the vehicle across its environment while avoiding any obstacles. To perform this task, the local path planner generates a trajectory and a velocity profile, which are then sent to the vehicle’s actuators. This paper proposes a new local path planner for autonomous vehicles based on the Attractor Dynamic Approach (ADA), which was inspired by the movement behavior of living beings, along with an algorithm that takes into account four acceleration policies, the ST dynamic vehicle model, and several constraints regarding comfort and safety. The original functions that define the ADA were modified in order to adapt it to the non-holonomic vehicle’s constraints and to improve its response when an impact scenario is detected. The present approach is validated in a well-known simulator for autonomous vehicles under three representative cases of study in which the vehicle was capable of generating local paths that ensure its safety. The results show that the approach proposed in this paper is a promising tool for the local path planning of autonomous vehicles since it is able to generate trajectories that are both safe and efficient.
21
Imaginaries of Road Transport Automation in Finnish Governance Culture—A Critical Discourse Analysis. SUSTAINABILITY 2022. [DOI: 10.3390/su14031437] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
As transport automation technology continues to emerge, there is a need to engage in the questions of its governing—to find a balance between unreflective enablement and rigid control. An increasing body of literature has begun to address the topic, but only a few studies have examined discourse and culture as central components of the related governance processes. This article aims to analyse the discourse surrounding self-driving vehicles in the Finnish context by drawing from the concept of sociotechnical imaginaries. The critical discourse analysis framework is applied to study a comprehensive set of documents published by Finnish national-level governmental bodies from 2013 to 2020. The analysis identifies four imagined ways of implementing self-driving vehicles into the Finnish transport system and a large set of mostly positive anticipated implications. Moreover, the analysis illustrates the transport automation imaginary’s cultural and spatial detachment, most obvious in the lack of detail and the disconnection between the imagined implementations and the anticipated implications. The findings are convergent with findings from other governance contexts, where discourse has been largely characterised by an unjustified optimism and strong determinism related to the wedlock with the automobility regime. If left unaddressed, such lack of reflectivity will not just lead to a plethora of undesired implications for Finnish society at large but will also signify a failure in developing an adaptive governance culture needed to face challenges of the 21st century.
22
Communication of Autonomous Vehicles in Road Accidents for Emergency Help in Healthcare Industries. JOURNAL OF HEALTHCARE ENGINEERING 2022; 2022:2506830. [PMID: 35126913 PMCID: PMC8808121 DOI: 10.1155/2022/2506830] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/02/2021] [Revised: 12/26/2021] [Accepted: 12/27/2021] [Indexed: 11/17/2022]
Abstract
Autonomous, driverless cars were once found only in science fiction films; however, as of 2019 they are becoming a reality, and people around the world are eager to see driverless automobiles on the road. Driverless vehicles do not require human intervention. A fully driverless car is still at an advanced testing stage; however, partially automated technology has been available for the last few years. A partly automated car offers features such as lane keeping, automatic braking, and adaptive cruise control. An autonomous vehicle system must sense its environment and detect objects and, with the assistance of GPS, follow the correct navigation course while obeying traffic and transportation rules. In addition, the safety of passengers and pedestrians is paramount, so the ability to avoid collisions with obstacles during operation is essential. The proposed accident-avoidance and communication system supports this: its sensors identify objects in front of the car and stop it, directing it onto a course that avoids accidents, while vehicles communicate with one another. This system helps the autonomous car reach its destination by guiding the vehicle with artificial intelligence; as vehicles become smarter, lifestyles become smarter as well.
23
Fernandes D, Afonso T, Girão P, Gonzalez D, Silva A, Névoa R, Novais P, Monteiro J, Melo-Pinto P. Real-Time 3D Object Detection and SLAM Fusion in a Low-Cost LiDAR Test Vehicle Setup. SENSORS 2021; 21:s21248381. [PMID: 34960468 PMCID: PMC8705987 DOI: 10.3390/s21248381] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/31/2021] [Revised: 12/03/2021] [Accepted: 12/07/2021] [Indexed: 12/03/2022]
Abstract
Recently released research on deep learning applications for autonomous driving perception focuses heavily on the use of LiDAR point cloud data as input to the neural networks, highlighting the importance of LiDAR technology in the field of Autonomous Driving (AD). Accordingly, a large percentage of the vehicle platforms used to create the datasets released for the development of these neural networks, as well as some commercial AD solutions available on the market, invest heavily in sensor arrays comprising both large numbers of sensors and several sensor modalities. However, these costs create a barrier to entry for low-cost solutions performing critical perception tasks such as Object Detection and SLAM. This paper explores current vehicle platforms and proposes a low-cost, LiDAR-based test vehicle platform capable of running critical perception tasks (Object Detection and SLAM) in real time. Additionally, we propose a deep learning-based inference model for Object Detection deployed on a resource-constrained device, as well as a graph-based SLAM implementation, discussing the considerations imposed by the real-time processing requirement and presenting results that demonstrate the usability of the developed work in the context of the proposed low-cost platform.
Affiliation(s)
- Duarte Fernandes
- Algoritmi Centre, University of Minho, 4800-058 Guimarães, Portugal; (D.F.); (A.S.); (R.N.); (P.N.); (J.M.); (P.M.-P.)
- Tiago Afonso
- Bosch Company, 4700-113 Braga, Portugal; (T.A.); (P.G.)
- Pedro Girão
- Bosch Company, 4700-113 Braga, Portugal; (T.A.); (P.G.)
- Dibet Gonzalez
- Computer Graphics Center, University of Minho, 4800-058 Guimarães, Portugal
- Correspondence:
- António Silva
- Algoritmi Centre, University of Minho, 4800-058 Guimarães, Portugal; (D.F.); (A.S.); (R.N.); (P.N.); (J.M.); (P.M.-P.)
- Rafael Névoa
- Algoritmi Centre, University of Minho, 4800-058 Guimarães, Portugal; (D.F.); (A.S.); (R.N.); (P.N.); (J.M.); (P.M.-P.)
- Paulo Novais
- Algoritmi Centre, University of Minho, 4800-058 Guimarães, Portugal; (D.F.); (A.S.); (R.N.); (P.N.); (J.M.); (P.M.-P.)
- João Monteiro
- Algoritmi Centre, University of Minho, 4800-058 Guimarães, Portugal; (D.F.); (A.S.); (R.N.); (P.N.); (J.M.); (P.M.-P.)
- Pedro Melo-Pinto
- Algoritmi Centre, University of Minho, 4800-058 Guimarães, Portugal; (D.F.); (A.S.); (R.N.); (P.N.); (J.M.); (P.M.-P.)
- Department of Engineering, University of Trás-os-Montes and Alto Douro, 5000-801 Vila Real, Portugal
24
Grosso M, Cristinel Raileanu I, Krause J, Alonso Raposo M, Duboz A, Garus A, Mourtzouchou A, Ciuffo B. How will vehicle automation and electrification affect the automotive maintenance, repair sector? TRANSPORTATION RESEARCH INTERDISCIPLINARY PERSPECTIVES 2021; 12:None. [PMID: 35072055 PMCID: PMC8754085 DOI: 10.1016/j.trip.2021.100495] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/05/2021] [Revised: 10/25/2021] [Accepted: 10/30/2021] [Indexed: 06/14/2023]
Abstract
Automation and electrification in road transport are trends that will influence several sectors of the European economy. The automotive maintenance and repair (M&R) sector will experience the effects of these transitions in the long term. This paper assesses research in road transport to derive the factors that may influence M&R demand based on Battery Electric Vehicle (BEV) and Autonomous Vehicle (AV) uptake. Starting from current scientific research and grounded in interviews with experts, the paper reviews the major drivers influencing M&R demand and provides indications of possible future effects. While for BEVs previous work has been conducted to estimate M&R cost variations, research addressing the impacts of AV deployment on the M&R sector is at an incipient stage; hence, the views of experts were paramount to shed light on this topic. We identified a scientific consensus that BEVs have fewer M&R requirements compared with Conventional Vehicles (CVs). For AVs, our analysis and the expert views identify some important factors influencing M&R requirements: hardware components, the software that enables autonomy, the rise in vehicle kilometres travelled leading to higher wear and tear of replaceable parts, and the need for adequate cleaning services, especially for fleets and shared vehicles. Further work should look at the impact of regulations and the non-insurable risks linked to M&R requirements.
Affiliation(s)
- Monica Grosso
- Joint Research Centre, European Commission, Ispra, Italy
- Jette Krause
- Joint Research Centre, European Commission, Ispra, Italy
- Amandine Duboz
- Joint Research Centre, European Commission, Ispra, Italy
- Ada Garus
- Joint Research Centre, European Commission, Ispra, Italy
- Biagio Ciuffo
- Joint Research Centre, European Commission, Ispra, Italy
25
Maldonado-Romo J, Aldape-Pérez M, Rodríguez-Molina A. Path Planning Generator with Metadata through a Domain Change by GAN between Physical and Virtual Environments. SENSORS 2021; 21:s21227667. [PMID: 34833741 PMCID: PMC8623835 DOI: 10.3390/s21227667] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/16/2021] [Revised: 11/08/2021] [Accepted: 11/16/2021] [Indexed: 11/16/2022]
Abstract
Increasingly, robotic systems require some level of scene perception to interact in real time, but they also require specialized equipment, such as sensors, to adequately reach high performance standards. It is therefore essential to explore alternatives that reduce the costs of these systems. A common problem addressed by intelligent robotic systems, for example, is path planning. This problem involves several subsystems, such as perception, localization, control, and planning, and demands a quick response time. Consequently, the design of solutions is constrained and requires specialized elements, increasing development cost and time. In addition, virtual reality can be employed to train and evaluate algorithms by generating virtual data. The virtual dataset can then be connected to the real world through Generative Adversarial Networks (GANs), reducing development time and requiring only limited samples from the physical world. To describe performance, metadata detail the properties of the agents in an environment. The metadata approach is tested with an augmented reality system and a micro aerial vehicle (MAV), where both systems are executed in a real environment and implemented on embedded devices. This development helps to guide alternatives that reduce resources and costs, but external factors, such as illumination variation, limit these implementations because the system depends on only a conventional camera.
Affiliation(s)
- Javier Maldonado-Romo
- Postgraduate Department, Instituto Politécnico Nacional, CIDETEC, Mexico City 07700, Mexico;
- Correspondence: ; Tel.: +52-555-729-6000
- Mario Aldape-Pérez
- Postgraduate Department, Instituto Politécnico Nacional, CIDETEC, Mexico City 07700, Mexico;
- Alejandro Rodríguez-Molina
- Tecnológico Nacional de México/IT de Tlalnepantla, Research and Postgraduate Division, Estado de México 54070, Mexico;
26
Galvao LG, Abbod M, Kalganova T, Palade V, Huda MN. Pedestrian and Vehicle Detection in Autonomous Vehicle Perception Systems-A Review. SENSORS (BASEL, SWITZERLAND) 2021; 21:7267. [PMID: 34770575 PMCID: PMC8587128 DOI: 10.3390/s21217267] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/30/2021] [Revised: 10/16/2021] [Accepted: 10/23/2021] [Indexed: 11/16/2022]
Abstract
Autonomous Vehicles (AVs) have the potential to solve many traffic problems, such as accidents, congestion and pollution. However, there are still challenges to overcome; for instance, AVs need to accurately perceive their environment to safely navigate busy urban scenarios. The aim of this paper is to review recent articles on computer vision techniques that can be used to build an AV perception system. AV perception systems need to accurately detect non-static objects and predict their behaviour, as well as to detect static objects and recognise the information they are providing. This paper, in particular, focuses on the computer vision techniques used to detect pedestrians and vehicles. There have been many papers and reviews on pedestrian and vehicle detection so far; however, most past papers reviewed only pedestrian or vehicle detection separately. This review aims to present an overview of AV systems in general, and then to review and investigate several computer vision detection techniques for pedestrians and vehicles. The review concludes that both traditional and Deep Learning (DL) techniques have been used for pedestrian and vehicle detection; however, DL techniques have shown the best results. Although good detection results have been achieved for pedestrians and vehicles, the current algorithms still struggle to detect small, occluded, and truncated objects. In addition, there is limited research on how to improve detection performance in difficult light and weather conditions. Most of the algorithms have been tested on well-recognised datasets such as Caltech and KITTI; however, these datasets have their own limitations. Therefore, this paper recommends that future works be implemented on newer, more challenging datasets, such as PIE and BDD100K.
Affiliation(s)
- Luiz G. Galvao
- Department of Electronic and Electrical Engineering, Brunel University London, Kingston Ln, Uxbridge UB8 3PH, UK; (M.A.); (T.K.)
- Maysam Abbod
- Department of Electronic and Electrical Engineering, Brunel University London, Kingston Ln, Uxbridge UB8 3PH, UK; (M.A.); (T.K.)
- Tatiana Kalganova
- Department of Electronic and Electrical Engineering, Brunel University London, Kingston Ln, Uxbridge UB8 3PH, UK; (M.A.); (T.K.)
- Vasile Palade
- Centre for Data Science, Coventry University, Priory Road, Coventry CV1 5FB, UK;
- Md Nazmul Huda
- Department of Electronic and Electrical Engineering, Brunel University London, Kingston Ln, Uxbridge UB8 3PH, UK; (M.A.); (T.K.)
27
Ginerica C, Zaha M, Gogianu F, Busoniu L, Trasnea B, Grigorescu S. ObserveNet Control: A Vision-Dynamics Learning Approach to Predictive Control in Autonomous Vehicles. IEEE Robot Autom Lett 2021. [DOI: 10.1109/lra.2021.3096157] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/12/2023]
28
sTetro-Deep Learning Powered Staircase Cleaning and Maintenance Reconfigurable Robot. SENSORS 2021; 21:s21186279. [PMID: 34577486 PMCID: PMC8473228 DOI: 10.3390/s21186279] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/27/2021] [Revised: 09/14/2021] [Accepted: 09/16/2021] [Indexed: 11/24/2022]
Abstract
Staircase cleaning is a crucial and time-consuming task for maintenance of multistory apartments and commercial buildings. There are many commercially available autonomous cleaning robots in the market for building maintenance, but few of them are designed for staircase cleaning. A key challenge for automating staircase cleaning robots involves the design of Environmental Perception Systems (EPS), which assist the robot in determining and navigating staircases. This system also recognizes obstacles and debris for safe navigation and efficient cleaning while climbing the staircase. This work proposes an operational framework leveraging the vision based EPS for the modular re-configurable maintenance robot, called sTetro. The proposed system uses an SSD MobileNet real-time object detection model to recognize staircases, obstacles and debris. Furthermore, the model filters out false detection of staircases by fusion of depth information through the use of a MobileNet and SVM. The system uses a contour detection algorithm to localize the first step of the staircase and depth clustering scheme for obstacle and debris localization. The framework has been deployed on the sTetro robot using the Jetson Nano hardware from NVIDIA and tested with multistory staircases. The experimental results show that the entire framework takes an average of 310 ms to run and achieves an accuracy of 94.32% for staircase recognition tasks and 93.81% accuracy for obstacle and debris detection tasks during real operation of the robot.
29
Development and Verification of Infrastructure-Assisted Automated Driving Functions. ELECTRONICS 2021. [DOI: 10.3390/electronics10172161] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
The automated vehicles on public roads today are capable of up to SAE Level-3 conditional autonomy according to the SAE J3016 Standard taxonomy, in which the driver remains primarily responsible for driving safety. All decision-making processes of the system depend on computations performed on the ego vehicle using only on-board sensor information, mimicking the perception of a human driver. It can be conjectured that, for higher levels of autonomy, on-board sensor information alone will not be sufficient; infrastructure assistance will therefore be necessary to ensure partial or full responsibility for driving safety. With higher penetration rates of automated vehicles, however, new problems will arise: it is expected that automated driving, and particularly automated vehicle platoons, will lead to more road damage in the form of rutting. Inspired by this, the EU project ESRIUM investigates infrastructure-assisted routing recommendations utilizing C-ITS communications. In this respect, specially designed ADAS functions are being developed with the capability to adapt their behavior according to specific routing recommendations, and automated vehicles equipped with such ADAS functions will be able to reduce road damage. The current paper presents the specific use cases, as well as the developed C-ITS-assisted ADAS functions, together with verification results obtained using a simulation framework.
30
Liu Q, Wang X, Wu X, Glaser Y, He L. Crash comparison of autonomous and conventional vehicles using pre-crash scenario typology. ACCIDENT; ANALYSIS AND PREVENTION 2021; 159:106281. [PMID: 34273622 DOI: 10.1016/j.aap.2021.106281] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/18/2021] [Revised: 06/22/2021] [Accepted: 06/23/2021] [Indexed: 06/13/2023]
Abstract
Data-based research approaches to generate crash scenarios have mainly relied on conventional vehicle crashes and naturalistic driving data, and have not considered differences between the autonomous vehicle (AV) and conventional vehicle crashes. As the AV's presence on roadways continues to grow, its crash scenarios take on new importance for traffic safety. This study therefore obtained crash patterns using the United States Department of Transportation pre-crash scenario typology, and used statistical analysis to determine the differences between AV and conventional vehicle pre-crash scenarios. Analysis of 122 AV crashes and 2084 conventional vehicle crashes revealed 15 types of scenario for AVs and 26 for conventional vehicles. The two groups showed differences in type of scenario, and differed in the proportion of crashes when the scenario was the same. The most frequent AV pre-crash scenarios were rear-end collisions (52.46%) and lane change collisions (18.85%), with the proportion of AVs rear-ended by conventional vehicles occurring with a frequency 1.6 times that of conventional vehicles. An in-depth crash investigation was conducted of the characteristics and causes of four AV pre-crash scenarios, summarized from the perspectives of perception and path planning. The perception-reaction time (PRT) difference between AVs and human drivers, AV's inaccurate identification of the intention of other vehicles to change lanes, and AV's insufficient path planning combining time and space dimensions were found to be important causes for the AV crashes. By increasing understanding of the complex characteristics of AV pre-crash scenarios, this analysis will encourage cooperation with vehicle manufacturers and AV technology companies for further study of crash causation toward the goals of improved test scenario construction and optimization of the AV's automated driving system (ADS).
Affiliation(s)
- Qian Liu
- College of Transportation Engineering, Tongji University, Shanghai 201804, China; The Key Laboratory of Road and Traffic Engineering, Ministry of Education, Shanghai 201804, China
- Xuesong Wang
- College of Transportation Engineering, Tongji University, Shanghai 201804, China; The Key Laboratory of Road and Traffic Engineering, Ministry of Education, Shanghai 201804, China; National Engineering Laboratory for Integrated Optimization of Road Traffic and Safety Analysis Technologies, 88 Qianrong Rd, Wuxi 214151, China.
- Xiangbin Wu
- Intelligent Driving Lab, Intel Labs China, No. 2 South Kexueyuan Road, Beijing 100190, China
- Yi Glaser
- Global Safety Center, GM, Warren, MI 48092-2031, USA
- Linjia He
- College of Transportation Engineering, Tongji University, Shanghai 201804, China; The Key Laboratory of Road and Traffic Engineering, Ministry of Education, Shanghai 201804, China
31
Fast Ground Segmentation for 3D LiDAR Point Cloud Based on Jump-Convolution-Process. REMOTE SENSING 2021. [DOI: 10.3390/rs13163239] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
LiDAR occupies a vital position in self-driving, as this advanced detection technology enables autonomous vehicles (AVs) to obtain rich environmental information. Ground segmentation of the LiDAR point cloud is a crucial procedure for ensuring AVs' driving safety. However, some current algorithms suffer from limitations such as failure on complex terrain, excessive time and memory usage, and additional pre-training requirements. The Jump-Convolution-Process (JCP) is proposed to solve these issues. JCP converts the segmentation problem of the 3D point cloud into a smoothing problem on a 2D image and significantly improves the segmentation result at little computational cost. First, the point cloud labeled by an improved local feature extraction algorithm is projected onto an RGB image. Then, the pixel value is initialized with the point's label and continuously updated by image convolution. Finally, a jump operation is introduced into the convolution process to perform calculations only on the low-confidence points filtered by a credibility propagation algorithm, reducing the time cost. Experiments on three datasets show that our approach has better segmentation accuracy and terrain adaptability than three existing methods. Meanwhile, the average time for the proposed method to process one scan from a 64-beam and a 128-beam LiDAR is only 8.61 ms and 15.62 ms, respectively, which fully meets the AVs' requirement for real-time performance.
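The jump-convolution idea in this abstract, projecting labeled points onto a 2D grid and then smoothing labels while "jumping" over cells that are already confidently classified, can be sketched as follows. This is an illustrative simplification, not the paper's exact algorithm: the grid dimensions, neighborhood, and confidence mask are assumptions.

```python
import numpy as np

def project_to_grid(points, labels, h=64, w=360):
    """Project 3D LiDAR points onto a 2D elevation/azimuth grid."""
    x, y, z = points.T
    az = np.degrees(np.arctan2(y, x)) % 360            # azimuth in [0, 360)
    el = np.degrees(np.arctan2(z, np.hypot(x, y)))     # elevation angle
    rows = ((el - el.min()) / (np.ptp(el) + 1e-9) * (h - 1)).astype(int)
    cols = (az / 360 * (w - 1)).astype(int)
    grid = -np.ones((h, w), dtype=int)                 # -1 marks empty cells
    grid[rows, cols] = labels
    return grid

def smooth_labels(grid, confident, iters=3):
    """Majority-vote label smoothing that 'jumps' over confident cells."""
    g = grid.copy()
    for _ in range(iters):
        new = g.copy()
        for i in range(1, g.shape[0] - 1):
            for j in range(1, g.shape[1] - 1):
                if confident[i, j] or g[i, j] < 0:
                    continue                           # jump: skip, saving time
                nb = g[i - 1:i + 2, j - 1:j + 2].ravel()
                nb = nb[nb >= 0]                       # ignore empty neighbours
                if nb.size:
                    new[i, j] = np.bincount(nb).argmax()
        g = new
    return g
```

Restricting the update to low-confidence cells is what keeps the per-scan cost low: the convolution pass touches only the pixels where the initial labeling is uncertain.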
32
Borgmann B, Schatz V, Hammer M, Hebel M, Arens M, Stilla U. MODISSA: a multipurpose platform for the prototypical realization of vehicle-related applications using optical sensors. APPLIED OPTICS 2021; 60:F50-F65. [PMID: 34612862 DOI: 10.1364/ao.423599] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/26/2021] [Accepted: 05/09/2021] [Indexed: 06/13/2023]
Abstract
We present the current state of development of the sensor-equipped car MODISSA, with which Fraunhofer IOSB realizes a configurable experimental platform for hardware evaluation and software development in the context of mobile mapping and vehicle-related safety and protection. MODISSA is based on a van that has successively been equipped with a variety of optical sensors over the past few years, and contains hardware for complete raw data acquisition, georeferencing, real-time data analysis, and immediate visualization on in-car displays. We demonstrate the capabilities of MODISSA by giving a deeper insight into experiments with its specific configuration in the scope of three different applications. Other research groups can benefit from these experiences when setting up their own mobile sensor system, especially regarding the selection of hardware and software, the knowledge of possible sources of error, and the handling of the acquired sensor data.
33
Cooperative Intersection with Misperception in Partially Connected and Automated Traffic. SENSORS 2021; 21:s21155003. [PMID: 34372240 PMCID: PMC8348399 DOI: 10.3390/s21155003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/19/2021] [Revised: 07/19/2021] [Accepted: 07/20/2021] [Indexed: 11/17/2022]
Abstract
The emerging connected and automated vehicle (CAV) has the potential to improve traffic efficiency and safety. Through cooperation between vehicles and the intersection, CAVs can adjust speed and form platoons to pass the intersection faster. However, perceptual errors may occur due to external conditions affecting vehicle sensors. Meanwhile, CAVs and conventional vehicles will coexist in the near future, and imprecise perception needs to be tolerated in exchange for mobility. In this paper, we present a simulation model to capture the effect of vehicle perceptual error and time headway on traffic performance at a cooperative intersection, where the intelligent driver model (IDM) is extended by an Ornstein–Uhlenbeck process to describe the perceptual error dynamically. Then, we introduce a longitudinal control model to determine vehicle dynamics and role switching, forming platoons and reducing frequent deceleration. Furthermore, to realize accurate perception and improve safety, we propose a data fusion scheme in which Differential Global Positioning System (DGPS) data is fused with sensor data by a Kalman filter. Finally, a comprehensive study is presented on how perceptual error and time headway affect crashes, energy consumption, and congestion at cooperative intersections in partially connected and automated traffic. The simulation results show a trade-off between traffic efficiency and safety: the number of accidents is reduced with larger vehicle intervals, but excessive time headway may result in low traffic efficiency and energy efficiency. In addition, compared with a scheme in which on-board sensors perceive independently, our proposed data fusion scheme improves overall traffic flow, congestion time, passenger comfort, and energy efficiency under various CAV penetration rates.
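The abstract's core mechanism, an IDM car-following model whose perceived gap is perturbed by an Ornstein–Uhlenbeck (OU) error process, can be sketched like this. The parameter values (desired speed, headway, OU rate and noise scale) are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

def idm_accel(v, dv, s, v0=30.0, T=1.5, a_max=1.5, b=2.0, s0=2.0):
    """Intelligent Driver Model: acceleration from speed v, approach rate dv, gap s."""
    s_star = s0 + v * T + v * dv / (2 * np.sqrt(a_max * b))
    return a_max * (1 - (v / v0) ** 4 - (s_star / s) ** 2)

def ou_step(err, dt, theta=0.5, mu=0.0, sigma=0.3, rng=None):
    """One Euler-Maruyama step of an OU process modelling perception error."""
    if rng is None:
        rng = np.random.default_rng()
    return err + theta * (mu - err) * dt + sigma * np.sqrt(dt) * rng.standard_normal()

# The follower reacts to the *perceived* gap: true gap plus the OU error.
rng = np.random.default_rng(0)
err, dt = 0.0, 0.1
for _ in range(100):
    err = ou_step(err, dt, rng=rng)
    acc = idm_accel(v=20.0, dv=2.0, s=30.0 + err)
```

Because the OU process is mean-reverting, the perceived gap fluctuates around the true gap rather than drifting away, which matches the idea of a bounded, dynamic perceptual error.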
34
OctoPath: An OcTree-Based Self-Supervised Learning Approach to Local Trajectory Planning for Mobile Robots. SENSORS 2021; 21:s21113606. [PMID: 34067237 PMCID: PMC8196842 DOI: 10.3390/s21113606] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/26/2021] [Revised: 05/09/2021] [Accepted: 05/19/2021] [Indexed: 11/24/2022]
Abstract
Autonomous mobile robots are usually faced with challenging situations when driving in complex environments. Namely, they have to recognize static and dynamic obstacles, plan the driving path, and execute their motion. To address the issues of perception and path planning, in this paper we introduce OctoPath, an encoder-decoder deep neural network trained in a self-supervised manner to predict the local optimal trajectory for the ego-vehicle. Using the discretization provided by a 3D octree environment model, our approach reformulates trajectory prediction as a classification problem with a configurable resolution. During training, OctoPath minimizes the error between the predicted and the manually driven trajectories in a given training dataset. This allows us to avoid the pitfall of regression-based trajectory estimation, in which there is an infinite state space for the output trajectory points. Environment sensing is performed using a 40-channel mechanical LiDAR sensor, fused with an inertial measurement unit and wheel odometry for state estimation. The experiments are performed both in simulation and in real life, using our own GridSim simulator and RovisLab's Autonomous Mobile Test Unit platform. We evaluate the predictions of OctoPath in different driving scenarios, both indoor and outdoor, while benchmarking our system against a baseline hybrid A-Star algorithm and a regression-based supervised learning method, as well as against a CNN learning-based optimal path planning method.
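The reformulation of trajectory prediction as classification rests on discretizing continuous waypoints into cell ids of a fixed-resolution grid (octree leaves, in the paper's 3D case). A minimal 2D sketch of that mapping, with made-up bounds and resolution:

```python
def cell_index(x, y, x_min, y_min, resolution, grid_width):
    """Map a continuous waypoint to a discrete cell id, so that
    predicting the next waypoint becomes a classification problem."""
    col = int((x - x_min) / resolution)
    row = int((y - y_min) / resolution)
    return row * grid_width + col

def cell_center(idx, x_min, y_min, resolution, grid_width):
    """Inverse mapping: cell id back to the cell-centre coordinates."""
    row, col = divmod(idx, grid_width)
    return (x_min + (col + 0.5) * resolution,
            y_min + (row + 0.5) * resolution)
```

Predicting a cell id instead of raw coordinates bounds the output space to a finite set of classes, at the cost of a quantization error of at most half the cell resolution.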
35
Ran J, Cao R, Cai J, Yu T, Zhao D, Wang Z. Development and Validation of a Nomogram for Preoperative Prediction of Lymph Node Metastasis in Lung Adenocarcinoma Based on Radiomics Signature and Deep Learning Signature. Front Oncol 2021; 11:585942. [PMID: 33968715 PMCID: PMC8101496 DOI: 10.3389/fonc.2021.585942] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2020] [Accepted: 04/06/2021] [Indexed: 12/11/2022] Open
Abstract
Background and Purpose The preoperative LN (lymph node) status of patients with LUAD (lung adenocarcinoma) is a key factor for determining if systemic nodal dissection is required, which is usually confirmed after surgery. This study aimed to develop and validate a nomogram for preoperative prediction of LN metastasis in LUAD based on a radiomics signature and deep learning signature. Materials and Methods This retrospective study included a training cohort of 200 patients, an internal validation cohort of 40 patients, and an external validation cohort of 60 patients. Radiomics features were extracted from conventional CT (computed tomography) images. T-test and Extra-trees were performed for feature selection, and the selected features were combined using logistic regression to build the radiomics signature. The features and weights of the last fully connected layer of a CNN (convolutional neural network) were combined to obtain a deep learning signature. By incorporating clinical risk factors, the prediction model was developed using a multivariable logistic regression analysis, based on which the nomogram was developed. The calibration, discrimination and clinical values of the nomogram were evaluated. Results Multivariate logistic regression analysis showed that the radiomics signature, deep learning signature, and CT-reported LN status were independent predictors. The prediction model developed by all the independent predictors showed good discrimination (C-index, 0.820; 95% CI, 0.762 to 0.879) and calibration (Hosmer-Lemeshow test, P=0.193) capabilities for the training cohort. Additionally, the model achieved satisfactory discrimination (C-index, 0.861; 95% CI, 0.769 to 0.954) and calibration (Hosmer-Lemeshow test, P=0.775) when applied to the external validation cohort. An analysis of the decision curve showed that the nomogram had potential for clinical application. 
Conclusions This study presents a prediction model based on radiomics signature, deep learning signature, and CT-reported LN status that can be used to predict preoperative LN metastasis in patients with LUAD.
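The fusion step described here, combining the radiomics signature, deep learning signature, and CT-reported LN status in a multivariable logistic regression, can be sketched with synthetic data. All data and coefficients below are invented for illustration; the AUC computed at the end is analogous to the reported C-index, not a reproduction of it.

```python
import numpy as np

def fit_logistic(X, y, lr=0.5, iters=2000):
    """Multivariable logistic regression via plain gradient descent."""
    Xb = np.column_stack([np.ones(len(X)), X])     # prepend intercept
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def predict_proba(X, w):
    Xb = np.column_stack([np.ones(len(X)), X])
    return 1 / (1 + np.exp(-Xb @ w))

def auc(y, p):
    """Probability a random positive outranks a random negative (the C-index)."""
    pos, neg = p[y == 1], p[y == 0]
    return float((pos[:, None] > neg[None, :]).mean())

# Synthetic stand-ins for the three independent predictors
rng = np.random.default_rng(0)
n = 200
radiomics_sig = rng.normal(size=n)
deep_sig = rng.normal(size=n)
ct_ln_status = rng.integers(0, 2, size=n).astype(float)
logit = 1.2 * radiomics_sig + 0.8 * deep_sig + 0.9 * ct_ln_status - 0.5
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

X = np.column_stack([radiomics_sig, deep_sig, ct_ln_status])
w = fit_logistic(X, y)
c_index = auc(y, predict_proba(X, w))
```

The nomogram itself is just a graphical rendering of the fitted coefficients: each predictor's contribution to the linear score is drawn as a scaled axis.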
Affiliation(s)
- Jia Ran
- Engineering Research Center of Molecular & Neuro-imaging, Ministry of Education, School of Life Science and Technology, Xidian University, Xi'an, China
- Ran Cao
- Engineering Research Center of Molecular & Neuro-imaging, Ministry of Education, School of Life Science and Technology, Xidian University, Xi'an, China
- Jiumei Cai
- Department of Medical Imaging, Cancer Hospital of China Medical University, Shenyang, China
- Tao Yu
- Department of Medical Imaging, Cancer Hospital of China Medical University, Shenyang, China; Department of Medical Imaging, Liaoning Cancer Hospital & Institute, Shenyang, China
- Dan Zhao
- Department of Medical Imaging, Cancer Hospital of China Medical University, Shenyang, China; Department of Medical Imaging, Liaoning Cancer Hospital & Institute, Shenyang, China
- Zhongliang Wang
- Engineering Research Center of Molecular & Neuro-imaging, Ministry of Education, School of Life Science and Technology, Xidian University, Xi'an, China
36
Shen YT, Chen L, Yue WW, Xu HX. Artificial intelligence in ultrasound. Eur J Radiol 2021; 139:109717. [PMID: 33962110 DOI: 10.1016/j.ejrad.2021.109717] [Citation(s) in RCA: 52] [Impact Index Per Article: 17.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2021] [Revised: 03/28/2021] [Accepted: 04/11/2021] [Indexed: 12/13/2022]
Abstract
Ultrasound (US), a flexible green imaging modality, is expanding globally as a first-line imaging technique in various clinical fields, following the continual emergence of advanced ultrasonic technologies and the well-established US-based digital health system. In US practice, qualified physicians must manually collect and visually evaluate images for the detection, identification and monitoring of diseases. Diagnostic performance is inevitably reduced by the intrinsically high operator-dependence of US. In contrast, artificial intelligence (AI) excels at automatically recognizing complex patterns and providing quantitative assessments of imaging data, showing high potential to assist physicians in acquiring more accurate and reproducible results. In this article, we first provide a general understanding of AI, machine learning (ML) and deep learning (DL) technologies. We then review the rapidly growing applications of AI, especially DL technology, in the field of US across the following anatomical regions: thyroid, breast, abdomen and pelvis, obstetrics, heart and blood vessels, musculoskeletal system and other organs, covering image quality control, anatomy localization, object detection, lesion segmentation, and computer-aided diagnosis and prognosis evaluation. Finally, we offer our perspective on the challenges and opportunities for the clinical practice of biomedical AI systems in US.
Affiliation(s)
- Yu-Ting Shen
- Department of Medical Ultrasound, Shanghai Tenth People's Hospital, Ultrasound Research and Education Institute, Tongji University School of Medicine, Tongji University Cancer Center, Shanghai Engineering Research Center of Ultrasound Diagnosis and Treatment, National Clinical Research Center of Interventional Medicine, Shanghai, 200072, PR China
- Liang Chen
- Department of Gastroenterology, Shanghai Tenth People's Hospital, Tongji University School of Medicine, Shanghai, 200072, PR China
- Wen-Wen Yue
- Department of Medical Ultrasound, Shanghai Tenth People's Hospital, Ultrasound Research and Education Institute, Tongji University School of Medicine, Tongji University Cancer Center, Shanghai Engineering Research Center of Ultrasound Diagnosis and Treatment, National Clinical Research Center of Interventional Medicine, Shanghai, 200072, PR China.
- Hui-Xiong Xu
- Department of Medical Ultrasound, Shanghai Tenth People's Hospital, Ultrasound Research and Education Institute, Tongji University School of Medicine, Tongji University Cancer Center, Shanghai Engineering Research Center of Ultrasound Diagnosis and Treatment, National Clinical Research Center of Interventional Medicine, Shanghai, 200072, PR China.
37
Guy S, Jacquet C, Tsenkoff D, Argenson JN, Ollivier M. Deep learning for the radiographic diagnosis of proximal femur fractures: Limitations and programming issues. Orthop Traumatol Surg Res 2021; 107:102837. [PMID: 33529731 DOI: 10.1016/j.otsr.2021.102837] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/11/2020] [Revised: 08/08/2020] [Accepted: 08/17/2020] [Indexed: 02/03/2023]
Abstract
INTRODUCTION Radiology is one of the domains where artificial intelligence (AI) yields encouraging results, with diagnostic accuracy that approaches that of experienced radiologists and physicians. Diagnostic errors in traumatology are rare but can have serious functional consequences. Using AI as a radiological diagnostic aid may be beneficial in the emergency room. Thus, effective, low-cost software that helps with making radiographic diagnoses would be a relevant tool for current clinical practice, although this concept has rarely been evaluated in orthopedics for proximal femur fractures (PFF). This led us to conduct a prospective study with the goals of: 1) programming deep learning software to help make the diagnosis of PFF on radiographs and 2) evaluating its performance. HYPOTHESIS It is possible to program effective deep learning software to help make the diagnosis of PFF based on a limited number of radiographs. METHODS Our database consisted of 1309 radiographs: 963 had a PFF, while 346 did not. The sample size was increased 8-fold (resulting in 10,472 radiographs) using a validated technique. Each radiograph was evaluated by an orthopedic surgeon using RectLabel™ software (https://rectlabel.com), differentiating between healthy and fractured zones. Fractures were classified according to the AO system. The deep learning algorithm was programmed on Tensorflow™ software (Google Brain, Santa Clara, CA, USA, tensorflow.org). In all, 9425 annotated radiographs (90%) were used for the training phase and 1047 (10%) for the test phase. RESULTS The sensitivity of the algorithm was 61% for femoral neck fractures and 67% for trochanteric fractures. The specificity was 67% and 69%, the positive predictive value was 55% and 56%, while the negative predictive value was 74% and 78%, respectively. CONCLUSION Our results are not good enough for our algorithm to be used in current clinical practice.
Programming of deep learning software with sufficient diagnostic accuracy can only be done with several tens of thousands of radiographs, or by using transfer learning. LEVEL OF EVIDENCE III; Diagnostic studies, Study of nonconsecutive patients, without consistently applied reference "gold" standard.
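The reported sensitivity, specificity, PPV, and NPV all derive from a single confusion matrix; a small helper makes the definitions explicit. The counts below are hypothetical, chosen only to reproduce the 61% sensitivity and 67% specificity figures, and are not the study's actual counts.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard screening metrics from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),  # fraction of fractures detected
        "specificity": tn / (tn + fp),  # fraction of healthy correctly cleared
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Hypothetical counts reproducing 61% sensitivity / 67% specificity
m = diagnostic_metrics(tp=61, fp=33, fn=39, tn=67)
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on the fracture prevalence in the test set, which is why they differ between the two fracture types even at similar sensitivity.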
Affiliation(s)
- Sylvain Guy
- Institut du Mouvement et de l'appareil Locomoteur, 270, boulevard de Sainte Marguerite, 13009 Marseille, France.
- Christophe Jacquet
- Institut du Mouvement et de l'appareil Locomoteur, 270, boulevard de Sainte Marguerite, 13009 Marseille, France
- Damien Tsenkoff
- Institut du Mouvement et de l'appareil Locomoteur, 270, boulevard de Sainte Marguerite, 13009 Marseille, France
- Jean-Noël Argenson
- Institut du Mouvement et de l'appareil Locomoteur, 270, boulevard de Sainte Marguerite, 13009 Marseille, France
- Matthieu Ollivier
- Institut du Mouvement et de l'appareil Locomoteur, 270, boulevard de Sainte Marguerite, 13009 Marseille, France
38
Deep Reinforcement Learning-Based Path Planning for Multi-Arm Manipulators with Periodically Moving Obstacles. APPLIED SCIENCES-BASEL 2021. [DOI: 10.3390/app11062587] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
In practical workspaces of robot manipulators, it is common to have both static and periodically moving obstacles. Existing results in the literature have focused mainly on static obstacles. This paper is concerned with multi-arm manipulators with periodically moving obstacles. Due to the high-dimensional property and the moving obstacles, existing results struggle to find the optimal path for arbitrary start and goal points. To solve this path planning problem, this paper presents a SAC-based (Soft Actor-Critic) path planning algorithm for multi-arm manipulators with periodically moving obstacles. In particular, the deep neural networks in SAC are designed to utilize the position information of the moving obstacles over a past finite time horizon. In addition, the hindsight experience replay (HER) technique is employed to use the training data efficiently. To show the performance of the proposed SAC-based path planning, both simulation and experimental results using open manipulators are given.
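The key design choice, feeding the SAC networks the obstacle positions over a past finite horizon so a feedforward policy can infer the phase of a periodic obstacle, can be sketched as a small observation builder. The horizon length and dimensions below are assumptions for illustration.

```python
from collections import deque
import numpy as np

class ObstacleHistoryObs:
    """Keeps the last k obstacle positions and stacks them into the policy
    observation, so the network can infer periodic obstacle motion."""
    def __init__(self, k=5, obs_dim=3):
        self.hist = deque([np.zeros(obs_dim)] * k, maxlen=k)

    def update(self, obstacle_pos):
        """Append the newest obstacle position; the oldest is dropped."""
        self.hist.append(np.asarray(obstacle_pos, dtype=float))

    def build(self, joint_angles, goal_pos):
        """Concatenate manipulator state, goal, and obstacle history."""
        return np.concatenate([joint_angles, goal_pos, *self.hist])
```

In the HER part, failed episodes are additionally replayed with the achieved end-effector pose substituted as `goal_pos`, so even unsuccessful trajectories yield useful training signal.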
39
Coordination of Lateral Vehicle Control Systems Using Learning-Based Strategies. ENERGIES 2021. [DOI: 10.3390/en14051291] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
The paper proposes a novel learning-based coordination strategy for lateral control systems of automated vehicles. The motivation of the research is to improve the performance level of the coordinated system compared to the conventional model-based reconfigurable solutions. During vehicle maneuvers, the coordinated control system provides torque vectoring and front-wheel steering angle in order to guarantee the various lateral dynamical performances. The performance specifications are guaranteed on two levels, i.e., primary performances are guaranteed by Linear Parameter Varying (LPV) controllers, while secondary performances (e.g., economy and comfort) are maintained by a reinforcement-learning-based (RL) controller. The coordination of the control systems is carried out by a supervisor. The effectiveness of the proposed coordinated control system is illustrated through high velocity vehicle maneuvers.
40
Bin Issa R, Das M, Rahman MS, Barua M, Rhaman MK, Ripon KSN, Alam MGR. Double Deep Q-Learning and Faster R-CNN-Based Autonomous Vehicle Navigation and Obstacle Avoidance in Dynamic Environment. SENSORS 2021; 21:s21041468. [PMID: 33672476 PMCID: PMC7923439 DOI: 10.3390/s21041468] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/16/2020] [Revised: 02/11/2021] [Accepted: 02/12/2021] [Indexed: 11/24/2022]
Abstract
Autonomous vehicle navigation in an unknown dynamic environment is crucial for both supervised- and Reinforcement Learning-based autonomous maneuvering. The cooperative fusion of these two learning approaches has the potential to be an effective mechanism for tackling indefinite environmental dynamics. Most state-of-the-art autonomous vehicle navigation systems are trained on a specific mapped model with familiar environmental dynamics. This research, however, focuses on the cooperative fusion of supervised and Reinforcement Learning technologies for autonomous navigation of land vehicles in a dynamic and unknown environment. Faster R-CNN, a supervised learning approach, identifies ambient environmental obstacles for untroubled maneuvering of the autonomous vehicle, while the training policies of Double Deep Q-Learning, a Reinforcement Learning approach, enable the autonomous agent to learn effective navigation decisions from the dynamic environment. The proposed model is primarily tested in a gaming environment similar to the real world. It exhibits overall efficiency and effectiveness in the maneuvering of autonomous land vehicles.
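The Double Deep Q-Learning update at the heart of the navigation policy decouples action selection (online network) from action evaluation (target network), which reduces the value overestimation of vanilla DQN. A minimal sketch of the target computation on a batch, assuming Q-values have already been produced by the two networks:

```python
import numpy as np

def double_dqn_targets(rewards, next_q_online, next_q_target, dones, gamma=0.99):
    """Double DQN bootstrap targets: the online net picks the next action,
    the target net supplies that action's value."""
    best = next_q_online.argmax(axis=1)                 # action selection
    evals = next_q_target[np.arange(len(best)), best]   # action evaluation
    return rewards + gamma * evals * (1.0 - dones)      # no bootstrap at episode end
```

In the paper's fusion scheme, the Faster R-CNN detections shape the state the agent sees; the target computation above is unchanged by that choice.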
Affiliation(s)
- Razin Bin Issa
- Department of Computer Science and Engineering, School of Data and Sciences, BRAC University, 66 Mohakhali, Dhaka 1212, Bangladesh; (R.B.I.); (M.D.); (M.S.R.); (M.B.); (M.K.R.); (M.G.R.A.)
- Modhumonty Das
- Department of Computer Science and Engineering, School of Data and Sciences, BRAC University, 66 Mohakhali, Dhaka 1212, Bangladesh; (R.B.I.); (M.D.); (M.S.R.); (M.B.); (M.K.R.); (M.G.R.A.)
- Md. Saferi Rahman
- Department of Computer Science and Engineering, School of Data and Sciences, BRAC University, 66 Mohakhali, Dhaka 1212, Bangladesh; (R.B.I.); (M.D.); (M.S.R.); (M.B.); (M.K.R.); (M.G.R.A.)
- Monika Barua
- Department of Computer Science and Engineering, School of Data and Sciences, BRAC University, 66 Mohakhali, Dhaka 1212, Bangladesh; (R.B.I.); (M.D.); (M.S.R.); (M.B.); (M.K.R.); (M.G.R.A.)
- Md. Khalilur Rhaman
- Department of Computer Science and Engineering, School of Data and Sciences, BRAC University, 66 Mohakhali, Dhaka 1212, Bangladesh; (R.B.I.); (M.D.); (M.S.R.); (M.B.); (M.K.R.); (M.G.R.A.)
- Kazi Shah Nawaz Ripon
- Faculty of Computer Sciences, Østfold University College, 1783 Halden, Norway
- Correspondence:
- Md. Golam Rabiul Alam
- Department of Computer Science and Engineering, School of Data and Sciences, BRAC University, 66 Mohakhali, Dhaka 1212, Bangladesh; (R.B.I.); (M.D.); (M.S.R.); (M.B.); (M.K.R.); (M.G.R.A.)
41
Paiva Proença Lobo Lopes F, Kitamura FC, Prado GF, Kuriki PEDA, Garcia MRT. Machine learning model for predicting severity prognosis in patients infected with COVID-19: Study protocol from COVID-AI Brasil. PLoS One 2021; 16:e0245384. [PMID: 33524039 PMCID: PMC7850490 DOI: 10.1371/journal.pone.0245384] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2020] [Accepted: 12/28/2020] [Indexed: 12/21/2022] Open
Abstract
The new coronavirus, SARS-CoV-2, is a single-stranded RNA beta coronavirus, initially identified in Wuhan (Hubei province, China) and currently spreading across six continents, causing considerable harm to patients, with no specific tools so far to provide prognostic outcomes. Thus, the aim of this study is to evaluate possible findings on chest CT of patients with signs and symptoms of respiratory syndromes and positive epidemiological factors for COVID-19 infection, and to correlate them with the course of the disease. We also expect to develop a machine learning algorithm specific for this purpose, through pulmonary segmentation, which can predict possible prognostic factors with more accurate results. Our alternative hypothesis is that a machine learning model based on clinical, radiological and epidemiological data will be able to predict the severity prognosis of patients infected with COVID-19. We will perform a multicenter retrospective longitudinal study to obtain a large number of cases in a short period of time, for better study validation. Our convenience sample (at least 20 cases for each outcome) will be collected in each center considering the inclusion and exclusion criteria. We will evaluate patients who enter the hospital with clinical signs and symptoms of acute respiratory syndrome, from March to May 2020. We will include individuals with signs and symptoms of acute respiratory syndrome, with a positive epidemiological history for COVID-19, who have undergone chest computed tomography. We will assess the chest CT of these patients and correlate the findings with the course of the disease. Primary outcomes: 1) Time to hospital discharge; 2) Length of stay in the ICU; 3) Orotracheal intubation; 4) Development of Acute Respiratory Distress Syndrome.
Secondary outcomes: 1) Sepsis; 2) Hypotension or cardiocirculatory dysfunction requiring the prescription of vasopressors or inotropes; 3) Coagulopathy; 4) Acute Myocardial Infarction; 5) Acute Renal Insufficiency; 6) Death. We will use the AUC and F1-score of these algorithms as the main metrics, and we hope to identify algorithms capable of generalizing their results for each specified primary and secondary outcome.
Affiliation(s)
- Felipe Campos Kitamura
- Departments of Radiology and Innovation, Diagnósticos da América (Dasa), São Paulo, São Paulo, Brasil
43
High-Resolution Traffic Sensing with Probe Autonomous Vehicles: A Data-Driven Approach. SENSORS 2021; 21:s21020464. [PMID: 33440742 PMCID: PMC7827469 DOI: 10.3390/s21020464] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/18/2020] [Revised: 01/04/2021] [Accepted: 01/05/2021] [Indexed: 12/02/2022]
Abstract
Recent decades have witnessed the breakthrough of autonomous vehicles (AVs), and the sensing capabilities of AVs have improved dramatically. The various sensors installed on AVs collect massive data and perceive the surrounding traffic continuously. In fact, a fleet of AVs can serve as floating (or probe) sensors, which can be utilized to infer traffic information while cruising around the roadway network. Unlike conventional traffic sensing methods relying on fixed-location sensors, or on moving sensors that acquire only the information of the carrying vehicle, this paper leverages data from AVs whose sensors capture not only the AVs' own state but also the characteristics of the surrounding traffic. A high-resolution data-driven traffic sensing framework is proposed, which estimates the fundamental traffic state characteristics, namely flow, density and speed, at high spatio-temporal resolution for each lane of a general road, and it is developed for different levels of AV perception capability and for any AV market penetration rate. Experimental results show that the proposed method achieves high accuracy even with a low AV market penetration rate. This study would help policymakers and private sectors (e.g., Waymo) understand the value of the massive data collected by AVs for traffic operation and management.
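The three fundamental quantities the framework estimates are linked by the relation q = k·v (flow = density × space-mean speed). A toy per-lane estimator from what a probe AV could observe is sketched below; the observation model (spacings and speeds of surrounding vehicles in one lane) and the units are assumptions, far simpler than the paper's framework.

```python
def lane_state_from_probe(spacings_m, speeds_mps):
    """Estimate density (veh/km), space-mean speed (km/h) and flow (veh/h)
    of one lane from the vehicle spacings and speeds a probe AV observes,
    using the fundamental relation flow = density * speed."""
    n = len(spacings_m)
    density = n / sum(spacings_m)        # vehicles per metre of road
    speed = sum(speeds_mps) / n          # space-mean speed, m/s
    flow = density * speed               # vehicles per second
    return density * 1000, speed * 3.6, flow * 3600
```

A probe-based estimate like this is local to the AV's field of view; the paper's contribution is fusing many such local observations, across perception levels and penetration rates, into a network-wide state estimate.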
44
Arango JF, Bergasa LM, Revenga PA, Barea R, López-Guillén E, Gómez-Huélamo C, Araluce J, Gutiérrez R. Drive-By-Wire Development Process Based on ROS for an Autonomous Electric Vehicle. SENSORS (BASEL, SWITZERLAND) 2020; 20:s20216121. [PMID: 33121213 PMCID: PMC7662766 DOI: 10.3390/s20216121] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/09/2020] [Revised: 10/14/2020] [Accepted: 10/26/2020] [Indexed: 06/01/2023]
Abstract
This paper presents the development process of a robust ROS-based Drive-By-Wire system designed from scratch for an autonomous electric vehicle built on an open-source chassis. A review of the vehicle characteristics and the different modules of our navigation architecture is carried out to put our Drive-By-Wire system in context. The system is composed of a Steer-By-Wire module and a Throttle-By-Wire module that allow driving the vehicle using commands of linear speed and curvature, which are sent through a local network from the control unit of the vehicle. Additionally, a Manual/Automatic switching system has been implemented, which allows the driver to activate autonomous driving and to safely take control of the vehicle at any time. Finally, validation tests were performed on our Drive-By-Wire system, as part of our whole autonomous navigation architecture, demonstrating the correct operation of our proposal. The results prove that the Drive-By-Wire system has the behaviour and requirements necessary to automate an electric vehicle. In addition, after 812 h of testing, the Drive-By-Wire system proved robust, with high reliability. The developed system is the basis for the validation and implementation, in a real vehicle, of new autonomous navigation techniques developed within the group.
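A (speed, curvature) command pair is a common drive-by-wire interface because it is chassis-agnostic; under a kinematic bicycle model the curvature maps to a front steering angle as δ = atan(L·κ). A sketch of that translation, where the wheelbase value is an assumption, not this vehicle's actual parameter:

```python
import math

def curvature_to_steering(curvature, wheelbase_m=2.7):
    """Kinematic bicycle model: front-wheel steering angle (rad)
    that realises a commanded path curvature (1/m)."""
    return math.atan(wheelbase_m * curvature)

def speed_curvature_to_actuators(v_mps, curvature, wheelbase_m=2.7):
    """Translate a (speed, curvature) command into low-level
    (throttle speed setpoint, steering angle) actuator targets."""
    return v_mps, curvature_to_steering(curvature, wheelbase_m)
```

Keeping the network-facing interface at the (speed, curvature) level lets the control unit stay ignorant of steering geometry; only the Steer-By-Wire module needs the wheelbase.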
45
A Taillight Matching and Pairing Algorithm for Stereo-Vision-Based Nighttime Vehicle-to-Vehicle Positioning. APPLIED SCIENCES-BASEL 2020. [DOI: 10.3390/app10196800] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
The stereo vision system has several potential benefits for delivering advanced autonomous vehicles compared to other existing technologies, such as in vehicle-to-vehicle (V2V) positioning. This paper explores a stereo-vision-based nighttime V2V positioning process based on detecting vehicle taillights. To address the crucial problems in applying this process to urban traffic, we propose a three-fold contribution. The first contribution is a detection method that labels and determines the pixel coordinates of every taillight region in the images. Second, a stereo matching method based on a gradient boosted tree is proposed to determine which taillight in the left image corresponds to each taillight in the right image. Third, we offer a neural-network-based method to pair every two taillights that belong to the same vehicle. An experiment on a four-lane traffic road was conducted, and the results were used to quantitatively evaluate the performance of each proposed method in real situations.
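Once a left-right taillight pair has been matched, the positioning step reduces to standard pinhole stereo triangulation, Z = f·B/d. A minimal sketch with assumed camera parameters (the focal length and baseline below are placeholders, not the paper's rig):

```python
def taillight_depth(x_left_px, x_right_px, focal_px=700.0, baseline_m=0.5):
    """Depth of a matched taillight from its horizontal disparity in a
    rectified stereo pair: Z = focal * baseline / (x_left - x_right)."""
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("a valid match must have positive disparity")
    return focal_px * baseline_m / disparity
```

This is why the matching step matters so much: a wrong left-right correspondence produces a wrong disparity, and the depth error it induces grows roughly quadratically with distance.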
|
46
|
Autonomous Vehicles: Vehicle Parameter Estimation Using Variational Bayes and Kinematics. APPLIED SCIENCES-BASEL 2020. [DOI: 10.3390/app10186317] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
On-board sensory systems in autonomous vehicles make it possible to acquire information about the vehicle itself and about its relevant surroundings. With this information, the vehicle actuators are able to follow the corresponding control commands and behave accordingly. Localization is thus a critical feature in autonomous driving, enabling trajectories to be defined and followed and maneuvers to be executed. Localization approaches using sensor data are mainly based on Bayes filters. Whitebox models used to this end rely on kinematics and vehicle parameters, such as wheel radii, to infer the vehicle's movement. As a consequence, faulty vehicle parameters lead to poor localization results. Blackbox models, on the other hand, use motion data to model vehicle behavior without relying on vehicle parameters. Due to their high non-linearity, blackbox approaches outperform whitebox models, but faulty behaviour, such as overfitting, is hard to identify without intensive experiments. In this paper, we extend blackbox models using kinematics, inferring vehicle parameters and thereby transforming blackbox models into whitebox models. The probabilistic perspective of vehicle movement is extended with random variables representing vehicle parameters. We validated our approach by acquiring and analyzing simulated noisy movement data from mobile robots and vehicles. Results show that it is possible to estimate vehicle parameters with few kinematic assumptions.
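The whitebox idea of inferring a vehicle parameter from kinematics can be sketched with a simple case: estimating wheel radius from the no-slip rolling relation distance = radius × wheel rotation. The toy least-squares estimator on simulated noisy data below is an illustration only, not the paper's variational Bayes method:

```python
import random

def estimate_wheel_radius(angles, distances):
    """Least-squares estimate of wheel radius r from the kinematic relation
    distance = r * angle (no-slip rolling): r = sum(a*d) / sum(a*a)."""
    num = sum(a * d for a, d in zip(angles, distances))
    den = sum(a * a for a in angles)
    return num / den

random.seed(0)
true_r = 0.30  # metres (assumed ground truth for this simulation)
angles = [i * 0.5 for i in range(1, 41)]               # cumulative rotation (rad)
distances = [true_r * a + random.gauss(0, 0.01) for a in angles]  # noisy odometry
r_hat = estimate_wheel_radius(angles, distances)
```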
|
47
|
Arshad S, Sualeh M, Kim D, Nam DV, Kim GW. Clothoid: An Integrated Hierarchical Framework for Autonomous Driving in a Dynamic Urban Environment. SENSORS 2020; 20:s20185053. [PMID: 32899543 PMCID: PMC7570716 DOI: 10.3390/s20185053] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/22/2020] [Revised: 09/01/2020] [Accepted: 09/02/2020] [Indexed: 11/16/2022]
Abstract
In recent years, research and development of autonomous driving technology have gained much interest, and many autonomous driving frameworks have been developed. However, building a safely operating, fully functional autonomous driving framework is still a challenge. Several accidents have occurred involving autonomous vehicles, including Tesla and Volvo XC90 models, resulting in serious injuries and deaths. A major driver of this demand is the increase in urbanization and mobility needs. Autonomous vehicles are expected to increase road safety by reducing accidents that occur due to human error. Accurate sensing of the environment and safe driving under various scenarios must be ensured to achieve the highest level of autonomy. This research presents Clothoid, a unified framework for fully autonomous vehicles that integrates modules for HD mapping, localization, environmental perception, path planning, and control, while considering safety, comfort, and scalability in real traffic environments. The proposed framework enables obstacle avoidance, pedestrian safety, object detection, road blockage avoidance, path planning for single-lane and multi-lane routes, and safe driving throughout the journey. The performance of each module has been validated in K-City under multiple scenarios, in which Clothoid drove safely from the starting point to the goal point. The vehicle was among the top five to successfully finish the Hyundai Autonomous Vehicle Challenge (AVC).
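A hierarchical framework of this kind chains mapping/localization, perception, planning, and control so that each layer consumes the output of the previous one. The toy pipeline below only illustrates that layering; the stage names, data, and gains are invented, not Clothoid's actual modules:

```python
from typing import Callable, List

class Pipeline:
    """Minimal hierarchical driving pipeline: each stage consumes the previous
    stage's output, mirroring perception -> planning -> control layering."""
    def __init__(self):
        self.stages: List[Callable] = []

    def add(self, stage: Callable) -> "Pipeline":
        self.stages.append(stage)
        return self

    def run(self, observation):
        data = observation
        for stage in self.stages:
            data = stage(data)
        return data

# Toy stages: localize within the lane, plan a correction, emit a steering command.
pipe = (Pipeline()
        .add(lambda obs: {"lane_offset": obs["gps_x"] - obs["lane_center"]})
        .add(lambda state: {"target_correction": -state["lane_offset"]})
        .add(lambda plan: {"steer": 0.5 * plan["target_correction"]}))
cmd = pipe.run({"gps_x": 1.2, "lane_center": 1.0})
```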
|
48
|
Kabzan J, Valls MI, Reijgwart VJF, Hendrikx HFC, Ehmke C, Prajapat M, Bühler A, Gosala N, Gupta M, Sivanesan R, Dhall A, Chisari E, Karnchanachari N, Brits S, Dangel M, Sa I, Dubé R, Gawel A, Pfeiffer M, Liniger A, Lygeros J, Siegwart R. AMZ Driverless: The full autonomous racing system. J FIELD ROBOT 2020. [DOI: 10.1002/rob.21977] [Citation(s) in RCA: 29] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Affiliation(s)
- Inkyu Sa
- Autonomous Systems Lab (ASL), ETH Zürich, Zurich, Switzerland
- Renaud Dubé
- Autonomous Systems Lab (ASL), ETH Zürich, Zurich, Switzerland
- Abel Gawel
- Autonomous Systems Lab (ASL), ETH Zürich, Zurich, Switzerland
- Mark Pfeiffer
- Autonomous Systems Lab (ASL), ETH Zürich, Zurich, Switzerland
- John Lygeros
- Automatic Control Laboratory (IfA), ETH Zürich, Zurich, Switzerland
|
49
|
Fayyad J, Jaradat MA, Gruyer D, Najjaran H. Deep Learning Sensor Fusion for Autonomous Vehicle Perception and Localization: A Review. SENSORS (BASEL, SWITZERLAND) 2020; 20:E4220. [PMID: 32751275 PMCID: PMC7436174 DOI: 10.3390/s20154220] [Citation(s) in RCA: 48] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/16/2020] [Revised: 07/23/2020] [Accepted: 07/24/2020] [Indexed: 01/31/2023]
Abstract
Autonomous vehicles (AVs) are expected to improve, reshape, and revolutionize the future of ground transportation. It is anticipated that ordinary vehicles will one day be replaced with smart vehicles that are able to make decisions and perform driving tasks on their own. In order to achieve this objective, self-driving vehicles are equipped with sensors that are used to sense and perceive both their surroundings and the faraway environment, using further advances in communication technologies, such as 5G. In the meantime, local perception, as with human beings, will continue to be an effective means of controlling the vehicle at short range. On the other hand, extended perception allows for anticipation of distant events and produces smarter behavior to guide the vehicle to its destination while respecting a set of criteria (safety, energy management, traffic optimization, comfort). In spite of the remarkable advancements of sensor technologies in terms of their effectiveness and applicability for AV systems in recent years, sensors can still fail because of noise, ambient conditions, or manufacturing defects, among other factors; hence, it is not advisable to rely on a single sensor for any of the autonomous driving tasks. The practical solution is to incorporate multiple competitive and complementary sensors that work synergistically to overcome their individual shortcomings. This article provides a comprehensive review of the state-of-the-art methods utilized to improve the performance of AV systems in short-range or local vehicle environments. Specifically, it focuses on recent studies that use deep learning sensor fusion algorithms for perception, localization, and mapping. The article concludes by highlighting some of the current trends and possible future research directions.
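The review's core premise, that no single sensor should be relied on, is captured by the simplest fusion rule: inverse-variance weighting of independent measurements, under which the fused variance is never worse than the best sensor's. A minimal sketch, separate from the deep learning methods the review surveys; the LiDAR/radar values are illustrative:

```python
def fuse(z1, var1, z2, var2):
    """Inverse-variance weighted fusion of two independent measurements.
    The fused variance is always <= the smaller input variance, which is
    why combining complementary sensors beats relying on either alone."""
    w1 = 1.0 / var1
    w2 = 1.0 / var2
    fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)
    return fused, fused_var

# LiDAR range 10.2 m (variance 0.01) fused with radar range 10.8 m (variance 0.04).
z, var = fuse(10.2, 0.01, 10.8, 0.04)
```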
Affiliation(s)
- Jamil Fayyad
- School of Engineering, University of British Columbia, Kelowna, BC V1V 1V7, Canada
- Mohammad A. Jaradat
- Department of Mechanical Engineering, American University of Sharjah, Sharjah, UAE
- Department of Mechanical Engineering, Jordan University of Science & Technology, Irbid 22110, Jordan
- Dominique Gruyer
- PICS-L, COSYS, University Gustave Eiffel, IFSTTAR, 25 allée des Marronniers, 78000 Versailles, France
- Homayoun Najjaran
- School of Engineering, University of British Columbia, Kelowna, BC V1V 1V7, Canada
|
50
|
Abstract
A point cloud is a set of points defined in a 3D metric space. Point clouds have become one of the most significant data formats for 3D representation and are gaining popularity as a result of the increased availability of acquisition devices and their growing application in areas such as robotics, autonomous driving, and augmented and virtual reality. Deep learning is now the most powerful tool for data processing in computer vision and is becoming the preferred technique for tasks such as classification, segmentation, and detection. While deep learning techniques are mainly applied to data with a structured grid, the point cloud is unstructured, and this lack of structure makes direct processing of point clouds with deep learning very challenging. This paper reviews recent state-of-the-art deep learning techniques, mainly focusing on raw point cloud data. The initial work on deep learning directly with raw point cloud data did not model local regions; subsequent approaches model local regions through sampling and grouping. More recently, several approaches have been proposed that not only model the local regions but also explore the correlation between points within them. From the survey, we conclude that approaches that model local regions and take into account the correlation between points in those regions perform better. Contrary to existing reviews, this paper provides a general structure for learning with raw point clouds, and the various methods are compared within that structure. This work also introduces the popular 3D point cloud benchmark datasets and discusses the application of deep learning to popular 3D vision tasks, including classification, segmentation, and detection.
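The central trick behind learning directly on raw, unordered point clouds, as in the PointNet-style methods this survey covers, is a shared per-point transform followed by a symmetric (order-independent) pooling. A plain-Python sketch in which a hand-written feature function stands in for a learned MLP:

```python
def point_feature(p):
    """Shared per-point transform (a stand-in for a learned MLP)."""
    x, y, z = p
    return (x * x, y * y, z * z, x + y + z)

def global_descriptor(points):
    """Symmetric max-pool over per-point features: the result is identical
    for any ordering of the input points, so the model can consume raw,
    unstructured point clouds directly."""
    feats = [point_feature(p) for p in points]
    return tuple(max(f[i] for f in feats) for i in range(4))

cloud = [(0.1, 0.2, 0.3), (0.4, 0.0, 0.5), (0.2, 0.6, 0.1)]
shuffled = list(reversed(cloud))  # any permutation of the same points
d1 = global_descriptor(cloud)
d2 = global_descriptor(shuffled)
```

Because max pooling is symmetric, `d1` and `d2` are identical even though the input orderings differ.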
|