1
Abstract
Localization and mapping technologies are essential for all varieties of Unmanned Aerial Vehicles (UAVs) to perform their operations. The use of micro/nano-size UAVs is expected to increase in the near future. Such vehicles are sometimes expendable platforms, and reuse may not be possible. Because of weight, cost, and size limitations, compact, body-mounted, low-cost cameras are preferred on these UAVs. Visual simultaneous localization and mapping (vSLAM) methods provide situational awareness for micro/nano-size UAVs. Fast rotational movements during flight with gimbal-free, body-mounted cameras cause motion blur, and above a certain blur level tracking losses occur, preventing vSLAM algorithms from operating effectively. In this study, a novel vSLAM framework is proposed that prevents tracking losses in micro/nano-UAVs caused by motion blur. In the proposed framework, the blur level of each frame obtained from the platform camera is determined, and frames whose focus measure score falls below a threshold are restored by dedicated motion-deblurring methods. The major causes of tracking loss are analyzed in experimental studies, and the framework makes vSLAM algorithms robust against them. The framework is shown to prevent tracking losses at processing speeds of 5, 10, and 20 fps, at which standard vSLAM algorithms fail; with the framework, vSLAM continues normal operation at these speeds, which is a key advantage of this study.
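The abstract does not name a specific focus measure; the variance of the Laplacian is a common choice for scoring motion blur. A minimal sketch of the blur-gating step, assuming OpenCV, an illustrative threshold, and a hypothetical `deblur` routine standing in for the paper's motion-deblurring methods:

```python
import cv2
import numpy as np

BLUR_THRESHOLD = 100.0  # illustrative value; the paper's threshold is not given here

def focus_measure(gray: np.ndarray) -> float:
    """Variance of the Laplacian: low scores indicate a blurred frame."""
    return float(cv2.Laplacian(gray, cv2.CV_64F).var())

def gate_frame(frame: np.ndarray, deblur) -> np.ndarray:
    """Restore blurred frames before they reach the vSLAM front end."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if focus_measure(gray) < BLUR_THRESHOLD:
        return deblur(frame)  # hypothetical motion-deblurring routine
    return frame
```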
2
Wu Q, Wang X, Chen B, Wu H. Patient-Active Control of a Powered Exoskeleton Targeting Upper Limb Rehabilitation Training. Front Neurol 2018; 9:817. PMID: 30364274. PMCID: PMC6193099. DOI: 10.3389/fneur.2018.00817.
Abstract
Robot-assisted therapy offers effective advantages for the rehabilitation training of patients with motor impairments. To meet the challenge of integrating the active participation of a patient into robotic training, this study presents an admittance-based patient-active control scheme for real-time, intention-driven control of a powered upper limb exoskeleton. A comprehensive overview introduces the major mechanical structure and the real-time control system of the developed therapeutic robot, which provides seven actuated degrees of freedom and achieves the natural range of human arm movement. Moreover, the dynamic characteristics of the human-exoskeleton system are studied via a Lagrangian method. The patient-active control strategy, consisting of an admittance module and a virtual environment module, regulates the robot configurations and interaction forces during rehabilitation training. An audiovisual, game-like interface is integrated into the therapeutic system to encourage the voluntary efforts of the patient and promote the neural plasticity of the brain. Further experimental investigation, involving a position tracking experiment, a free arm training experiment, and a virtual airplane-game operation experiment, is conducted with three healthy subjects and eight hemiplegic patients with different motor abilities. Experimental results validate the feasibility of the proposed scheme in providing patient-active rehabilitation training.
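The abstract names an admittance module but does not give its formulation; below is a generic single-joint admittance law as a minimal sketch, mapping measured interaction torque to a compliant motion reference through a virtual mass-damper-spring model. The gains and the explicit-Euler integration are illustrative assumptions, not the paper's implementation.

```python
class AdmittanceController:
    """Virtual model M*ddq + B*dq + K*q = tau_ext: the measured human
    interaction torque tau_ext drives a compliant joint-motion reference."""

    def __init__(self, M: float, B: float, K: float, dt: float):
        self.M, self.B, self.K, self.dt = M, B, K, dt
        self.q = 0.0   # virtual joint displacement
        self.dq = 0.0  # virtual joint velocity

    def step(self, tau_ext: float) -> float:
        """One explicit-Euler step; returns the offset added to the nominal
        joint trajectory, so the robot yields to the patient's effort."""
        ddq = (tau_ext - self.B * self.dq - self.K * self.q) / self.M
        self.dq += ddq * self.dt
        self.q += self.dq * self.dt
        return self.q

# Illustrative use: compliant response to a constant 2 Nm patient torque.
ctrl = AdmittanceController(M=1.0, B=5.0, K=20.0, dt=0.01)
offsets = [ctrl.step(2.0) for _ in range(100)]
```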
Affiliation(s)
- Qingcong Wu: College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Xingsong Wang: College of Mechanical Engineering, Southeast University, Nanjing, China
- Bai Chen: College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Hongtao Wu: College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
3
Ireta Muñoz FI, Comport AI. Point-to-hyperplane ICP: fusing different metric measurements for pose estimation. Adv Robot 2018. DOI: 10.1080/01691864.2018.1434013.
4
5
Abstract
Predicting depth from a single image is an important problem in understanding the 3-D geometry of a scene. Recently, nonparametric depth sampling (DepthTransfer) has shown great potential for solving this problem; its two key components are a Scale Invariant Feature Transform (SIFT) flow–based depth warping between the input image and its retrieved similar images, and a pixel-wise depth fusion of all warped depth maps. Beyond the inherently heavy computational load of the SIFT flow computation, even under a coarse-to-fine scheme, the fusion reliability is also low because pixel-wise descriptions have low discriminative power. This article addresses these two problems. First, a novel sparse SIFT flow algorithm is proposed that reduces the complexity from subquadratic to sublinear. Then, a reweighting technique is introduced in which the variance of the SIFT flow descriptor is computed at every pixel and used to reweight the data term in the conditional Markov random field. The proposed depth transfer method is tested on the Make3D Range Image Data and NYU Depth Dataset V2; with comparable depth estimation accuracy, it is 2-3 times faster than DepthTransfer.
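The abstract describes reweighting the data term by the per-pixel variance of the SIFT flow descriptor, but the exact weighting function is not stated. A minimal sketch of a plausible variance-weighted depth fusion, with an assumed exponential falloff:

```python
import numpy as np

def fuse_depths(warped_depths: np.ndarray, desc_vars: np.ndarray,
                sigma: float = 1.0) -> np.ndarray:
    """Weighted per-pixel fusion of K warped depth maps.

    warped_depths: (K, H, W) depths warped from the K retrieved images
    desc_vars:     (K, H, W) per-pixel SIFT descriptor variances
    High-variance (unreliable) pixels receive low weight in the data term.
    """
    w = np.exp(-desc_vars / sigma)  # assumed weighting; decays with variance
    return (w * warped_depths).sum(axis=0) / np.clip(w.sum(axis=0), 1e-9, None)
```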
6
Aqel MOA, Marhaban MH, Saripan MI, Ismail NB. Review of visual odometry: types, approaches, challenges, and applications. SpringerPlus 2016; 5:1897. PMID: 27843754. PMCID: PMC5084145. DOI: 10.1186/s40064-016-3573-7.
Abstract
Accurate localization of a vehicle is a fundamental challenge and one of the most important tasks of mobile robots. For autonomous navigation, motion tracking, and obstacle detection and avoidance, a robot must maintain knowledge of its position over time. Vision-based odometry is a robust technique utilized for this purpose. It allows a vehicle to localize itself robustly by using only a stream of images captured by a camera attached to the vehicle. This paper presents a review of state-of-the-art visual odometry (VO) and its types, approaches, applications, and challenges. VO is compared with the most common localization sensors and techniques, such as inertial navigation systems, global positioning systems, and laser sensors. Several areas for future research are also highlighted.
Affiliation(s)
- Mohammad O A Aqel: Department of Engineering, Faculty of Engineering and Information Technology, Al-Azhar University-Gaza, Gaza, Palestine
- Mohammad H Marhaban: Department of Electrical and Electronic Engineering, Faculty of Engineering, Universiti Putra Malaysia, 43400 Serdang, Selangor, Malaysia
- M Iqbal Saripan: Department of Computer and Communication Engineering, Faculty of Engineering, Universiti Putra Malaysia, 43400 Serdang, Selangor, Malaysia
- Napsiah Bt Ismail: Department of Mechanical and Manufacturing Engineering, Faculty of Engineering, Universiti Putra Malaysia, 43400 Serdang, Selangor, Malaysia
7
Qian K, Ma X, Fang F, Dai X, Zhou B. Mobile robot self-localization in unstructured environments based on observation localizability estimation with low-cost laser range-finder and RGB-D sensors. Int J Adv Robot Syst 2016. DOI: 10.1177/1729881416670902.
Abstract
When service robots work in human environments, unexpected and unknown moving people may slow the convergence of robot localization or, in crowded environments, even cause localization failure. In this article, a multisensor observation localizability estimation method is proposed and implemented to support reliable robot localization in unstructured environments with low-cost sensors. The contribution of the approach is a strategy that combines noisy laser range-finder data and RGB-D data to estimate a dynamic localizability matrix in a probabilistic framework. By aligning the two sensor frames, the unreliable portion of the laser readings that hits unexpected moving people is quickly extracted according to the output of an RGB-D-based human detector, so that the influence of those people on laser observations can be explicitly factored out. The method is easy to implement and well suited to ensuring robustness and real-time performance during long-term operation in populated environments. Comparative experiments confirm the effectiveness and reliability of the proposed method in improving localization accuracy and reliability in dynamic environments.
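The key step described, factoring out laser beams that hit detected people, can be sketched as an angular mask over the scan, assuming the laser and RGB-D frames are already aligned. The `box_to_bearing_span` helper, which projects a detection box to a bearing interval, is hypothetical:

```python
import numpy as np

def mask_dynamic_beams(ranges: np.ndarray, angles: np.ndarray,
                       person_boxes: list, box_to_bearing_span) -> np.ndarray:
    """Return a boolean mask of laser beams that do NOT hit detected people.

    ranges, angles:      (N,) laser scan in the aligned sensor frame
    person_boxes:        detections from the RGB-D human detector
    box_to_bearing_span: hypothetical projection of a box to (lo, hi) bearings
    """
    keep = np.ones_like(ranges, dtype=bool)
    for box in person_boxes:
        lo, hi = box_to_bearing_span(box)
        keep &= ~((angles >= lo) & (angles <= hi))  # drop beams on people
    return keep
```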
Affiliation(s)
- Kun Qian: School of Automation, Southeast University, Nanjing, China
- Xudong Ma: School of Automation, Southeast University, Nanjing, China
- Fang Fang: School of Automation, Southeast University, Nanjing, China
- Xianzhong Dai: School of Automation, Southeast University, Nanjing, China
- Bo Zhou: School of Automation, Southeast University, Nanjing, China
8
Kriechbaumer T, Blackburn K, Breckon TP, Hamilton O, Casado MR. Quantitative Evaluation of Stereo Visual Odometry for Autonomous Vessel Localisation in Inland Waterway Sensing Applications. Sensors 2015; 15:31869-87. PMID: 26694411. PMCID: PMC4721811. DOI: 10.3390/s151229892.
Abstract
Autonomous survey vessels can increase the efficiency and availability of wide-area river environment surveying as a tool for environmental protection and conservation. A key challenge is the accurate localisation of the vessel where bank-side vegetation or urban settlement preclude the conventional use of line-of-sight global navigation satellite systems (GNSS). In this paper, we evaluate unaided visual odometry, via an on-board stereo camera rig attached to the survey vessel, as a novel, low-cost localisation strategy. Feature-based and appearance-based visual odometry algorithms are implemented on a six-degrees-of-freedom platform operating under guided motion but with stochastic variation in yaw, pitch, and roll. Evaluation is based on a 663 m-long trajectory (>15,000 image frames) and statistical error analysis against ground-truth position from a target-tracking tachymeter integrating electronic distance and angular measurements. The position error of the feature-based technique (mean of ±0.067 m) is three times smaller than that of the appearance-based algorithm. From multi-variable statistical regression, we attribute this error to the depth of tracked features from the camera in the scene and to variations in platform yaw. Our findings inform effective strategies to enhance stereo visual localisation for the specific application of river monitoring.
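As a minimal sketch of the kind of position-error statistics described, assuming the visual odometry trajectory has already been time-aligned with the tachymeter ground truth:

```python
import numpy as np

def position_error_stats(vo_xy: np.ndarray, gt_xy: np.ndarray) -> dict:
    """Per-frame Euclidean position error of a VO trajectory against
    time-aligned ground truth; both inputs are (N, 2) arrays in metres."""
    err = np.linalg.norm(vo_xy - gt_xy, axis=1)
    return {"mean": err.mean(), "std": err.std(), "max": err.max()}
```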
Affiliation(s)
- Thomas Kriechbaumer: School of Energy, Environmental Technology and Agrifood, Cranfield University, Cranfield MK43 0AL, UK
- Kim Blackburn: School of Aerospace, Transport Systems and Manufacturing, Cranfield University, Cranfield MK43 0AL, UK
- Toby P Breckon: School of Engineering and Computing Sciences, Durham University, Durham DH1 3LE, UK
- Oliver Hamilton: School of Engineering and Computing Sciences, Durham University, Durham DH1 3LE, UK
- Monica Rivas Casado: School of Energy, Environmental Technology and Agrifood, Cranfield University, Cranfield MK43 0AL, UK