1. Boquet-Pujadas A, Olivo-Marin JC. Reformulating Optical Flow to Solve Image-Based Inverse Problems and Quantify Uncertainty. IEEE Trans Pattern Anal Mach Intell 2023; 45:6125-6141. [PMID: 36040935] [DOI: 10.1109/tpami.2022.3202855]
Abstract
From meteorology to medical imaging and cell mechanics, many scientific domains use inverse problems (IPs) to extract physical measurements from image movement. To this end, motion estimation methods such as optical flow (OF) pre-process images into motion data to feed the IP, which then inverts for the measurements through a physical model. However, this combined OF+IP pipeline exacerbates the ill-posedness inherent to each technique, propagating errors and preventing uncertainty quantification. We introduce a Bayesian PDE-constrained framework that transforms visual information directly into physical measurements in the form of probability distributions. The posterior mean is a constrained IP that tracks brightness while satisfying the physical model, thereby translating the aperture problem from the motion to the underlying physics; the posterior covariance derives measurement error from image noise. As we illustrate with traction force microscopy, our approach offers several advantages: more accurate reconstructions; unprecedented flexibility in experiment design (e.g., arbitrary boundary conditions); and the exclusivity of measurement error, central to empirical science, yet still unavailable under the OF+IP strategy.
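The idea of inverting directly from image data, rather than estimating motion first and inverting afterwards, can be illustrated with a toy linear-Gaussian sketch. Everything here (the operator `G`, the dimensions, the prior) is an illustrative assumption standing in for the paper's PDE-constrained formulation:

```python
import numpy as np

# Toy linear-Gaussian analogue: G stands in for a linearized physics +
# imaging operator mapping physical parameters to observed brightness data.
rng = np.random.default_rng(0)
n_params, n_obs = 4, 50
G = rng.normal(size=(n_obs, n_params))
p_true = np.array([1.0, -0.5, 0.3, 2.0])    # "physical measurements"
sigma = 0.1                                  # image noise level
d = G @ p_true + sigma * rng.normal(size=n_obs)  # noisy brightness data

# Gaussian posterior: the mean solves one regularized inversion, and the
# covariance propagates image noise into per-parameter measurement error.
prior_prec = 1e-3 * np.eye(n_params)
post_cov = np.linalg.inv(G.T @ G / sigma**2 + prior_prec)
post_mean = post_cov @ (G.T @ d / sigma**2)

print(post_mean)                   # close to p_true
print(np.sqrt(np.diag(post_cov)))  # uncertainty on each measurement
```

The point of the sketch is the second print: the posterior covariance is what turns image noise into a measurement error bar, which a two-step OF+IP pipeline cannot provide.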
2. Moniruzzaman MD, Rassau A, Chai D, Islam SMS. Long future frame prediction using optical flow-informed deep neural networks for enhancement of robotic teleoperation in high latency environments. J Field Robot 2022. [DOI: 10.1002/rob.22135]
Affiliation(s)
- M. D. Moniruzzaman
- School of Engineering, Edith Cowan University, Joondalup, Western Australia, Australia
- Alexander Rassau
- School of Engineering, Edith Cowan University, Joondalup, Western Australia, Australia
- Douglas Chai
- School of Engineering, Edith Cowan University, Joondalup, Western Australia, Australia
3. Kashyap HJ, Fowlkes CC, Krichmar JL. Sparse Representations for Object- and Ego-Motion Estimations in Dynamic Scenes. IEEE Trans Neural Netw Learn Syst 2021; 32:2521-2534. [PMID: 32687472] [DOI: 10.1109/tnnls.2020.3006467]
Abstract
Disentangling the sources of visual motion in a dynamic scene during self-movement or ego motion is important for autonomous navigation and tracking. In the dynamic image segments of a video frame containing independently moving objects, optic flow relative to the next frame is the sum of the motion fields generated due to camera and object motion. The traditional ego-motion estimation methods assume the scene to be static, and the recent deep learning-based methods do not separate pixel velocities into object- and ego-motion components. We propose a learning-based approach to predict both ego-motion parameters and object-motion field (OMF) from image sequences using a convolutional autoencoder while being robust to variations due to the unconstrained scene depth. This is achieved by: 1) training with continuous ego-motion constraints that allow solving for ego-motion parameters independently of depth and 2) learning a sparsely activated overcomplete ego-motion field (EMF) basis set, which eliminates the irrelevant components in both static and dynamic segments for the task of ego-motion estimation. In order to learn the EMF basis set, we propose a new differentiable sparsity penalty function that approximates the number of nonzero activations in the bottleneck layer of the autoencoder and enforces sparsity more effectively than L1- and L2-norm-based penalties. Unlike the existing direct ego-motion estimation methods, the predicted global EMF can be used to extract OMF directly by comparing it against the optic flow. Compared with the state-of-the-art baselines, the proposed model performs favorably on pixelwise object- and ego-motion estimation tasks when evaluated on real and synthetic data sets of dynamic scenes.
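A minimal sketch of the kind of differentiable sparsity penalty the abstract describes. The paper's exact function is not reproduced here; `soft_l0` and its rational form are assumptions, chosen because this surrogate approaches the count of nonzero activations while staying differentiable everywhere, unlike a hard L0 count:

```python
import numpy as np

def soft_l0(a, eps=1e-3):
    """Smooth surrogate for the number of nonzero entries of a.

    Each term a_i^2 / (a_i^2 + eps) is ~1 for |a_i| >> sqrt(eps) and ~0
    near zero, so the sum approximates the L0 "norm" as eps -> 0.
    """
    a = np.asarray(a, dtype=float)
    return np.sum(a**2 / (a**2 + eps))

acts = np.array([0.0, 0.0, 0.5, -1.2, 0.0, 0.01])
print(soft_l0(acts))   # ~2.09: the two large activations count fully,
                       # the tiny 0.01 one only fractionally
```

Unlike an L1 penalty, this surrogate does not keep shrinking large activations, which is one motivation for L0-like penalties in overcomplete bases.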
4. Young SI, Girod B, Taubman D. Fast Optical Flow Extraction from Compressed Video. IEEE Trans Image Process 2020; 29:6409-6421. [PMID: 32286986] [DOI: 10.1109/tip.2020.2985866]
Abstract
We propose the fast optical flow extractor, a filtering method that recovers artifact-free optical flow fields from HEVC-compressed video. To extract accurate optical flow fields, we form a regularized optimization problem that considers the smoothness of the solution and the pixelwise confidence weights of an artifact-ridden HEVC motion field. Solving such an optimization problem is slow, so we first convert the problem into a confidence-weighted filtering task. By leveraging the already-available HEVC motion parameters, we achieve a 100-fold speed-up in running time compared to similar methods, while producing subpixel-accurate flow estimates. The fast optical flow extractor is useful when video frames are already available in coded formats. Our method is not specific to a coder, and works with motion fields from video coders such as H.264/AVC and HEVC.
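The conversion from regularized optimization to confidence-weighted filtering can be sketched roughly as follows. This is a naive box-window filter for illustration only; the paper's actual filter, and its speed-up from reusing HEVC motion parameters, are not modeled here:

```python
import numpy as np

def confidence_filter(flow, conf, radius=2):
    """Confidence-weighted local averaging of a motion field.

    flow: (H, W, 2) decoded motion field; conf: (H, W) weights in (0, 1].
    Low-confidence (artifact-ridden) vectors contribute little to the output.
    """
    H, W, _ = flow.shape
    out = np.zeros_like(flow)
    for y in range(H):
        for x in range(W):
            y0, y1 = max(0, y - radius), min(H, y + radius + 1)
            x0, x1 = max(0, x - radius), min(W, x + radius + 1)
            w = conf[y0:y1, x0:x1]
            out[y, x] = (w[..., None] * flow[y0:y1, x0:x1]).sum((0, 1)) / w.sum()
    return out

flow = np.ones((8, 8, 2))
flow[4, 4] = [10.0, -10.0]                 # a blocky coding artifact
conf = np.ones((8, 8)); conf[4, 4] = 0.01  # flagged as unreliable
smoothed = confidence_filter(flow, conf)
print(smoothed[4, 4])   # pulled back toward (1, 1) by confident neighbours
```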
5. Moving Object Detection Using an Object Motion Reflection Model of Motion Vectors. Symmetry (Basel) 2019. [DOI: 10.3390/sym11010034]
Abstract
The moving object detection task can be solved with background subtraction if the camera is fixed. However, because the background also moves, detecting moving objects from a moving car is a difficult problem. There have been attempts to detect moving objects using LiDAR or stereo cameras, but the detection rate decreased when the car was moving. We propose a moving object detection algorithm based on an object motion reflection model of motion vectors. The proposed method first obtains a disparity map by searching for corresponding regions between stereo images. We then estimate the road by applying the v-disparity method to the disparity map. Optical flow is used to acquire the motion vectors of symmetric pixels between adjacent frames after the road has been removed. We designed a probability model of how strongly local motion is reflected in each motion vector to determine whether an object is moving. We evaluated the proposed method on two datasets and confirmed that it detects moving objects with higher accuracy than other methods.
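The v-disparity step can be sketched as follows. This is a minimal version assuming an integer disparity map; the subsequent line fit (e.g. a Hough transform) that extracts the road profile from the histogram is left out:

```python
import numpy as np

def v_disparity(disp, max_d=64):
    """Per-row disparity histogram: (H, W) int map -> (H, max_d) counts.

    A flat road produces a slanted line in this (row, disparity) image,
    which can then be fit to classify road vs. obstacle pixels.
    """
    H, W = disp.shape
    vd = np.zeros((H, max_d), dtype=int)
    for v in range(H):
        d, counts = np.unique(disp[v], return_counts=True)
        vd[v, d] = counts
    return vd

# Synthetic flat road: disparity grows linearly with image row.
H, W = 60, 80
rows = np.arange(H)
disp = np.tile((rows // 2)[:, None], (1, W))   # d = v // 2
vd = v_disparity(disp)
print(vd[40, 20])   # -> 80: every pixel in row 40 has road disparity 20
```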
6. Damon PM, Hadj-Abdelkader H, Arioui H, Youcef-Toumi K. Image-Based Lateral Position, Steering Behavior Estimation, and Road Curvature Prediction for Motorcycles. IEEE Robot Autom Lett 2018. [DOI: 10.1109/lra.2018.2831260]
7. Paolillo A, Gergondet P, Cherubini A, Vendittelli M, Kheddar A. Autonomous car driving by a humanoid robot. J Field Robot 2017. [DOI: 10.1002/rob.21731]
Affiliation(s)
- Antonio Paolillo
- CNRS-UM LIRMM, Montpellier, France
- CNRS-AIST JRL UMI3218/RL, Tsukuba, Japan
8. Fridman L, Brown DE, Angell W, Abdić I, Reimer B, Noh HY. Automated synchronization of driving data using vibration and steering events. Pattern Recognit Lett 2016. [DOI: 10.1016/j.patrec.2016.02.011]
9. Nourani-Vatani N, Borges PVK, Roberts JM, Srinivasan MV. On the Use of Optical Flow for Scene Change Detection and Description. J Intell Robot Syst 2014. [DOI: 10.1007/s10846-013-9840-8]
10. Multi-legged robot dynamics navigation model with optical flow. Int J Intell Unmanned Syst 2014. [DOI: 10.1108/ijius-04-2014-0003]
Abstract
Purpose
The purpose of this paper is to establish analytical and numerical solutions of a navigation law that estimates the displacements of hyper-static multi-legged mobile robots by combining monocular vision (optical flow of regional invariants) with leg dynamics.
Design/methodology/approach
The authors propose an Euler-Lagrange formulation that controls the legs' joints in order to control the robot's displacements. The robot's rotational and translational velocities are fed back through the motion features of visual invariant descriptors. A general analytical solution of a derivative navigation law is proposed for hyper-static robots. The feedback is formulated with the local speed rate obtained from the optical flow of visual regional invariants. The formulation includes a data association algorithm that correlates visual invariant descriptors detected in sequential images through monocular vision. The navigation law is constrained by a set of three kinematic equilibrium conditions: constant acceleration, constant velocity, and instantaneous acceleration.
Findings
The proposed data association method handles the local motions of multiple invariants (enhanced MSER) by minimizing the norm of multidimensional optical flow feature vectors. Kinematic measurements are used as observable arguments in the general dynamic control equation, while the leg-joint dynamics model is used to formulate the controllable arguments.
Originality/value
The analysis does not combine sensor data of any kind; only monocular passive vision is used. The approach automatically detects environmental invariant descriptors with an enhanced version of the MSER method. Only optical flow vectors and the robot's multi-leg dynamics are used to formulate descriptive rotational and translational motions for self-positioning.
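The data-association step, matching invariant descriptors across frames by minimizing the norm of their feature-vector difference, can be sketched as follows. The descriptors are assumed precomputed (e.g. from enhanced MSER regions), and `associate` is an illustrative name:

```python
import numpy as np

def associate(desc_a, desc_b):
    """Nearest-neighbour association between two descriptor sets.

    Returns, for each descriptor in desc_a, the index of the desc_b
    descriptor minimizing the Euclidean norm of their difference.
    """
    dists = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    return dists.argmin(axis=1)

prev = np.array([[0.0, 1.0], [5.0, 5.0], [9.0, 0.0]])
curr = np.array([[9.1, 0.2], [0.1, 1.1], [5.2, 4.9]])  # shuffled + drifted
print(associate(prev, curr))   # -> [1 2 0]
```

The matched pairs give the local flow vectors that feed the velocity feedback in the navigation law.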
12. Chen Z, Samarabandu J, Rodrigo R. Recent advances in simultaneous localization and map-building using computer vision. Adv Robot 2012. [DOI: 10.1163/156855307780132081]
Affiliation(s)
- Zhenhe Chen
- Department of Electrical and Computer Engineering, University of Western Ontario, 1151 Richmond Street North, London, Ontario N6A 5B9, Canada
- Jagath Samarabandu
- Department of Electrical and Computer Engineering, University of Western Ontario, 1151 Richmond Street North, London, Ontario N6A 5B9, Canada
- Ranga Rodrigo
- Department of Electrical and Computer Engineering, University of Western Ontario, 1151 Richmond Street North, London, Ontario N6A 5B9, Canada
13.
Abstract
This paper presents a real-time vision-based vehicle detection system employing an online boosting algorithm. It is an online AdaBoost approach for a cascade of strong classifiers instead of a single strong classifier. Most existing cascades of classifiers must be trained offline and cannot effectively be updated when online tuning is required. The idea is to develop a cascade of strong classifiers for vehicle detection that is capable of being online trained in response to changing traffic environments. To make the online algorithm tractable, the proposed system must efficiently tune parameters based on incoming images and up-to-date performance of each weak classifier. The proposed online boosting method can improve system adaptability and accuracy to deal with novel types of vehicles and unfamiliar environments, whereas existing offline methods rely much more on extensive training processes to reach comparable results and cannot further be updated online. Our approach has been successfully validated in real traffic environments by performing experiments with an onboard charge-coupled-device camera in a roadway vehicle.
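A rough sketch of how an ensemble's vote weights could be tuned online from streamed samples. This is a generic AdaBoost-style running-error update, not the paper's exact algorithm; all class and variable names are illustrative:

```python
import numpy as np

class OnlineWeightedVote:
    """Tracks each weak classifier's running error on streamed samples."""

    def __init__(self, n_weak):
        self.correct = np.ones(n_weak)   # Laplace-smoothed hit counts
        self.wrong = np.ones(n_weak)

    def update(self, predictions, label):
        hit = np.asarray(predictions) == label
        self.correct += hit
        self.wrong += ~hit

    def alphas(self):
        err = np.clip(self.wrong / (self.correct + self.wrong), 1e-6, 1 - 1e-6)
        return np.log((1 - err) / err)   # AdaBoost-style vote weights

ens = OnlineWeightedVote(3)
for label in [1, -1] * 20:                 # streamed ground-truth labels
    ens.update([label, 1, -label], label)  # weak 0 always right, 2 always wrong
print(ens.alphas())   # weak 0 gets a large positive weight, weak 2 negative
```

Updating `alphas()` on incoming frames is what lets the cascade adapt to novel vehicle types without offline retraining.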
Affiliation(s)
- Wen-Chung Chang
- Department of Electrical Engineering, National Taipei University of Technology, Taipei 106, Taiwan.
14. Bhattacharyya S, Maulik U, Dutta P. High-speed target tracking by fuzzy hostility-induced segmentation of optical flow field. Appl Soft Comput 2009. [DOI: 10.1016/j.asoc.2008.03.012]
16. Moving Object Segmentation Using Optical Flow and Depth Information. Advances in Image and Video Technology 2009. [DOI: 10.1007/978-3-540-92957-4_53]
17. Lefèvre J, Baillet S. Optical flow and advection on 2-Riemannian manifolds: a common framework. IEEE Trans Pattern Anal Mach Intell 2008; 30:1081-1092. [PMID: 18421112] [DOI: 10.1109/tpami.2008.51]
Abstract
Dynamic pattern analysis and motion extraction can be efficiently addressed using optical flow techniques. This article presents a generalization of these questions to non-flat surfaces, where optical flow is tackled through the problem of evolution processes on non-Euclidean domains. The classical equations of optical flow in the Euclidean case are transposed to the theoretical framework of differential geometry. We adopt this formulation for the regularized optical flow problem, prove its mathematical well-posedness, and combine it with the advection equation. The optical flow and advection problems are dual: a motion field may be retrieved from some scalar evolution using optical flow; conversely, a scalar field may be deduced from a velocity field using advection. These principles are illustrated with qualitative and quantitative evaluations from numerical simulations bridging both approaches. The proof of concept is further demonstrated with preliminary results from time-resolved functional brain imaging data, where organized propagations of cortical activation patterns are evidenced using our approach.
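The duality the abstract describes can be summarized by the brightness-constancy equation, read in two directions (a standard formulation; the manifold version replaces the gradient with the intrinsic Riemannian gradient on the surface $M$). Solving for the velocity field $\mathbf{v}$ given the scalar field $I$ is optical flow; solving for $I$ given $\mathbf{v}$ is advection:

```latex
% Euclidean brightness-constancy / advection equation:
\frac{\partial I}{\partial t} + \nabla I \cdot \mathbf{v} = 0
% Generalization to a 2-Riemannian manifold M, with the intrinsic gradient:
\frac{\partial I}{\partial t} + \langle \nabla_{M} I, \mathbf{v} \rangle = 0
```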
Affiliation(s)
- Julien Lefèvre
- Cognitive Neuroscience and Brain Imaging Lab, CNRS, Paris, France.
18. Kim SY, Kang JK, Oh SY, Ryu YW, Kim K, Park SC, Kim J. An Intelligent and Integrated Driver Assistance System for Increased Safety and Convenience Based on All-around Sensing. J Intell Robot Syst 2007. [DOI: 10.1007/s10846-007-9187-0]
19. Pauwels K, Lappe M, Van Hulle MM. Fixation as a Mechanism for Stabilization of Short Image Sequences. Int J Comput Vis 2007. [DOI: 10.1007/s11263-006-8893-6]
21. Sun Z, Bebis G, Miller R. On-road vehicle detection: a review. IEEE Trans Pattern Anal Mach Intell 2006; 28:694-711. [PMID: 16640257] [DOI: 10.1109/tpami.2006.104]
Abstract
Developing on-board automotive driver assistance systems that aim to alert drivers about the driving environment and possible collisions with other vehicles has attracted a lot of attention lately. In these systems, robust and reliable vehicle detection is a critical step. This paper presents a review of recent vision-based on-road vehicle detection systems. Our focus is on systems where the camera is mounted on the vehicle rather than fixed, as in traffic/driveway monitoring systems. First, we discuss the problem of on-road vehicle detection using optical sensors, followed by a brief review of intelligent vehicle research worldwide. Then, we discuss active and passive sensors to set the stage for vision-based vehicle detection. Methods that aim to quickly hypothesize the locations of vehicles in an image, as well as to verify the hypothesized locations, are reviewed next. Integrating detection with tracking is also reviewed to illustrate the benefits of exploiting temporal continuity for vehicle detection. Finally, we present a critical overview of the methods discussed, assess their potential for future deployment, and present directions for future research.
Affiliation(s)
- Zehang Sun
- eTreppid Technologies LLC, Reno, NV 89521, USA.
22.
Abstract
Muscle contraction is usually measured and characterized with force and displacement transducers. The contraction of muscle fibers, however, evokes in the tissue a two- and even three-dimensional displacement field, which is not properly quantified by these transducers because they provide just a single scalar quantity. This problem can be circumvented by using optical measurements and standard tools of computer vision developed for the analysis of time-varying image sequences. By computing the so-called optical flow, i.e., the apparent motion of points in a time-varying image sequence, it is possible to recover a two-dimensional motion field describing rather precisely the displacement caused by muscle contraction in a flattened piece of skin. The obtained two-dimensional optical flow can be further analyzed by computing its elementary deformation components, providing a novel and accurate characterization of the contraction induced by different motoneurons. The technique is demonstrated by analyzing the displacement caused by muscle contraction in the skin of the leech, Hirudo medicinalis. The proposed technique can be applied to monitor and characterize contractions in almost-flat tissues with enough visual texture.
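The decomposition into elementary deformation components can be sketched from the spatial derivatives of the recovered flow. This is the standard first-order (affine) decomposition of a 2D motion field; function and variable names are illustrative:

```python
import numpy as np

def deformation_components(u, v, spacing=1.0):
    """First-order decomposition of a 2D flow (u, v) sampled on a grid."""
    du_dy, du_dx = np.gradient(u, spacing)   # axis 0 is y, axis 1 is x
    dv_dy, dv_dx = np.gradient(v, spacing)
    div = du_dx + dv_dy          # expansion / contraction
    curl = dv_dx - du_dy         # rigid rotation
    shear1 = du_dx - dv_dy       # shear along the axes
    shear2 = du_dy + dv_dx       # shear along the diagonals
    return div, curl, shear1, shear2

# Pure contraction toward the origin: u = -x, v = -y.
y, x = np.mgrid[-5:6, -5:6].astype(float)
div, curl, s1, s2 = deformation_components(-x, -y)
print(div.mean(), curl.mean())   # -> -2.0 0.0 (uniform contraction, no rotation)
```

On real flow fields, the divergence map alone already localizes contracting tissue regions.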
Affiliation(s)
- D Zoccolan
- Scuola Internazionale Superiore di Studi Avanzati, Via Beirut 2, Trieste, Italy