1
Bauer A, Hartmann C. Spatio-Trajectorial Optical Flow for Higher-Order Deformation Analysis in Solid Experimental Mechanics. Sensors (Basel) 2023; 23:4408. PMID: 37177611; PMCID: PMC10181659; DOI: 10.3390/s23094408.
Abstract
Material models are required to solve continuum mechanical problems. These models contain parameters that are usually determined by application-specific test setups. In general, the theoretically developed models and, thus, the parameters to be determined become increasingly complex, e.g., incorporating higher-order motion derivatives such as the strain or strain rate. Therefore, the strain rate behaviour needs to be extracted from experimental data. Using image data, the most common way to do so in solid experimental mechanics is digital image correlation. Alternatively, optical flow methods, which allow adaptation to the underlying motion estimation problem, can be applied. In order to robustly estimate the strain rate fields, an optical flow approach implementing higher-order spatial and trajectorial regularisation is proposed. Compared to a purely spatial variational approach of higher order, the proposed approach calculates more accurate displacement and strain rate fields. The procedure is finally demonstrated on experimental data from a shear cutting experiment, which exhibited complex deformation patterns under difficult optical conditions.
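The quantities this abstract revolves around (displacement, strain, and strain rate fields) can be illustrated with plain finite differences. The sketch below is not the authors' regularised optical flow estimator; the synthetic linear displacement fields and the simple forward time difference are assumptions for illustration only.

```python
import numpy as np

# Synthetic displacement fields on a 64x64 grid at two time steps
# (a uniform stretch in x that grows over time).
ny, nx, dt = 64, 64, 1.0
y, x = np.mgrid[0:ny, 0:nx].astype(float)
u0, v0 = 0.010 * x, 0.005 * y      # displacements at time t
u1, v1 = 0.012 * x, 0.005 * y      # displacements at time t + dt

def small_strain(u, v):
    """Infinitesimal strain components from a displacement field (u, v)."""
    du_dy, du_dx = np.gradient(u)  # np.gradient returns (d/drow, d/dcol)
    dv_dy, dv_dx = np.gradient(v)
    return du_dx, dv_dy, 0.5 * (du_dy + dv_dx)   # exx, eyy, exy

exx0, eyy0, exy0 = small_strain(u0, v0)
exx1, eyy1, exy1 = small_strain(u1, v1)

# Strain rate approximated by a forward difference in time.
exx_rate = (exx1 - exx0) / dt
print(round(float(exx_rate.mean()), 6))   # → 0.002
```

For these exactly linear fields the finite differences are exact; with real, noisy flow estimates the derivatives amplify noise, which is precisely why the paper argues for higher-order regularisation.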
Affiliation(s)
- Anna Bauer
- Chair of Metal Forming and Casting, Technical University of Munich, Walther-Meissner-Strasse 4, 85748 Garching, Germany
- Christoph Hartmann
- Chair of Metal Forming and Casting, Technical University of Munich, Walther-Meissner-Strasse 4, 85748 Garching, Germany
2
Abstract
Computer algorithms have been developed for early vision processes that give separate cues to the distance from the viewer of three-dimensional surfaces, their shape, and their material properties. The MIT Vision Machine is a computer system that integrates several early vision modules to achieve high-performance recognition and navigation in unstructured environments. It is also an experimental environment for theoretical progress in early vision algorithms, their parallel implementation, and their integration. The Vision Machine consists of a movable, two-camera Eye-Head input device and an 8K Connection Machine. We have developed and implemented several parallel early vision algorithms that compute edge detection, stereopsis, motion, texture, and surface color in close to real time. The integration stage, based on coupled Markov random field models, leads to a cartoon-like map of the discontinuities in the scene, with partial labeling of the brightness edges in terms of their physical origin.
Affiliation(s)
- James J. Little
- Massachusetts Institute of Technology, Cambridge, Massachusetts 02139
- Tomaso Poggio
- Massachusetts Institute of Technology, Cambridge, Massachusetts 02139
- Edward B. Gamble
- Massachusetts Institute of Technology, Cambridge, Massachusetts 02139
3
4
Chen J, Zhao G, Salo M, Rahtu E, Pietikäinen M. Automatic dynamic texture segmentation using local descriptors and optical flow. IEEE Transactions on Image Processing 2013; 22:326-339. PMID: 22851258; DOI: 10.1109/tip.2012.2210234.
Abstract
A dynamic texture (DT) is an extension of texture to the temporal domain. How to segment a DT is a challenging problem. In this paper, we address the problem of segmenting a DT into disjoint regions. Regions of a DT can differ in their spatial mode (i.e., appearance) and/or their temporal mode (i.e., motion field). To this end, we develop a framework based on the appearance and motion modes. For the appearance mode, we use a new local spatial texture descriptor to describe the spatial mode of the DT; for the motion mode, we use the optical flow and a local temporal texture descriptor to represent the temporal variations of the DT. In addition, we organize the optical flow into histograms of oriented optical flow (HOOF). To compute the distance between two HOOFs, we develop a simple, effective, and efficient distance measure based on Weber's law. Furthermore, we address threshold selection by proposing a method that determines the segmentation thresholds through offline supervised statistical learning. The experimental results show that our method provides very good segmentation results compared to the state-of-the-art methods in segmenting regions that differ in their dynamics.
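The HOOF representation and a Weber-law-flavoured histogram distance can be sketched in a few lines. The binning below and the relative-difference form |h1 - h2| / (h1 + h2 + eps) are assumptions chosen to reflect Weber's law (sensitivity to relative rather than absolute change); the paper's exact measure is not reproduced here.

```python
import numpy as np

def hoof(fx, fy, bins=8):
    """Histogram of oriented optical flow: bin flow orientations,
    weight by magnitude, normalise to a unit-sum histogram."""
    ang = np.arctan2(fy, fx)                 # orientation in [-pi, pi]
    mag = np.hypot(fx, fy)
    hist, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi), weights=mag)
    total = hist.sum()
    return hist / total if total > 0 else hist

def weber_distance(h1, h2, eps=1e-8):
    """Per-bin relative difference, summed: changes are measured against
    the local magnitude, in the spirit of Weber's law."""
    return float(np.sum(np.abs(h1 - h2) / (h1 + h2 + eps)))

# Two constant flow fields with orthogonal directions fall into
# different orientation bins, giving a large distance.
h_right = hoof(np.ones((4, 4)), np.zeros((4, 4)))
h_up = hoof(np.zeros((4, 4)), np.ones((4, 4)))
print(weber_distance(h_right, h_right))   # → 0.0
print(weber_distance(h_right, h_up))      # ≈ 2
```

The relative form keeps each bin's contribution bounded by 1, so sparse histograms with non-overlapping support approach a distance equal to the number of occupied bins.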
Affiliation(s)
- Jie Chen
- Department of Computer Science and Engineering, Center for Machine Vision Research, University of Oulu, Oulu, Finland.
5
6

7

8
Feldman D, Weinshall D. Motion segmentation and depth ordering using an occlusion detector. IEEE Transactions on Pattern Analysis and Machine Intelligence 2008; 30:1171-1185. PMID: 18550901; DOI: 10.1109/tpami.2007.70766.
Abstract
We present a novel method for motion segmentation and depth ordering from a video sequence in general motion. We first compute motion segmentation based on differential properties of the spatio-temporal domain and scale-space integration. Given a motion boundary, we describe two algorithms to determine depth ordering from two- and three-frame sequences. A remarkable characteristic of our method is its ability to compute depth ordering from only two frames. The segmentation and depth ordering algorithms are shown to give good results on six real sequences taken in general motion. We use synthetic data to show robustness to high levels of noise and illumination changes; we also include cases where no intensity edge exists at the location of the motion boundary, or where no parametric motion model can describe the data. Finally, we describe human experiments showing that people, like our algorithm, can compute depth ordering from only two frames, even when the boundary between the layers is not visible in a single frame.
Affiliation(s)
- Doron Feldman
- School of Computer Science and Engineering, Hebrew University of Jerusalem, Jerusalem, Israel.
9
Sand P, Teller S. Particle Video: Long-Range Motion Estimation Using Point Trajectories. International Journal of Computer Vision 2008. DOI: 10.1007/s11263-008-0136-6.
10
Jodoin PM, Mignotte M, Rosenberger C. Segmentation framework based on label field fusion. IEEE Transactions on Image Processing 2007; 16:2535-2550. PMID: 17926935; DOI: 10.1109/tip.2007.903841.
Abstract
In this paper, we put forward a novel fusion framework that mixes together label fields instead of observation data, as is usually the case. Our framework takes as input two label fields: a quickly estimated, to-be-refined segmentation map and a spatial region map that exhibits the shape of the main objects in the scene. These two label fields are fused together with a global energy function that is minimized with a deterministic iterated conditional modes (ICM) algorithm. As explained in the paper, the energy function may implement a pure fusion strategy or a fusion-reaction function. In the latter case, a data-related term is used to make the optimization problem well posed. We believe that the conceptual simplicity, the small number of parameters, and the use of a simple and fast deterministic optimizer that admits a natural implementation on a parallel architecture are among the main advantages of our approach. Our fusion framework is adapted to various computer vision applications, among which are motion segmentation, motion estimation, and occlusion detection.
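A toy version of label-field fusion with ICM can make the idea concrete. The energy below (one unit of cost per disagreement with each input field, plus a Potts smoothness term over 4-neighbours with weight `beta`) is an assumed stand-in for the paper's energy function, not its actual form.

```python
import numpy as np

def icm_fuse(seg, region, labels, beta=1.0, iters=5):
    """Fuse a coarse segmentation `seg` with a region map `region` by
    iterated conditional modes: each pixel greedily takes the label that
    minimises disagreement with both inputs plus a Potts smoothness term."""
    out = seg.copy()
    h, w = seg.shape
    for _ in range(iters):
        for i in range(h):
            for j in range(w):
                best, best_e = out[i, j], float("inf")
                for lab in labels:
                    e = float(lab != seg[i, j]) + float(lab != region[i, j])
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < h and 0 <= nj < w:
                            e += beta * float(lab != out[ni, nj])
                    if e < best_e:
                        best, best_e = lab, e
                out[i, j] = best
    return out

# A clean two-region map corrects a spurious pixel in the coarse segmentation.
truth = np.zeros((8, 8), dtype=int)
truth[:, 4:] = 1
coarse = truth.copy()
coarse[0, 0] = 1                     # noise to be cleaned up
fused = icm_fuse(coarse, truth, labels=[0, 1])
print(int((fused != truth).sum()))   # → 0
```

ICM is deterministic and only guarantees a local minimum, which is exactly the trade-off the paper cites in favour of speed and parallelisability.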
Affiliation(s)
- Pierre-Marc Jodoin
- Département d'informatique, Université de Sherbrooke, Sherbrooke QC J1K 2R1, Canada.
11

12
Joint estimation-segmentation of optic flow. DOI: 10.1007/bfb0054765.
13

14
Detection and tracking of moving objects based on a statistical regularization method in space and time. DOI: 10.1007/bfb0014877.
15

16

17

18
Patras I, Worring M, van den Boomgaard R. Dense motion estimation using regularization constraints on local parametric models. IEEE Transactions on Image Processing 2004; 13:1432-1443. PMID: 15540453; DOI: 10.1109/tip.2004.836179.
Abstract
This paper presents a method for dense optical flow estimation in which the motion field within patches that result from an initial intensity segmentation is parametrized with models of different order. We propose a novel formulation which introduces regularization constraints between the model parameters of neighboring patches. In this way, we provide additional constraints for very small patches and for patches whose intensity variation cannot sufficiently constrain the estimation of their motion parameters. In order to preserve motion discontinuities, we use robust functions as a means of regularization. We adopt a three-frame approach and control the balance between the backward and forward constraints by a real-valued direction field on which regularization constraints are applied. An iterative deterministic relaxation method is employed to solve the corresponding optimization problem. Experimental results show that the proposed method deals successfully with motions of large magnitude and with motion discontinuities, and produces accurate piecewise-smooth motion fields.
Affiliation(s)
- Ioannis Patras
- Intelligent Sensory Information Systems Group, Computer Science Institute, University of Amsterdam, The Netherlands.
19
Stocker A. Analog VLSI Focal-Plane Array With Dynamic Connections for the Estimation of Piecewise-Smooth Optical Flow. IEEE Transactions on Circuits and Systems I 2004. DOI: 10.1109/tcsi.2004.827619.
20
Smith P, Drummond T, Cipolla R. Layered motion segmentation and depth ordering by tracking edges. IEEE Transactions on Pattern Analysis and Machine Intelligence 2004; 26:479-494. PMID: 15382652; DOI: 10.1109/tpami.2004.1265863.
Abstract
This paper presents a new Bayesian framework for motion segmentation--dividing a frame from an image sequence into layers representing different moving objects--by tracking edges between frames. Edges are found using the Canny edge detector, and the Expectation-Maximization algorithm is then used to fit motion models to these edges and also to calculate the probabilities of the edges obeying each motion model. The edges are also used to segment the image into regions of similar color. The most likely labeling for these regions is then calculated by using the edge probabilities, in association with a Markov Random Field-style prior. The identification of the relative depth ordering of the different motion layers is also determined, as an integral part of the process. An efficient implementation of this framework is presented for segmenting two motions (foreground and background) using two frames. It is then demonstrated how, by tracking the edges into further frames, the probabilities may be accumulated to provide an even more accurate and robust estimate, and segment an entire sequence. Further extensions are then presented to address the segmentation of more than two motions. Here, a hierarchical method of initializing the Expectation-Maximization algorithm is described, and it is demonstrated that the Minimum Description Length principle may be used to automatically select the best number of motion layers. The results from over 30 sequences (demonstrating both two and three motions) are presented and discussed.
Affiliation(s)
- Paul Smith
- Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, UK.
21
Mansouri AR, Konrad J. Multiple motion segmentation with level sets. IEEE Transactions on Image Processing 2003; 12:201-220. PMID: 18237901; DOI: 10.1109/tip.2002.807582.
Abstract
Segmentation of motion in an image sequence is one of the most challenging problems in image processing, and at the same time one that finds numerous applications. To date, a wealth of approaches to motion segmentation have been proposed. Many of them suffer from the local nature of the models used. Global models, such as those based on Markov random fields, perform better in general. In this paper, we propose a new approach to motion segmentation that is based on a global model. The novelty of the approach is twofold. First, inspired by recent work of other researchers, we formulate the problem as one of region competition, but we solve it using the level set methodology. The key features of a level set representation, compared to the active contours often used in this context, are its ability to handle variations in the topology of the segmentation and its numerical stability. The second novelty of the paper is the formulation in which, unlike in many other motion segmentation algorithms, we do not use intensity boundaries as an accessory; the segmentation is based purely on motion. This permits accurate estimation of the motion boundaries of an object even when its intensity boundaries are hardly visible. Since intensity boundaries may occasionally prove beneficial, we extend the formulation to account for the coincidence of motion and intensity boundaries. In addition, we generalize the approach to multiple motions. We discuss possible discretizations of the evolution (PDE) equations and give details of an initialization scheme so that the results can be duplicated. We show numerous experimental results for various formulations on natural images with either synthetic or natural motion.
22
Gibson D, Spann M. Robust optical flow estimation based on a sparse motion trajectory set. IEEE Transactions on Image Processing 2003; 12:431-445. PMID: 18237921; DOI: 10.1109/tip.2003.811628.
Abstract
This paper presents an approach to the problem of estimating a dense optical flow field. The approach is based on a multiframe, irregularly spaced motion trajectory set, where each trajectory describes the motion of a given point as a function of time. From this motion trajectory set, a dense flow field is estimated through a process of interpolation. A set of localized motion models is estimated, with each pixel labeled as belonging to one of the motion models. A Markov random field framework is adopted, allowing the incorporation of contextual constraints to encourage region-like structures. The approach is compared with a number of conventional optical flow estimation algorithms over a number of real and synthetic sequences. The results indicate that the method produces more accurate flow for sequences with known ground truth. Applying the method to real sequences with unknown flow also results in a lower displaced frame difference (DFD) for all of the sequences tested.
Affiliation(s)
- David Gibson
- Sch. of Electron. and Electr. Eng., Univ. of Birmingham, UK.
23
Kim EY, Park SH, Hwang SW, Kim HJ. Video sequence segmentation using genetic algorithms. Pattern Recognition Letters 2002. DOI: 10.1016/s0167-8655(01)00160-x.
24
Lee RST, Liu JNK. SCENOGRAM: Scene Analysis Using Composite Neural Oscillatory-Based Elastic Graph Model. International Journal of Pattern Recognition and Artificial Intelligence 2002. DOI: 10.1142/s0218001402001587.
Abstract
Scene analysis is so far one of the most important topics in machine vision. In this paper, we present an integrated scene analysis model, namely SCENOGRAM (Scene analysis using Composite Neural Oscillatory-based elastic GRAph Model). Basically, the proposed scene analyzer is based on the integration of the composite neural oscillatory model with our elastic graph dynamic link model. The system involves: (1) a multifrequency-band feature extraction scheme using Gabor filters, (2) automatic figure-ground object segmentation using a composite neural oscillatory model, and (3) object matching using an elastic graph dynamic link model. From the implementation point of view, we introduce an intelligent-agent-based scene analysis and object identification solution using the SCENOGRAM technology. From the experimental point of view, a scene gallery of over 6000 color scene images is used for automatic scene segmentation and object identification tests. An overall correct invariant facial recognition rate of over 87% is attained. It is anticipated that the implementation of the SCENOGRAM can provide an invariant and higher-order intelligent object (pattern) encoding, searching, and identification solution for future intelligent e-Business.
Affiliation(s)
- Raymond S. T. Lee
- Department of Computing, Hong Kong Polytechnic University, Hong Kong, China
- James N. K. Liu
- Department of Computing, Hong Kong Polytechnic University, Hong Kong, China
25
Li L, Leung MKH. Integrating intensity and texture differences for robust change detection. IEEE Transactions on Image Processing 2002; 11:105-112. PMID: 18244616; DOI: 10.1109/83.982818.
Abstract
We propose a novel technique for robust change detection based upon the integration of intensity and texture differences between two frames. A new, accurate texture difference measure based on the relations between gradient vectors is proposed. Mathematical analysis shows that the measure is robust with respect to noise and illumination changes. Two ways to integrate the intensity and texture differences have been developed. The first combines the two measures adaptively according to the weight of the texture evidence, while the second does so optimally with an additional smoothness constraint. The parameters of the algorithm are selected automatically based on a statistical analysis. An algorithm is developed for fast implementation. The computational complexity analysis indicates that the proposed technique can run in real time. The experimental results are evaluated both visually and quantitatively. They show that by exploiting both intensity and texture differences for change detection, one can obtain much better segmentation results than by using the intensity or structure difference alone.
26

27
Dumontier C, Luthon F, Charras JP. Real-time DSP implementation for MRF-based video motion detection. IEEE Transactions on Image Processing 1999; 8:1341-1347. PMID: 18267406; DOI: 10.1109/83.791960.
Abstract
This paper describes the real-time implementation of a simple and robust motion detection algorithm based on Markov random field (MRF) modeling. MRF-based algorithms often require a significant amount of computation. The intrinsic parallelism of MRF modeling has led most implementations toward parallel machines and neural networks, but none of these approaches offers an efficient solution for real-world (i.e., industrial) applications. Here, an alternative implementation for the problem at hand is presented, yielding a complete, efficient, and autonomous real-time system for motion detection. This system is based on a hybrid architecture associating pipeline modules with one asynchronous module to perform the whole process, from video acquisition to the visualization of moving-object masks. A board prototype is presented, and a processing rate of 15 images/s is achieved, showing the validity of the approach.
Affiliation(s)
- C Dumontier
- Signal and Image Lab., Nat. Polytech. Inst., Grenoble, France

28
Aghajan HK, Khalaj BH, Kailath T. Estimation of multiple 2-D uniform motions by SLIDE: subspace-based line detection. IEEE Transactions on Image Processing 1999; 8:517-526. PMID: 18262895; DOI: 10.1109/83.753739.
Abstract
A technique is proposed for estimating the parameters of two-dimensional (2-D) uniform motion of multiple moving objects in a scene, based on long-sequence image processing and the application of a multiline fitting algorithm. Plots of the vertical and horizontal projections versus frame number give new images in which uniformly moving objects are represented by skewed band regions, with the angles of the skew from the vertical being a measure of the velocities of the moving objects. For example, vertical bands will correspond to objects with zero velocity. An algorithm called subspace-based line detection (SLIDE) can be used to efficiently determine the skew angles. SLIDE exploits the temporal coherence between the contributions of each of the moving patterns in the frame projections to enhance and distinguish a signal subspace that is defined by the desired motion parameters. A similar procedure can be used to determine the vertical velocities. Some further steps must then be taken to properly associate the horizontal and vertical velocities.
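The projection-image construction described here is easy to reproduce numerically: a uniformly translating object traces a skewed band in the plot of column projections versus frame number, and its horizontal velocity is the band's slope. The sketch below reads that slope off with a simple centroid fit rather than the subspace (SLIDE) machinery itself; the synthetic sequence is an assumption for illustration.

```python
import numpy as np

# Synthetic sequence: a bright block translating 2 pixels/frame in x.
frames, height, width = 10, 50, 100
proj = np.zeros((frames, width))      # one column-projection per frame
for t in range(frames):
    img = np.zeros((height, width))
    x0 = 5 + 2 * t
    img[20:30, x0:x0 + 8] = 1.0
    proj[t] = img.sum(axis=0)         # horizontal projection of frame t

# Stacking projections over frames yields a band whose skew encodes the
# velocity: the band's centroid drifts linearly, and its slope in
# (frame, column) coordinates is the horizontal velocity.
centroids = [(p * np.arange(width)).sum() / p.sum() for p in proj]
vx = np.polyfit(np.arange(frames), centroids, 1)[0]
print(round(float(vx), 3))            # → 2.0
```

With several objects the centroid fit breaks down, which is where the paper's subspace-based multiline fitting takes over to separate the overlapping skewed bands.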
Affiliation(s)
- H K Aghajan
- Schlumberger Technol., San Jose, CA 95110, USA

29
Black MJ, Sapiro G, Marimont DH, Heeger D. Robust anisotropic diffusion. IEEE Transactions on Image Processing 1998; 7:421-432. PMID: 18276262; DOI: 10.1109/83.661192.
Abstract
Relations between anisotropic diffusion and robust statistics are described in this paper. Specifically, we show that anisotropic diffusion can be seen as a robust estimation procedure that estimates a piecewise smooth image from a noisy input image. The "edge-stopping" function in the anisotropic diffusion equation is closely related to the error norm and influence function in the robust estimation framework. This connection leads to a new "edge-stopping" function based on Tukey's biweight robust estimator that preserves sharper boundaries than previous formulations and improves the automatic stopping of the diffusion. The robust statistical interpretation also provides a means for detecting the boundaries (edges) between the piecewise smooth regions in an image that has been smoothed with anisotropic diffusion. Additionally, we derive a relationship between anisotropic diffusion and regularization with line processes. Adding constraints on the spatial organization of the line processes allows us to develop new anisotropic diffusion equations that result in a qualitative improvement in the continuity of edges.
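The Tukey biweight edge-stopping function can be dropped into a standard explicit diffusion loop. The four-neighbour discretisation below is the common textbook scheme, assumed here for illustration rather than taken from the paper; its key property is that the biweight cuts diffusion off completely across differences larger than the scale sigma, so strong edges survive while small-amplitude noise is smoothed away.

```python
import numpy as np

def tukey_g(x, sigma):
    """Tukey biweight edge-stopping function: (1 - (x/sigma)^2)^2 for
    |x| <= sigma, exactly zero beyond, so diffusion halts at strong edges
    (the Perona-Malik exponential, by contrast, only decays toward zero)."""
    g = (1.0 - (x / sigma) ** 2) ** 2
    return np.where(np.abs(x) <= sigma, g, 0.0)

def diffuse(img, sigma=0.3, lam=0.2, steps=20):
    """Explicit anisotropic diffusion with 4-neighbour differences
    (lam <= 0.25 keeps the explicit scheme stable)."""
    u = img.astype(float).copy()
    for _ in range(steps):
        diffs = [np.roll(u, s, axis=a) - u
                 for a in (0, 1) for s in (1, -1)]
        u += lam * sum(tukey_g(d, sigma) * d for d in diffs)
    return u

# A noisy step: noise is smoothed within each region, the step survives.
rng = np.random.default_rng(0)
step = np.zeros((16, 16))
step[:, 8:] = 1.0
noisy = step + 0.02 * rng.standard_normal(step.shape)
out = diffuse(noisy)
print(out[:, 8:].mean() - out[:, :8].mean() > 0.9,
      out[:, :8].var() < noisy[:, :8].var())   # → True True
```

Because the neighbour differences across the unit step exceed sigma, the flux there is exactly zero, which is the "improved automatic stopping" behaviour the abstract describes.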
Affiliation(s)
- M J Black
- Xerox Palo Alto Res. Center, CA 94304, USA

30
Borş AG, Pitas I. Optical flow estimation and moving object segmentation based on median radial basis function network. IEEE Transactions on Image Processing 1998; 7:693-702. PMID: 18276285; DOI: 10.1109/83.668026.
Abstract
Various approaches have been proposed for simultaneous optical flow estimation and segmentation in image sequences. In this study, the moving scene is decomposed into different regions with respect to their motion, by means of a pattern recognition scheme. The inputs of the proposed scheme are the feature vectors representing still image and motion information. Each class corresponds to a moving object. The classifier employed is the median radial basis function (MRBF) neural network. An error criterion function derived from the probability estimation theory and expressed as a function of the moving scene model is used as the cost function. Each basis function is activated by a certain image region. Marginal median and median of the absolute deviations from the median (MAD) estimators are employed for estimating the basis function parameters. The image regions associated with the basis functions are merged by the output units in order to identify moving objects.
Affiliation(s)
- A G Borş
- Department of Informatics, University of Thessaloniki, Thessaloniki, Greece.

31

32
Stiller C. Object-based estimation of dense motion fields. IEEE Transactions on Image Processing 1997; 6:234-250. PMID: 18282920; DOI: 10.1109/83.551695.
Abstract
Motion estimation is one of the key techniques in image sequence processing. Segmentation of motion fields such that, ideally, each independently moving object uniquely corresponds to one region is one of the essential elements of object-based image processing. This paper is concerned with unsupervised simultaneous estimation of dense motion fields and their segmentations. It is based on a stochastic model relating image intensities to motion information. Based on the analysis of natural images, a region-based model of the motion-compensated prediction error is proposed. In each region the error is modeled by a white, stationary, generalized Gaussian random process. The motion field and its segmentation are themselves modeled by a compound Gibbs/Markov random field accounting for statistical bindings in the spatial direction and along the direction of motion trajectories. The a posteriori distribution of the motion field for a given image sequence is formulated as an objective function, such that its maximization results in the MAP estimate. A deterministic multiscale relaxation technique with regular structure is employed for optimization of the objective function. Simulation results are in good agreement with human perception for both the motion fields and their segmentations.
Affiliation(s)
- C Stiller
- Corp. Res. and Dev. Robert Bosch GmbH, Hildesheim

33
Chang MM, Tekalp AM, Sezan MI. Simultaneous motion estimation and segmentation. IEEE Transactions on Image Processing 1997; 6:1326-1333. PMID: 18283022; DOI: 10.1109/83.623196.
Abstract
We present a Bayesian framework that combines motion (optical flow) estimation and segmentation based on a representation of the motion field as the sum of a parametric field and a residual field. The parameters describing the parametric component are found by a least squares procedure given the best estimates of the motion and segmentation fields. The motion field is updated by estimating the minimum-norm residual field given the best estimate of the parametric field, under the constraint that the motion field be smooth within each segment. The segmentation field is updated to yield the minimum-norm residual field given the best estimate of the motion field, using Gibbsian priors. The solutions to the successive optimization problems are obtained using the highest confidence first (HCF) or iterated conditional modes (ICM) optimization methods. Experimental results on real video are shown.
Affiliation(s)
- M M Chang
- Dept. of Electr. Eng., Rochester Univ., NY

34
Computation and analysis of image motion: A synopsis of current problems and methods. International Journal of Computer Vision 1996. DOI: 10.1007/bf00131147.
35
Black MJ, Rangarajan A. On the unification of line processes, outlier rejection, and robust statistics with applications in early vision. International Journal of Computer Vision 1996. DOI: 10.1007/bf00131148.
36
A Variational Approach to the Design of Early Vision Algorithms. 1996. DOI: 10.1007/978-3-7091-6586-7_9.
37
Abstract
Psychometric functions for motion detection were measured for various spatial velocity profiles made of independently moving lines of random dots. In the first experiment, sensitivity was greater for square-wave velocity profiles than for sine waves of the same fundamental spatial frequency. Sensitivity for square waves depended on the phase of the waveform with respect to the fixation point, which precludes a characterization of the processes underlying the detection of shearing motion as a translation-invariant system. The second experiment, using velocity fields created by spatial superposition of sine waves, showed that motion boundaries facilitate detection of motion because of the steepness of the velocity gradient, and not simply because of added power at higher harmonics. In the third experiment, fluted velocity waveforms were created by subtracting the fundamental sinusoidal component from square waves, retaining sharp motion boundaries between opposing directions but removing the regions of uniform motion. Subtracting the fundamental from low-frequency square waves did not lower sensitivity to motion, indicating that sensitivity was largely determined by the presence of motion boundaries. In the final section of this article, a model is presented that can account for the data by using linear center-surround velocity mechanisms whose sizes increase with eccentricity while their sensitivity for shearing motion decreases.
Affiliation(s)
- W L Sachtler
- Department of Psychology, Columbia University, New York 10027, USA
|
38
|
Schnörr C, Sprengel R. A nonlinear regularization approach to early vision. BIOLOGICAL CYBERNETICS 1994; 72:141-149. [PMID: 7880918 DOI: 10.1007/bf00205978] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/22/2023]
Abstract
We propose a new class of approaches to smooth visual data while preserving significant transitions of these data as clues for segmentation. Formally, the given visual data are represented as a noisy (image) function g, and we present a class of continuously formulated global minimization problems to smooth g. The resulting function u can be characterized as the minimizer of a specific nonquadratic functional or, equivalently, as the result of an associated nonlinear diffusion process. Our approach generalizes the well-known quadratic regularization principle while retaining its attractive properties: For any given g, the solution u to the proposed minimization problem is unique and depends continuously on the data g. Furthermore, convergence of approximate solutions obtained by finite element discretization holds true. We show that the nodal variables of any chosen finite element subspace can be interpreted as computational units whose activation dynamics due to the nonlinear smoothing process evolve like a globally asymptotically stable network. A corresponding analogue implementation is thus feasible and would provide a real time processing stage for the transition preserving smoothing of visual data. Using artificial as well as real data we illustrate our approach by numerical examples. We demonstrate that solutions to our approach improve those obtained by quadratic minimization and show the influence of global parameters which allow for a continuous, scale-dependent, and selective control of the smoothing process.
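The diffusion view in this abstract can be illustrated with a small numerical sketch. The authors' formulation is a finite-element minimization of a specific nonquadratic functional; the code below instead uses a Perona-Malik-style edge-stopping diffusivity, which belongs to the same family of transition-preserving smoothers but is not the authors' exact functional (all parameter values are illustrative):

```python
import numpy as np

def nonlinear_diffusion(g, iters=50, kappa=0.1, dt=0.2):
    """Transition-preserving smoothing of a 1D signal g.

    The diffusivity c(s) = 1 / (1 + (s/kappa)^2) suppresses smoothing
    across large gradients (significant transitions) while averaging
    out noise in homogeneous regions. Explicit-in-time scheme with
    reflecting boundaries.
    """
    u = g.astype(float).copy()
    for _ in range(iters):
        d_plus = np.diff(u, append=u[-1])    # u[i+1] - u[i], 0 at right edge
        d_minus = np.diff(u, prepend=u[0])   # u[i] - u[i-1], 0 at left edge
        c_plus = 1.0 / (1.0 + (d_plus / kappa) ** 2)
        c_minus = 1.0 / (1.0 + (d_minus / kappa) ** 2)
        u += dt * (c_plus * d_plus - c_minus * d_minus)
    return u

# Noisy step edge: the noise is smoothed, the transition is preserved.
rng = np.random.default_rng(0)
g = np.concatenate([np.zeros(50), np.ones(50)]) + 0.05 * rng.standard_normal(100)
u = nonlinear_diffusion(g)
```

The contrast with quadratic regularization is that a quadratic smoothness term would apply a constant diffusivity c = 1 everywhere, blurring the step along with the noise.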
Affiliation(s)
- C Schnörr
- Universität Hamburg, FB Informatik, AB Kognitive Systeme, Germany
|
39
|
|
40
|
Luettgen MR, Karl WC, Willsky AS. Efficient multiscale regularization with applications to the computation of optical flow. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 1994; 3:41-64. [PMID: 18291908 DOI: 10.1109/83.265979] [Citation(s) in RCA: 17] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
A new approach to regularization methods for image processing is introduced and developed using as a vehicle the problem of computing dense optical flow fields in an image sequence. The solution of the new problem formulation is computed with an efficient multiscale algorithm. Experiments on several image sequences demonstrate the substantial computational savings that can be achieved due to the fact that the algorithm is noniterative and in fact has a per pixel computational complexity that is independent of image size. The new approach also has a number of other important advantages. Specifically, multiresolution flow field estimates are available, allowing great flexibility in dealing with the tradeoff between resolution and accuracy. Multiscale error covariance information is also available, which is of considerable use in assessing the accuracy of the estimates. In particular, these error statistics can be used as the basis for a rational procedure for determining the spatially-varying optimal reconstruction resolution. Furthermore, if there are compelling reasons to insist upon a standard smoothness constraint, the new algorithm provides an excellent initialization for the iterative algorithms associated with the smoothness constraint problem formulation. Finally, the usefulness of the approach should extend to a wide variety of ill-posed inverse problems in which variational techniques seeking a "smooth" solution are generally used.
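For contrast with the noniterative multiscale estimator, the standard smoothness-constraint formulation mentioned at the end of this abstract can be sketched as a Horn-Schunck iteration; this is the iterative baseline that the multiscale algorithm can initialize, not the paper's algorithm, and all parameter values here are illustrative:

```python
import numpy as np

def local_avg(f):
    """Four-neighbour average with periodic (wrap-around) boundaries."""
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0)
            + np.roll(f, 1, 1) + np.roll(f, -1, 1)) / 4

def horn_schunck(I1, I2, alpha=1.0, iters=100):
    """Dense optical flow under the quadratic smoothness constraint.

    Minimizes the data term (Ix*u + Iy*v + It)^2 plus alpha^2 times the
    squared flow gradients, via the classical Jacobi-style iteration.
    """
    Ix = (np.gradient(I1, axis=1) + np.gradient(I2, axis=1)) / 2
    Iy = (np.gradient(I1, axis=0) + np.gradient(I2, axis=0)) / 2
    It = I2 - I1
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    for _ in range(iters):
        u_bar, v_bar = local_avg(u), local_avg(v)
        num = Ix * u_bar + Iy * v_bar + It
        den = alpha ** 2 + Ix ** 2 + Iy ** 2
        u = u_bar - Ix * num / den
        v = v_bar - Iy * num / den
    return u, v

# Synthetic pair: a smooth periodic pattern translated one pixel in x,
# so the true flow is approximately (u, v) = (1, 0) everywhere.
x, y = np.meshgrid(np.arange(64), np.arange(64))
I1 = np.sin(2 * np.pi * x / 16) * np.sin(2 * np.pi * y / 16)
I2 = np.roll(I1, 1, axis=1)
u, v = horn_schunck(I1, I2, alpha=0.5, iters=200)
```

The per-pixel cost here grows with the iteration count needed for convergence, which is exactly the expense the noniterative multiscale formulation avoids.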
|
41
|
Bouthemy P, Francois E. Motion segmentation and qualitative dynamic scene analysis from an image sequence. Int J Comput Vis 1993. [DOI: 10.1007/bf01420735] [Citation(s) in RCA: 139] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
|
42
|
|
43
|
Computation of discontinuous optical flow by domain decomposition and shape optimization. Int J Comput Vis 1992. [DOI: 10.1007/bf00127172] [Citation(s) in RCA: 13] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
|
44
|
|
45
|
|
46
|
Gamble E, Geiger D, Poggio T, Weinshall D. Integration of vision modules and labeling of surface discontinuities. ACTA ACUST UNITED AC 1989. [DOI: 10.1109/21.44072] [Citation(s) in RCA: 25] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
|
47
|
Waxman AM, Kamgar-Parsi B, Subbarao M. Closed-form solutions to image flow equations for 3D structure and motion. Int J Comput Vis 1988. [DOI: 10.1007/bf00127823] [Citation(s) in RCA: 24] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
|