1
Zhao B, Zhang K, Liu P, Chen Y. Large-scale time-lapse scanning electron microscopy image mosaic using a smooth stitching strategy. Microsc Res Tech 2023. [PMID: 37119500] [DOI: 10.1002/jemt.24334]
Abstract
Due to the trade-off between the field of view and the resolution of microscopes, obtaining a wide-view panoramic image from high-resolution image tiles is a frequent requirement in numerous applications. Here, we propose an automatic image mosaic strategy for sequential 2D time-lapse scanning electron microscopy (SEM) images. This method can accurately compute pairwise translations among serial image tiles with indeterminate overlapping areas. The detection and matching of feature points are constrained by the tiles' spatial coordinates, thus avoiding accidental mismatches. Moreover, the nonlinear deformation of the mosaicked parts is also taken into account: a smooth stitching field gradually transforms the perspective transformation in overlapping regions into the linear transformation in non-overlapping regions. Experimental results demonstrate better stitching accuracy than several other image mosaic algorithms. Such a method has potential applications in high-resolution large-area analysis using serial microscopy images. RESEARCH HIGHLIGHTS: An automatic image mosaic strategy for processing sequential scanning electron microscopy images is proposed. A smooth stitching field is applied in the image mosaic. Improved stitching accuracy is achieved compared with other conventional mosaic methods.
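As an implementation aside, the pairwise-translation step between overlapping tiles can be illustrated with FFT-based phase correlation, a standard tile-registration technique (a generic sketch on synthetic data, not the authors' feature-point method; `phase_correlation_shift` is a made-up helper name):

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer (dy, dx) translation with b ~ roll(a, (dy, dx))
    from the peak of the normalized cross-power spectrum."""
    cross = np.fft.fft2(b) * np.conj(np.fft.fft2(a))
    cross /= np.abs(cross) + 1e-12            # keep only the phase
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # shifts beyond half the image size wrap around to negative values
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(0)
tile = rng.random((64, 64))
shifted = np.roll(tile, shift=(3, 5), axis=(0, 1))  # known translation
print(phase_correlation_shift(tile, shifted))        # (3, 5)
```

Phase correlation needs no feature detection, which is why it is a common baseline when the overlap between tiles is uncertain.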
Affiliation(s)
- Binglu Zhao
- Department of Precision Machinery and Precision Instrumentation, University of Science and Technology of China, Hefei, China
- Key Laboratory of Precision Scientific Instrumentation of Anhui Higher Education Institutes, University of Science and Technology of China, Hefei, China
- Kaidi Zhang
- Department of Precision Machinery and Precision Instrumentation, University of Science and Technology of China, Hefei, China
- Key Laboratory of Precision Scientific Instrumentation of Anhui Higher Education Institutes, University of Science and Technology of China, Hefei, China
- Peng Liu
- Department of Precision Machinery and Precision Instrumentation, University of Science and Technology of China, Hefei, China
- Key Laboratory of Precision Scientific Instrumentation of Anhui Higher Education Institutes, University of Science and Technology of China, Hefei, China
- Yuhang Chen
- Department of Precision Machinery and Precision Instrumentation, University of Science and Technology of China, Hefei, China
- Key Laboratory of Precision Scientific Instrumentation of Anhui Higher Education Institutes, University of Science and Technology of China, Hefei, China
2
Aganj I, Fischl B. Intermediate Deformable Image Registration via Windowed Cross-Correlation. Proc IEEE Int Symp Biomed Imaging 2023. [PMID: 37691967] [PMCID: PMC10485808] [DOI: 10.1109/isbi53787.2023.10230715]
Abstract
In population and longitudinal imaging studies that employ deformable image registration, more accurate results can be achieved by initializing deformable registration with the results of affine registration where global misalignments have been considerably reduced. Such affine registration, however, is limited to linear transformations and it cannot account for large nonlinear anatomical variations, such as those between pre- and post-operative images or across different subject anatomies. In this work, we introduce a new intermediate deformable image registration (IDIR) technique that recovers large deformations via windowed cross-correlation, and provide an efficient implementation based on the fast Fourier transform. We evaluate our method on 2D X-ray and 3D magnetic resonance images, demonstrating its ability to align substantial nonlinear anatomical variations within a few iterations.
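The windowed cross-correlation idea can be sketched in one dimension: correlate a window of the fixed image against shifted windows of the moving image and keep the best-scoring shift (an illustrative brute-force toy, not the paper's FFT-based 2D/3D implementation; all names and numbers are invented):

```python
import numpy as np

def windowed_xcorr_displacement(fixed, moving, center, half_window, search):
    """Estimate the 1D displacement at `center` by cross-correlating a
    zero-mean window of `fixed` against shifted windows of `moving`."""
    w = fixed[center - half_window:center + half_window + 1]
    w = w - w.mean()
    best_shift, best_score = 0, float("-inf")
    for s in range(-search, search + 1):
        m = moving[center + s - half_window:center + s + half_window + 1]
        m = m - m.mean()
        score = float(np.dot(w, m))          # unnormalized correlation
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift

x = np.linspace(0, 4 * np.pi, 200)
fixed = np.sin(x)
moving = np.roll(fixed, 7)                   # fixed displaced by +7 samples
print(windowed_xcorr_displacement(fixed, moving, center=100, half_window=15, search=12))  # 7
```

Evaluating the correlation over all window shifts at once is exactly what the paper's FFT formulation accelerates.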
Affiliation(s)
- Iman Aganj
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School
- Bruce Fischl
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School
3
Moniruzzaman MD, Rassau A, Chai D, Islam SMS. Long future frame prediction using optical flow-informed deep neural networks for enhancement of robotic teleoperation in high latency environments. J Field Robot 2022. [DOI: 10.1002/rob.22135]
Affiliation(s)
- M. D. Moniruzzaman
- School of Engineering, Edith Cowan University, Joondalup, Western Australia, Australia
- Alexander Rassau
- School of Engineering, Edith Cowan University, Joondalup, Western Australia, Australia
- Douglas Chai
- School of Engineering, Edith Cowan University, Joondalup, Western Australia, Australia
4
Motion estimation for large displacements and deformations. Sci Rep 2022; 12:19721. [PMID: 36385172] [PMCID: PMC9668979] [DOI: 10.1038/s41598-022-21987-7]
Abstract
Large displacement optical flow is an integral part of many computer vision tasks. Variational optical flow techniques based on a coarse-to-fine scheme interpolate sparse matches and locally optimize an energy model conditioned on colour, gradient and smoothness, making them sensitive to noise in the sparse matches, deformations, and arbitrarily large displacements. This paper addresses this problem and presents HybridFlow, a variational motion estimation framework for large displacements and deformations. A multi-scale hybrid matching approach is performed on the image pairs. Coarse-scale clusters formed by classifying pixels according to their feature descriptors are matched using the clusters' context descriptors. We apply a multi-scale graph matching on the finer-scale superpixels contained within each matched pair of coarse-scale clusters. Small clusters that cannot be further subdivided are matched using localized feature matching. Together, these initial matches form the flow, which is propagated by an edge-preserving interpolation and variational refinement. Our approach does not require training and is robust to substantial displacements and rigid and non-rigid transformations due to motion in the scene, making it ideal for large-scale imagery such as aerial imagery. More notably, HybridFlow works on directed graphs of arbitrary topology representing perceptual groups, which improves motion estimation in the presence of significant deformations. We demonstrate HybridFlow's superior performance to state-of-the-art variational techniques on two benchmark datasets and report comparable results with state-of-the-art deep-learning-based techniques.
5
de Jong DB, Paredes-Valles F, de Croon GCHE. How Do Neural Networks Estimate Optical Flow? A Neuropsychology-Inspired Study. IEEE Trans Pattern Anal Mach Intell 2022; 44:8290-8305. [PMID: 34033535] [DOI: 10.1109/tpami.2021.3083538]
Abstract
End-to-end trained convolutional neural networks have led to a breakthrough in optical flow estimation. The most recent advances focus on improving the optical flow estimation by improving the architecture and setting a new benchmark on the publicly available MPI-Sintel dataset. Instead, in this article, we investigate how deep neural networks estimate optical flow. A better understanding of how these networks function is important for (i) assessing their generalization capabilities to unseen inputs, and (ii) suggesting changes to improve their performance. For our investigation, we focus on FlowNetS, as it is the prototype of an encoder-decoder neural network for optical flow estimation. Furthermore, we use a filter identification method that has played a major role in uncovering the motion filters present in animal brains in neuropsychological research. The method shows that the filters in the deepest layer of FlowNetS are sensitive to a variety of motion patterns. Not only do we find translation filters, as demonstrated in animal brains, but thanks to the easier measurements in artificial neural networks, we even unveil dilation, rotation, and occlusion filters. Furthermore, we find similarities between the refinement part of the network and the perceptual filling-in process which occurs in the mammalian primary visual cortex.
6
Salehi A, Balasubramanian M. DDCNet-Multires: Effective Receptive Field Guided Multiresolution CNN for Dense Prediction. Neural Process Lett 2022. [DOI: 10.1007/s11063-022-11039-6]
7
Adaptive Superpixel-Based Disparity Estimation Algorithm Using Plane Information and Disparity Refining Mechanism in Stereo Matching. Symmetry (Basel) 2022. [DOI: 10.3390/sym14051005]
Abstract
The motivation of this paper is to address the limitations of conventional keypoint-based disparity estimation methods. Conventionally, disparity estimation is usually based on the local information of keypoints. However, keypoints may be distributed sparsely in smooth regions, and keypoints with the same descriptors may appear in a symmetric pattern. Therefore, conventional keypoint-based disparity estimation methods may have limited performance in smooth and symmetric regions. The proposed algorithm is superpixel-based. Instead of performing keypoint matching, both keypoint and semi-global information are applied to determine the disparity. Since the local information of keypoints and the semi-global information of the superpixel are both applied, the accuracy of disparity estimation can be improved, especially for smooth and symmetric regions. Moreover, to address the non-uniform distribution problem of keypoints, a disparity refining mechanism based on the similarity and the distance of neighboring superpixels is applied to correct the disparity of superpixels with no or few keypoints. The experiments show that the disparity map generated by the proposed algorithm has a lower matching error rate than those generated by other methods.
8
Xu J, Tao M, Zhang S, Jiang X, Tan J. Non-rigid registration of biomedical image for radiotherapy based on adaptive feature density flow. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102691]
9
Deng Y, Xiao J, Zhou SZ, Feng J. Detail Preserving Coarse-to-Fine Matching for Stereo Matching and Optical Flow. IEEE Trans Image Process 2021; 30:5835-5847. [PMID: 34138709] [DOI: 10.1109/tip.2021.3088635]
Abstract
The Coarse-To-Fine (CTF) matching scheme has been widely applied to reduce computational complexity and matching ambiguity in stereo matching and optical flow tasks by converting image pairs into multi-scale representations and performing matching from coarse to fine levels. Despite its efficiency, it suffers from several weaknesses, such as tending to blur the edges and miss small structures like thin bars and holes. We find that the pixels of small structures and edges are often assigned with wrong disparity/flow in the upsampling process of the CTF framework, introducing errors to the fine levels and leading to such weaknesses. We observe that these wrong disparity/flow values can be avoided if we select the best-matched value among their neighborhood, which inspires us to propose a novel differentiable Neighbor-Search Upsampling (NSU) module. The NSU module first estimates the matching scores and then selects the best-matched disparity/flow for each pixel from its neighbors. It effectively preserves finer structure details by exploiting the information from the finer level while upsampling the disparity/flow. The proposed module can be a drop-in replacement of the naive upsampling in the CTF matching framework and allows the neural networks to be trained end-to-end. By integrating the proposed NSU module into a baseline CTF matching network, we design our Detail Preserving Coarse-To-Fine (DPCTF) matching network. Comprehensive experiments demonstrate that our DPCTF can boost performances for both stereo matching and optical flow tasks. Notably, our DPCTF achieves new state-of-the-art performances for both tasks - it outperforms the competitive baseline (Bi3D) by 28.8% (from 0.73 to 0.52) on EPE of the FlyingThings3D stereo dataset, and ranks first in KITTI flow 2012 benchmark. The code is available at https://github.com/Deng-Y/DPCTF.
10
Alphonse P, Sriharsha K. Vision based distance estimation from single RGB camera using field of view and magnification measurements – an AI based non triangulation technique for person distance estimation in surveillance areas. J Intell Fuzzy Syst 2021. [DOI: 10.3233/jifs-189583]
Abstract
Depth data from conventional cameras in surveillance settings provides a thorough assessment of human behavior. In this context, the depth of each viewpoint must ordinarily be calculated using binocular stereo, which requires two cameras to retrieve 3D data. In networked surveillance environments, this consumes excess energy and requires extra infrastructure. We introduce a new computational photographic technique for depth estimation using a single camera, based on the ideas of perspective projection and the lens magnification property. The person-to-camera distance (or depth) is obtained from knowledge of the focal length, field of view, and magnification characteristics. Prior to finding distance, the person's real height is estimated using human body anthropometrics. These metrics are given as inputs to a gradient-boosting machine learning algorithm for estimating real height. Magnification and field-of-view measurements are then extracted for each sample. The depth (or distance) is predicted on the basis of the geometrical relationship between the field of view, the magnification, and the camera-to-object distance. Using physical distance and height measurements taken in real time as ground truth, experimental validation shows that within the 3-7 m range, both in indoor and outdoor environments, the camera-to-person distance (Preddist) predicted from field of view and magnification is 91% correlated with actual depth at a 95% confidence level, with an RMSE of 0.579.
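The geometric core of this non-triangulation approach is the pinhole relation between a known real height, the apparent pixel height, and the field of view (a sketch of the geometry only — the paper additionally estimates real height with gradient boosting; the function name and all numbers below are hypothetical):

```python
import math

def person_distance_m(real_height_m, pixel_height, image_height_px, vertical_fov_deg):
    """Pinhole-camera distance estimate: the focal length in pixels follows
    from the vertical field of view, and similar triangles give the distance
    from the known real height and the measured pixel height."""
    f_px = (image_height_px / 2) / math.tan(math.radians(vertical_fov_deg) / 2)
    return real_height_m * f_px / pixel_height

# a 1.7 m tall person imaged 400 px tall by a 1080 px sensor with a 45-degree vertical FOV
print(round(person_distance_m(1.7, 400, 1080, 45.0), 2))  # 5.54 (meters)
```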
Affiliation(s)
- P.J.A. Alphonse
- Department of Computer Applications, NIT Trichy, Tamil Nadu, India
- K.V. Sriharsha
- Research Scholar, Department of Computer Applications, NIT Trichy, Tamil Nadu, India
11
Juřík M, Šmídl V, Mach F. Trade-off between resolution and frame rate of visual tracking of mini-robots on an experimental planar platform. J Micro-Bio Robot 2020. [DOI: 10.1007/s12213-020-00134-3]
12
3D Hermite Transform Optical Flow Estimation in Left Ventricle CT Sequences. Sensors (Basel) 2020; 20:595. [PMID: 31973153] [PMCID: PMC7038175] [DOI: 10.3390/s20030595]
Abstract
Heart diseases are the most important causes of death in the world and, over the years, the study of cardiac movement has been carried out mainly in two dimensions; however, it is important to consider that the deformations due to the movement of the heart occur in a three-dimensional space. The 3D + t analysis allows describing most of the motions of the heart, for example, the twisting motion that takes place on every beat cycle, which allows identifying abnormalities of the heart walls. Therefore, it is necessary to develop algorithms that help specialists understand cardiac movement. In this work, we developed a new approach to determine the cardiac movement in three dimensions using a differential optical flow approach in which we use the steered Hermite transform (SHT), which allows us to decompose cardiac volumes, taking advantage of it as a model of the human vision system (HVS). Our proposal was tested on complete cardiac computed tomography (CT) volumes (3D + t), as well as on their respective left ventricular segmentations. The robustness to noise was tested with good results. The evaluation of the results was carried out through errors in forward reconstruction, from the volume at time t to time t + 1, using the optical flow obtained (interpolation errors). The parameters were tuned extensively. In the case of the 2D algorithm, the interpolation errors and normalized interpolation errors are very close and below the values reported in ground-truth flows. In the case of the 3D algorithm, the results were compared with another similar 3D method and the interpolation errors remained below 0.1. These interpolation errors for complete cardiac volumes and the left ventricle are shown graphically for clarity. Finally, a series of graphs shows the characteristic contraction and dilation of the left ventricle through the representation of the 3D optical flow.
13
Gupta S, Mukherjee P, Chaudhury S, Lall B. U-RME: Underwater Refined Motion Estimation in Hazy, Cluttered and Dynamic Environments. Commun Comput Inf Sci 2020:198-208. [DOI: 10.1007/978-981-15-8697-2_18]
14
Anthwal S, Ganotra D. An overview of optical flow-based approaches for motion segmentation. The Imaging Science Journal 2019. [DOI: 10.1080/13682199.2019.1641316]
Affiliation(s)
- Shivangi Anthwal
- Department of Applied Science and Humanities, Indira Gandhi Delhi Technical University for Women, Delhi, India
- Dinesh Ganotra
- Department of Applied Science and Humanities, Indira Gandhi Delhi Technical University for Women, Delhi, India
15
Castillo E. Quadratic penalty method for intensity-based deformable image registration and 4DCT lung motion recovery. Med Phys 2019; 46:2194-2203. [PMID: 30801729] [DOI: 10.1002/mp.13457]
Abstract
Intensity-based deformable image registration (DIR) requires minimizing an image dissimilarity metric. Imaged anatomy, such as bones and vasculature, as well as the resolution of the digital grid, can often cause discontinuities in the corresponding objective function. Consequently, the application of a gradient-based optimization algorithm requires a preprocessing image smoothing to ensure the existence of the necessary image derivatives. Simple block matching (exhaustive search) methods do not require image derivative approximations, but their general effectiveness is often hindered by erroneous solutions (outliers). Block match methods are therefore often coupled with a statistical outlier detection method to improve results. PURPOSE: The purpose of this work is to present a spatially accurate, intensity-based DIR optimization formulation that can be solved with a straightforward gradient-free quadratic penalty algorithm and is suitable for 4D thoracic computed tomography (4DCT) registration. Additionally, a novel regularization strategy based on the well-known leave-one-out robust statistical cross-validation method is introduced. METHODS: The proposed Quadratic Penalty DIR (QPDIR) method minimizes both an image dissimilarity term, which is separable with respect to individual voxel displacements, and a regularization term derived from the classical leave-one-out cross-validation statistical method. The resulting DIR problem lends itself to a quadratic penalty function optimization approach, where each subproblem can be solved by straightforward block coordinate descent iteration. RESULTS: The spatial accuracy of the method was assessed using expert-determined landmarks on ten 4DCT datasets available on www.dir-lab.com. The QPDIR algorithm achieved average spatial errors (in mm) between 0.69 (0.91) and 1.19 (1.26) on the ten test cases. On all ten 4DCT test cases, the QPDIR method produced spatial accuracies that are superior or equivalent to those produced by current state-of-the-art methods. Moreover, QPDIR achieved accuracies at the resolution of the landmark error assessment (i.e., the interobserver error) on six of the ten cases. CONCLUSION: The QPDIR algorithm is based on a simple quadratic penalty function formulation and a regularization term inspired by leave-one-out cross-validation. The formulation lends itself to a parallelizable, gradient-free, block coordinate descent numerical optimization method. Numerical results indicate that the method achieves high spatial accuracy on 4DCT inhale/exhale phases.
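The optimization pattern described here — a quadratic penalty minimized by gradient-free block coordinate descent under an increasing penalty weight — can be seen on a two-variable toy problem (illustrative only; this is not the registration objective, and the function name is invented):

```python
def quadratic_penalty_bcd(mu_values, sweeps=200):
    """Minimize (x1-1)^2 + (x2-3)^2 subject to x1 == x2 by adding the
    quadratic penalty (mu/2)*(x1-x2)^2 and alternating exact block
    coordinate-descent updates while the penalty weight mu increases."""
    x1 = x2 = 0.0
    for mu in mu_values:
        for _ in range(sweeps):
            x1 = (2 + mu * x2) / (2 + mu)  # exact minimizer over x1, x2 fixed
            x2 = (6 + mu * x1) / (2 + mu)  # exact minimizer over x2, x1 fixed
    return x1, x2

x1, x2 = quadratic_penalty_bcd([1.0, 10.0, 100.0])
print(round(x1, 2), round(x2, 2))  # both approach the constrained optimum x = 2
```

Each block update has a closed form and needs no image-derivative information, which mirrors why the penalty formulation admits a gradient-free, parallelizable solver.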
Affiliation(s)
- Edward Castillo
- Department of Radiation Oncology, Beaumont Health Systems, Royal Oak, MI, USA; Department of Computational and Applied Mathematics, Rice University, Houston, TX, USA
16
Esfandiari H, Lichti D, Anglin C. Single-camera visual odometry to track a surgical X-ray C-arm base. Proc Inst Mech Eng H 2017; 231:1140-1151. [PMID: 29039259] [DOI: 10.1177/0954411917735556]
Abstract
This study provides a framework for a single-camera odometry system for localizing a surgical C-arm base. An application-specific monocular visual odometry system (a downward-looking consumer-grade camera rigidly attached to the C-arm base) is proposed in this research. The cumulative dead-reckoning estimation of the base is extracted based on frame-to-frame homography estimation. Optical-flow results are utilized to feed the odometry. Online positional and orientation parameters are then reported. Positional accuracy of better than 2% (of the total traveled distance) for most of the cases and 4% for all the cases studied and angular accuracy of better than 2% (of absolute cumulative changes in orientation) were achieved with this method. This study provides a robust and accurate tracking framework that not only can be integrated with the current C-arm joint-tracking system (i.e. TC-arm) but also is capable of being employed for similar applications in other fields (e.g. robotics).
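Cumulative dead reckoning from frame-to-frame motion estimates amounts to composing homogeneous transforms. A minimal sketch with 2D rigid transforms and invented motion increments (the actual system chains frame-to-frame homographies estimated from optical flow):

```python
import numpy as np

def rigid2d(theta, tx, ty):
    """Homogeneous 2D rigid-body transform (rotation by theta plus translation)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0,  0,  1.0]])

# hypothetical frame-to-frame motions reported by the odometry front end
steps = [rigid2d(0.0, 1.0, 0.0),
         rigid2d(np.pi / 2, 1.0, 0.0),
         rigid2d(0.0, 1.0, 0.0)]

# dead reckoning: accumulate the pose by composing incremental transforms
pose = np.eye(3)
for T in steps:
    pose = pose @ T

x, y = pose[0, 2], pose[1, 2]
heading = np.arctan2(pose[1, 0], pose[0, 0])
print(round(x, 2), round(y, 2), round(heading, 2))  # 2.0 1.0 1.57
```

Because each increment is composed onto the previous pose, per-frame errors accumulate — which is why the paper reports accuracy as a percentage of total traveled distance.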
Affiliation(s)
- Hooman Esfandiari
- Department of Geomatics Engineering, University of Calgary, Calgary, AB, Canada; Surgical Technologies Lab, University of British Columbia, Vancouver, BC, Canada
- Derek Lichti
- Department of Geomatics Engineering, University of Calgary, Calgary, AB, Canada
- Carolyn Anglin
- Department of Civil Engineering, University of Calgary, Calgary, AB, Canada
17
Zhang C, Chen Z, Wang M, Li M, Jiang S. Robust Non-Local TV-L1 Optical Flow Estimation With Occlusion Detection. IEEE Trans Image Process 2017; 26:4055-4067. [PMID: 28600243] [DOI: 10.1109/tip.2017.2712279]
Abstract
In this paper, we propose a robust non-local TV-L1 optical flow method with occlusion detection to address the weak robustness of optical flow estimation under motion occlusion. First, a TV-L1 form for flow estimation is defined using a combination of the brightness constancy and gradient constancy assumptions in the data term, and by varying the weight under the Charbonnier function in the smoothing term. Second, to handle the potential risk of outliers in the flow field, a general non-local term is added to the TV-L1 optical flow model to yield the typical non-local TV-L1 form. Third, an occlusion detection method based on triangulation is presented to detect the occlusion regions of the sequence. The proposed non-local TV-L1 optical flow model is solved in a linearized iterative scheme using improved median filtering and a coarse-to-fine computing strategy. Extensive experimental results indicate that the proposed method can overcome the significant influence of non-rigid motion, motion occlusion, and large-displacement motion. Comparisons with existing state-of-the-art methods on the Middlebury and MPI Sintel test sequences show that the proposed method has higher accuracy and better robustness.
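For context, the data-term update in classic TV-L1 flow solvers has a closed form: with the linearized brightness residual rho, the pointwise minimizer of lam*|rho(v)| + |v - u|^2/(2*theta) is a three-case threshold. The sketch below shows that standard Zach-style step, not this paper's full non-local model with occlusion handling; the function and variable names are illustrative:

```python
import numpy as np

def tv_l1_data_step(u, grad, rho, lam_theta):
    """Closed-form pointwise minimizer of lam*|rho(v)| + |v-u|^2/(2*theta),
    where rho is the linearized brightness residual at u, grad is the image
    gradient at this pixel, and lam_theta = lam * theta."""
    g2 = float(np.dot(grad, grad))
    if rho < -lam_theta * g2:
        return u + lam_theta * grad
    if rho > lam_theta * g2:
        return u - lam_theta * grad
    return u - (rho / max(g2, 1e-12)) * grad  # small residual: zero it exactly

u = np.zeros(2)
grad = np.array([1.0, 0.0])          # image gradient at this pixel
v = tv_l1_data_step(u, grad, rho=0.5, lam_theta=1.0)
print(v)                             # residual is zeroed: rho + grad.(v - u) == 0
```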
18
Vig DK, Hamby AE, Wolgemuth CW. On the Quantification of Cellular Velocity Fields. Biophys J 2016; 110:1469-1475. [PMID: 27074673] [DOI: 10.1016/j.bpj.2016.02.032]
Abstract
The application of flow visualization in biological systems is becoming increasingly common in studies ranging from intracellular transport to the movements of whole organisms. In cell biology, the standard method for measuring cell-scale flows and/or displacements has been particle image velocimetry (PIV); however, alternative methods exist, such as optical flow constraint. Here we review PIV and optical flow, focusing on the accuracy and efficiency of these methods in the context of cellular biophysics. Although optical flow is not as common, a relatively simple implementation of this method can outperform PIV and is easily augmented to extract additional biophysical/chemical information such as local vorticity or net polymerization rates from speckle microscopy.
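In its simplest form, the optical flow constraint favored here reduces to a least-squares solve of the brightness-constancy equation f_x*u + f_t = 0 over a window (a 1D toy sketch with synthetic data; practical implementations operate on 2D windows):

```python
import numpy as np

# 1D brightness-constancy demo: f1 is f0 displaced by 0.5 samples
x = np.linspace(0, 2 * np.pi, 100)
h = x[1] - x[0]
f0 = np.sin(x)
f1 = np.sin(x - 0.5 * h)                 # true displacement: +0.5 samples

fx = np.gradient((f0 + f1) / 2)          # spatial derivative (per sample)
ft = f1 - f0                             # temporal derivative
u = -np.sum(fx * ft) / np.sum(fx ** 2)   # least-squares flow over the window
print(round(float(u), 3))                # close to the true shift of 0.5
```

Unlike PIV's block correlation, this estimate is linear in the image derivatives, which is what makes dense per-pixel fields and extensions (vorticity, polymerization rates) cheap to compute.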
Affiliation(s)
- Dhruv K Vig
- Departments of Physics and Molecular and Cellular Biology, University of Arizona, Tucson, Arizona
- Alex E Hamby
- Departments of Physics and Molecular and Cellular Biology, University of Arizona, Tucson, Arizona
- Charles W Wolgemuth
- Departments of Physics and Molecular and Cellular Biology, University of Arizona, Tucson, Arizona.
19
Deformable regions of interest with multiple points for tissue tracking in echocardiography. Med Image Anal 2016; 35:554-569. [PMID: 27664372] [DOI: 10.1016/j.media.2016.08.002]
Abstract
By tracking echocardiography images more accurately and stably, we can better assess myocardial functions. In this paper, we propose a new tracking method with deformable Regions of Interest (ROIs) aiming at rational pattern matching. For this purpose we defined multiple tracking points for an ROI and regarded these points as nodes in the Meshfree Method to interpolate displacement fields. To avoid unreasonable distortion of the ROI caused by noise and perturbation in echo images, we introduced a stabilization technique based on a nonlinear strain energy function. Examples showed that the combination of our new tracking method and stabilization technique provides competitive and stable tracking.
20
Gazi PM, Aminololama-Shakeri S, Yang K, Boone JM. Temporal subtraction contrast-enhanced dedicated breast CT. Phys Med Biol 2016; 61:6322-46. [PMID: 27494376] [DOI: 10.1088/0031-9155/61/17/6322]
Abstract
The development of a framework of deformable image registration and segmentation for the purpose of temporal subtraction contrast-enhanced breast CT is described. An iterative histogram-based two-means clustering method was used for the segmentation. Dedicated breast CT images were segmented into background (air), adipose, fibroglandular and skin components. Fibroglandular tissue was classified as either normal or contrast-enhanced then divided into tiers for the purpose of categorizing degrees of contrast enhancement. A variant of the Demons deformable registration algorithm, intensity difference adaptive Demons (IDAD), was developed to correct for the large deformation forces that stemmed from contrast enhancement. In this application, the accuracy of the proposed method was evaluated in both mathematically-simulated and physically-acquired phantom images. Clinical usage and accuracy of the temporal subtraction framework was demonstrated using contrast-enhanced breast CT datasets from five patients. Registration performance was quantified using normalized cross correlation (NCC), symmetric uncertainty coefficient, normalized mutual information (NMI), mean square error (MSE) and target registration error (TRE). The proposed method outperformed conventional affine and other Demons variations in contrast enhanced breast CT image registration. In simulation studies, IDAD exhibited improvement in MSE (0-16%), NCC (0-6%), NMI (0-13%) and TRE (0-34%) compared to the conventional Demons approaches, depending on the size and intensity of the enhancing lesion. As lesion size and contrast enhancement levels increased, so did the improvement. The drop in the correlation between the pre- and post-contrast images for the largest enhancement levels in phantom studies is less than 1.2% (150 Hounsfield units). Registration error, measured by TRE, shows only submillimeter mismatches between the concordant anatomical target points in all patient studies. 
The algorithm was implemented using a parallel processing architecture resulting in rapid execution time for the iterative segmentation and intensity-adaptive registration techniques. Characterization of contrast-enhanced lesions is improved using temporal subtraction contrast-enhanced dedicated breast CT. Adaptation of Demons registration forces as a function of contrast-enhancement levels provided a means to accurately align breast tissue in pre- and post-contrast image acquisitions, improving subtraction results. Spatial subtraction of the aligned images yields useful diagnostic information with respect to enhanced lesion morphology and uptake.
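The core Demons update that this family of registration methods builds on can be sketched in a few lines. This is the classic (Thirion) force only; the paper's intensity-difference adaptation (IDAD) scales this force by local contrast enhancement and is not reproduced here, so treat the code as an illustrative simplification:

```python
import numpy as np

def demons_step(fixed, moving, eps=1e-9):
    """One iteration of the classic Demons force:
    u = (m - f) * grad(f) / (|grad(f)|^2 + (m - f)^2).
    The IDAD variant additionally modulates this force as a function
    of contrast-enhancement level (omitted in this sketch)."""
    diff = moving - fixed
    gy, gx = np.gradient(fixed)          # gradients along rows (y) and cols (x)
    denom = gx**2 + gy**2 + diff**2 + eps
    ux = diff * gx / denom               # displacement along x
    uy = diff * gy / denom               # displacement along y
    return ux, uy

# toy usage: a Gaussian blob shifted by one pixel along x
xx, yy = np.meshgrid(np.arange(32), np.arange(32))
fixed = np.exp(-((xx - 16)**2 + (yy - 16)**2) / 20.0)
moving = np.exp(-((xx - 17)**2 + (yy - 16)**2) / 20.0)
ux, uy = demons_step(fixed, moving)
```

In practice the update field is smoothed (e.g. Gaussian filtering) and composed over many iterations; the snippet shows only the per-voxel force computation.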
Affiliation(s)
- Peymon M Gazi
- Department of Biomedical Engineering, University of California, Davis, One Shields Avenue, Davis, CA 95616, USA. Department of Radiology, University of California, Davis Medical Center, 4860 Y street, Suite 3100 Ellison Building, Sacramento, CA 95817, USA
21
Abstract
In this article, we present a general framework for a system of automatic modeling and recognition of 3D polyhedral objects. Such a system has many applications in robotics, e.g., recognition, localization, and grasping. Here we focus on one main aspect of the system: when many images of one 3D object are taken from different unknown viewpoints, how do we recognize those that represent the same aspect of the object? Briefly, is it possible to determine automatically whether two images are similar or not? The two stages detailed in the article are the matching of two images and the clustering of a set of images. Matching consists of finding the common features of two images when no information is known about the image contents, the motion, or the calibration of the camera. Clustering consists of regrouping into sets the images representing the same aspect of the modeled objects. For both stages, experimental results on real images are shown.
Affiliation(s)
- Patrick Gros
- LIFIA-IMAC INRIA Rhône-Alpes 38031 Grenoble Cedex 1, France
22
Pilutti D, Strumia M, Buchert M, Hadjidemetriou S. Non-Parametric Bayesian Registration (NParBR) of Body Tumors in DCE-MRI Data. IEEE TRANSACTIONS ON MEDICAL IMAGING 2016; 35:1025-1035. [PMID: 26672032 DOI: 10.1109/tmi.2015.2506338] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
The identification of tumors in the internal organs of chest, abdomen, and pelvis anatomic regions can be performed with the analysis of Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI) data. The contrast agent is accumulated differently by pathologic and healthy tissues and that results in a temporally varying contrast in an image series. The internal organs are also subject to potentially extensive movements mainly due to breathing, heart beat, and peristalsis. This contributes to making the analysis of DCE-MRI datasets challenging as well as time consuming. To address this problem we propose a novel pairwise non-rigid registration method with a Non-Parametric Bayesian Registration (NParBR) formulation. The NParBR method uses a Bayesian formulation that assumes a model for the effect of the distortion on the joint intensity statistics, a non-parametric prior for the restored statistics, and also applies a spatial regularization for the estimated registration with Gaussian filtering. A minimally biased intra-dataset atlas is computed for each dataset and used as reference for the registration of the time series. The time series registration method has been tested with 20 datasets of liver, lungs, intestines, and prostate. It has been compared to the B-Splines and to the SyN methods with results that demonstrate that the proposed method improves both accuracy and efficiency.
23
Giulioni M, Lagorce X, Galluppi F, Benosman RB. Event-Based Computation of Motion Flow on a Neuromorphic Analog Neural Platform. Front Neurosci 2016; 10:35. [PMID: 26909015 PMCID: PMC4754434 DOI: 10.3389/fnins.2016.00035] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2015] [Accepted: 01/28/2016] [Indexed: 11/13/2022] Open
Abstract
Estimating the speed and direction of moving objects is a crucial capability for agents behaving in a dynamic world. Biological organisms perform this task by means of the neural connections originating from their retinal ganglion cells. In artificial systems, the optic flow is usually extracted by comparing the activity of two or more frames captured with a vision sensor. Designing artificial motion flow detectors that are as fast, robust, and efficient as those found in biological systems is, however, a challenging task. Inspired by the architecture proposed by Barlow and Levick in 1965 to explain the spiking activity of the direction-selective ganglion cells in the rabbit's retina, we introduce an architecture for robust optical flow extraction with an analog neuromorphic multi-chip system. The task is performed by a feed-forward network of analog integrate-and-fire neurons whose inputs are provided by contrast-sensitive photoreceptors. Computation is supported by the precise time of spike emission, and the extraction of the optical flow is based on the time lag in the activation of nearby retinal neurons. Mimicking ganglion cells, our neuromorphic detectors encode the amplitude and the direction of the apparent visual motion in their output spiking pattern. Here we describe the architectural aspects; discuss the latency, scalability, and robustness properties; and demonstrate that a network of mismatched, delicate analog elements can reliably extract the optical flow from a simple visual scene. This work shows how precise spike timing as a computational basis, biological inspiration, and neuromorphic systems can be combined to solve specific tasks.
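The Barlow-Levick scheme the abstract refers to can be illustrated with a toy timing rule: a direction-selective unit responds when the near photoreceptor fires before the far one by roughly the transit delay, while null-direction motion is vetoed by delayed inhibition. This is a sketch of the 1965 idea, not of the authors' analog circuit, and all names are illustrative:

```python
def barlow_levick_unit(t_near, t_far, transit_delay=1.0, tol=0.5):
    """Toy Barlow-Levick direction-selective unit. The unit fires for
    preferred-direction motion (near receptor active ~transit_delay
    before the far one); motion in the null direction is suppressed,
    here modelled as a simple timing test."""
    dt = t_far - t_near
    return abs(dt - transit_delay) < tol

# preferred direction: near receptor spikes at t=0, far at t=1
print(barlow_levick_unit(0.0, 1.0))   # → True
# null direction: far receptor fires first
print(barlow_levick_unit(1.0, 0.0))   # → False
```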
Affiliation(s)
- Xavier Lagorce
- Vision and Natural Computation Group, Institut National de la Santé et de la Recherche Médicale, Paris, France; Sorbonne Universités, Institut de la Vision, Université Pierre et Marie Curie Paris 06, Centre National de la Recherche Scientifique, Paris, France
- Francesco Galluppi
- Vision and Natural Computation Group, Institut National de la Santé et de la Recherche Médicale, Paris, France; Sorbonne Universités, Institut de la Vision, Université Pierre et Marie Curie Paris 06, Centre National de la Recherche Scientifique, Paris, France
- Ryad B Benosman
- Vision and Natural Computation Group, Institut National de la Santé et de la Recherche Médicale, Paris, France; Sorbonne Universités, Institut de la Vision, Université Pierre et Marie Curie Paris 06, Centre National de la Recherche Scientifique, Paris, France
24
Cao Z, Dong E, Zheng Q, Sun W, Li Z. Accurate inverse-consistent symmetric optical flow for 4D CT lung registration. Biomed Signal Process Control 2016. [DOI: 10.1016/j.bspc.2015.09.005] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
25
Steidl G. Combined First and Second Order Variational Approaches for Image Processing. ACTA ACUST UNITED AC 2015. [DOI: 10.1365/s13291-015-0113-2] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
26
Bao L, Yang Q, Jin H. Fast edge-preserving PatchMatch for large displacement optical flow. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2014; 23:4996-5006. [PMID: 25252282 DOI: 10.1109/tip.2014.2359374] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
The speed of an optical flow algorithm is crucial for many video editing tasks, such as slow-motion synthesis, selection propagation, and tone adjustment propagation. Variational coarse-to-fine optical flow algorithms can generally produce high-quality results but cannot fulfil the speed requirements of many practical applications. Moreover, large motions in real-world videos pose a difficult problem for coarse-to-fine variational approaches. In this paper, we present a fast optical flow algorithm that can handle large displacement motions. Our algorithm is inspired by recent successes of local methods in visual correspondence search as well as approximate nearest neighbor field algorithms. The main novelty is a fast randomized edge-preserving approximate nearest neighbor field algorithm, which propagates self-similarity patterns in addition to offsets. Experimental results on public optical flow benchmarks show that our method is significantly faster than state-of-the-art methods without compromising quality, especially when scenes contain large motions. Finally, we show some demo applications by applying our technique to real-world video editing tasks.
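The generic PatchMatch scheme the paper accelerates (random initialization, then alternating propagation and random search over a nearest-neighbour field) can be sketched in one dimension. This toy version does not reproduce the paper's edge-preserving, self-similarity-propagating variant; every name and parameter here is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def patchmatch_1d(a, b, patch=3, iters=4):
    """Toy 1-D PatchMatch nearest-neighbour field between signals a and b."""
    n = len(a) - patch + 1
    m = len(b) - patch + 1
    nnf = rng.integers(0, m, size=n)     # random initialization

    def cost(i, j):
        return float(np.sum((a[i:i+patch] - b[j:j+patch]) ** 2))

    for it in range(iters):
        order = range(n) if it % 2 == 0 else range(n - 1, -1, -1)
        step = 1 if it % 2 == 0 else -1
        for i in order:
            best, bc = nnf[i], cost(i, nnf[i])
            # propagation: try the scan-direction neighbour's offset
            ip = i - step
            if 0 <= ip < n:
                cand = nnf[ip] + step
                if 0 <= cand < m and cost(i, cand) < bc:
                    best, bc = cand, cost(i, cand)
            # random search in exponentially shrinking windows
            r = m
            while r >= 1:
                cand = int(np.clip(best + rng.integers(-r, r + 1), 0, m - 1))
                if cost(i, cand) < bc:
                    best, bc = cand, cost(i, cand)
                r //= 2
            nnf[i] = best
    return nnf

a = np.sin(np.linspace(0, 6, 40))
b = np.roll(a, 5)          # b is a circularly shifted copy of a
nnf = patchmatch_1d(a, b)  # field of matched offsets into b
```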
27
Castillo E, Castillo R, Fuentes D, Guerrero T. Computing global minimizers to a constrained B-spline image registration problem from optimal l1 perturbations to block match data. Med Phys 2014; 41:041904. [PMID: 24694135 DOI: 10.1118/1.4866891] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022] Open
Abstract
PURPOSE Block matching is a well-known strategy for estimating corresponding voxel locations between a pair of images according to an image similarity metric. Though robust to issues such as image noise and large-magnitude voxel displacements, the estimated point matches are not guaranteed to be spatially accurate. However, the underlying optimization problem solved by the block matching procedure is similar in structure to the class of optimization problems associated with B-spline based registration methods. By exploiting this relationship, the authors derive a numerical method for computing a global minimizer to a constrained B-spline registration problem that combines the robustness of block matching with the global smoothness properties inherent to B-spline parameterization. METHODS The method reformulates the traditional B-spline registration problem as a basis pursuit problem describing the minimal l1-perturbation to block match pairs required to produce a B-spline fitting error within a given tolerance. The sparsity pattern of the optimal perturbation then defines a voxel point cloud subset on which the B-spline fit is a global minimizer to a constrained variant of the B-spline registration problem. As opposed to traditional B-spline algorithms, the optimization step involving the actual image data is addressed by block matching. RESULTS The performance of the method is measured in terms of spatial accuracy using ten inhale/exhale thoracic CT image pairs (available for download at www.dir-lab.com) obtained from the COPDgene dataset and corresponding sets of expert-determined landmark point pairs. The results of the validation procedure demonstrate that the method can achieve high spatial accuracy on a significantly complex image set.
CONCLUSIONS The proposed methodology is demonstrated to achieve high spatial accuracy and is generalizable in that it can employ any displacement field parameterization described as a least squares fit to block match generated estimates. Thus, the framework allows for a wide range of combinations of block match image similarity metrics and physical models.
Affiliation(s)
- Edward Castillo
- Department of Radiation Physics, University of Texas MD Anderson Cancer Center, 1515 Holcombe Boulevard, Unit 56 Houston, Texas 77030 and Department of Computational and Applied Mathematics, Rice University, 6100 Main MS-134, Houston, Texas 77005
| | - Richard Castillo
- Department of Radiation Physics, University of Texas MD Anderson Cancer Center, 1515 Holcombe Boulevard, Unit 56 Houston, Texas 77030
| | - David Fuentes
- Department of Imaging Physics, University of Texas MD Anderson Cancer Center, 1515 Holcombe Boulevard, Unit 1902 Houston, Texas 77030
| | - Thomas Guerrero
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, 1515 Holcombe Boulevard, Unit 56 Houston, Texas 77030 and Department of Computational and Applied Mathematics, Rice University, 6100 Main MS-134, Houston, Texas 77005
28
Zhang Z, Liu F, Tsui H, Lau Y, Song X. A multiscale adaptive mask method for rigid intraoperative ultrasound and preoperative CT image registration. Med Phys 2014; 41:102903. [DOI: 10.1118/1.4895824] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022] Open
29
Arterial mechanical motion estimation based on a semi-rigid body deformation approach. SENSORS 2014; 14:9429-50. [PMID: 24871987 PMCID: PMC4118363 DOI: 10.3390/s140609429] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/31/2014] [Revised: 04/18/2014] [Accepted: 05/21/2014] [Indexed: 12/02/2022]
Abstract
Arterial motion estimation in ultrasound (US) sequences is a hard task due to noise and discontinuities in the signal caused by US artifacts. Characterizing the mechanical properties of the artery is a promising novel imaging technique for diagnosing various cardiovascular pathologies and a new way of obtaining relevant clinical information, such as determining the absence of the dicrotic peak, or estimating the Augmentation Index (AIx), the arterial pressure, or the arterial stiffness. One of the advantages of US imaging is the non-invasive nature of the technique, unlike invasive techniques such as Intravascular Ultrasound (IVUS) or angiography, plus the relatively low cost of US units. In this paper, we propose a semi-rigid deformable method based on soft-body dynamics, realized by a hybrid motion approach combining cross-correlation and optical flow methods, to quantify the elasticity of the artery. We evaluate and compare different techniques (for instance, optical flow methods) on which our approach is based. The goal of this comparative study is to identify the best model to use and the impact of the accuracy of these different stages on the proposed method. To this end, an exhaustive assessment has been conducted to decide which model is the most appropriate for registering the variation of the arterial diameter over time. Our experiments involved a total of 1620 evaluations within nine simulated sequences of 84 frames each and the estimation of four error metrics. We conclude that our proposed approach obtains approximately 2.5 times higher accuracy than conventional state-of-the-art techniques.
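The cross-correlation half of the hybrid approach can be illustrated with a minimal 1-D sketch: estimate the displacement between two signal profiles (e.g. wall echoes in successive frames) by maximizing normalized cross-correlation over a small lag range. This is an illustrative stand-in, not the paper's implementation:

```python
import numpy as np

def ncc_shift(ref, cur, max_lag=5):
    """Estimate the integer displacement between two 1-D profiles by
    maximizing normalized cross-correlation over lags in [-max_lag, max_lag]."""
    ref = (ref - ref.mean()) / (ref.std() + 1e-12)
    best_lag, best_score = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        shifted = np.roll(cur, -lag)     # undo a candidate shift
        s = (shifted - shifted.mean()) / (shifted.std() + 1e-12)
        score = float(np.mean(ref * s))  # normalized correlation
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# toy usage: a pulse displaced by 3 samples between frames
x = np.exp(-((np.arange(64) - 30) ** 2) / 8.0)
print(ncc_shift(x, np.roll(x, 3)))  # → 3
```

Subsample refinement (e.g. parabolic peak interpolation) and the optical flow stage would be layered on top of this in a full pipeline.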
30
Cannons KJ, Wildes RP. The Applicability of Spatiotemporal Oriented Energy Features to Region Tracking. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2014; 36:784-796. [PMID: 26353200 DOI: 10.1109/tpami.2013.233] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
This paper proposes the novel application of an uncommonly rich feature representation to the domain of visual tracking. The proposed representation for tracking models both the spatial structure and dynamics of a target in a unified fashion, while simultaneously offering robustness to illumination variations. Specifically, the proposed feature is derived from spatiotemporal energy measurements that are computed by filtering in 3D, (x, y, t), image spacetime. These spatiotemporal energy measurements capture the underlying local spacetime orientation structure of the target across multiple scales. The breadth of applicability of these features within the field of visual tracking is demonstrated by their instantiation within three disparate tracking paradigms that are representative of the various basic types of region trackers in the field. Instantiation within these three tracking paradigms requires that the raw oriented energy measurements be post-processed using different methodologies that range from histogram accumulation to the identity transform. Qualitative and quantitative empirical evaluation on a challenging suite of videos demonstrates the strength and applicability of the proposed representation to tracking, as it outperforms other commonly-used features across all tracking paradigms. Moreover, it is shown that overall high tracking accuracy can be obtained with this proposed representation, as spatiotemporal oriented energy instantiations are shown to outperform several recent, state-of-the-art trackers.
31
Skonieczny K, Moreland SJ, Asnani VM, Creager CM, Inotsume H, Wettergreen DS. Visualizing and Analyzing Machine-soil Interactions using Computer Vision. J FIELD ROBOT 2014. [DOI: 10.1002/rob.21510] [Citation(s) in RCA: 30] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Affiliation(s)
- Krzysztof Skonieczny
- Field Robotics Center, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, Pennsylvania 15213
- Scott J. Moreland
- Field Robotics Center, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, Pennsylvania 15213
- Vivake M. Asnani
- NASA Glenn Research Center, Mail Stop 23-3, 21000 Brookpark Road, Cleveland, Ohio 44135
- Colin M. Creager
- NASA Glenn Research Center, Mail Stop 23-3, 21000 Brookpark Road, Cleveland, Ohio 44135
- Hiroaki Inotsume
- Field Robotics Center, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, Pennsylvania 15213
- David S. Wettergreen
- Field Robotics Center, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, Pennsylvania 15213
32
Li J, Zhou Y, Ivanov K, Zheng YP. Estimation and visualization of longitudinal muscle motion using ultrasonography: a feasibility study. ULTRASONICS 2014; 54:779-788. [PMID: 24206676 DOI: 10.1016/j.ultras.2013.09.024] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/27/2013] [Revised: 08/31/2013] [Accepted: 09/27/2013] [Indexed: 06/02/2023]
Abstract
Ultrasonography is a convenient and widely used technique for observing longitudinal muscle motion, as it is radiation-free and real-time. The motion of localized parts of the muscle, disclosed by ultrasonography, spatially reflects the contraction activities of the corresponding muscles. However, little attention has been paid to the estimation of longitudinal muscle motion, especially the estimation of a dense deformation field at different depths under the skin. Even fewer studies on the visualization of such muscle motion, or further clinical applications, have been reported in the literature. In this study, a primal-dual algorithm was used to estimate the motion of the gastrocnemius muscle (GM) in the longitudinal direction. To provide insight into the rules of longitudinal muscle motion, we propose a novel framework, including motion estimation, visualization, and quantitative analysis, to interpret the synchronous activities of collaborating muscles with spatial detail. The proposed methods were evaluated on ultrasound image sequences captured at a rate of 25 frames per second from eight healthy subjects. In order to estimate and visualize the GM motion in the longitudinal direction, each subject was asked to perform isometric plantar flexion twice. Preliminary results show that the proposed visualization methods provide both spatial and temporal details and are helpful for studying muscle contractions. One of the proposed quantitative measures was also tested on a patient with unilateral limb dysfunction caused by cerebral infarction; the measure revealed distinct patterns between the normal and the dysfunctional lower limb. The proposed framework and its associated quantitative measures could potentially be used to complement electromyography (EMG) and torque signals in the functional assessment of skeletal muscles.
Affiliation(s)
- Jizhou Li
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, China
- Yongjin Zhou
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, China; Interdisciplinary Division of Biomedical Engineering, The Hong Kong Polytechnic University, China
- Kamen Ivanov
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, China
- Yong-Ping Zheng
- Interdisciplinary Division of Biomedical Engineering, The Hong Kong Polytechnic University, China
33
Benosman R, Clercq C, Lagorce X, Ieng SH, Bartolozzi C. Event-based visual flow. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2014; 25:407-417. [PMID: 24807038 DOI: 10.1109/tnnls.2013.2273537] [Citation(s) in RCA: 77] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
This paper introduces a new methodology to compute dense visual flow using the precise timings of spikes from an asynchronous event-based retina. Biological retinas, and their artificial counterparts, are totally asynchronous and data-driven and rely on a paradigm of light acquisition radically different from most of the currently used frame-grabber technologies. This paper introduces a framework to estimate visual flow from the local properties of events' spatiotemporal space. We will show that precise visual flow orientation and amplitude can be estimated using a local differential approach on the surface defined by coactive events. Experimental results are presented; they show the method adequacy with high data sparseness and temporal resolution of event-based acquisition that allows the computation of motion flow with microsecond accuracy and at very low computational cost.
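The local differential approach described here can be sketched concretely: fit a plane t = a·x + b·y + c to the timestamps of coactive events in a neighbourhood, and read the velocity from the slopes of that spatiotemporal surface. Real implementations work on small local windows with robust fitting; the global least-squares fit below is only an illustration:

```python
import numpy as np

def local_plane_flow(events):
    """Fit a plane t = a*x + b*y + c to events (x, y, t) and derive the
    flow from the surface gradient: the velocity component along x is
    1/a (and 1/b along y) where the slope is appreciably nonzero.
    Illustrative sketch of event-based plane fitting, not the paper's code."""
    ev = np.asarray(events, dtype=float)
    A = np.c_[ev[:, 0], ev[:, 1], np.ones(len(ev))]
    (a, b, c), *_ = np.linalg.lstsq(A, ev[:, 2], rcond=None)
    vx = 1.0 / a if abs(a) > 1e-12 else 0.0   # guard against flat slopes
    vy = 1.0 / b if abs(b) > 1e-12 else 0.0
    return vx, vy

# a vertical edge moving at 2 px per unit time along x triggers
# events whose timestamps satisfy t = x / 2
events = [(x, y, x / 2.0) for x in range(5) for y in range(5)]
vx, vy = local_plane_flow(events)
```

The guard on near-zero slopes reflects a real caveat of this formulation: where the surface is flat in one direction, the corresponding velocity component is unconstrained.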
34
Fuzzy segmentation of video shots using hybrid color spaces and motion information. Pattern Anal Appl 2013. [DOI: 10.1007/s10044-013-0359-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
35
Chao H, Gu Y, Napolitano M. A Survey of Optical Flow Techniques for Robotics Navigation Applications. J INTELL ROBOT SYST 2013. [DOI: 10.1007/s10846-013-9923-6] [Citation(s) in RCA: 34] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
36
Chen D, Sheng H, Chen Y, Xue D. Fractional-order variational optical flow model for motion estimation. PHILOSOPHICAL TRANSACTIONS. SERIES A, MATHEMATICAL, PHYSICAL, AND ENGINEERING SCIENCES 2013; 371:20120148. [PMID: 23547225 DOI: 10.1098/rsta.2012.0148] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/17/2023]
Abstract
A new class of fractional-order variational optical flow models, which generalizes the differential order of optical flow from integer to fractional, is proposed for motion estimation in this paper. The corresponding Euler-Lagrange equations are derived by solving a typical fractional variational problem, and a numerical implementation based on the Grünwald-Letnikov fractional derivative definition is proposed to solve these complicated fractional partial differential equations. Theoretical analysis reveals that the proposed fractional-order variational optical flow model generalizes both the typical Horn and Schunck (first-order) variational optical flow model and the second-order variational optical flow model, which offers a new perspective for studying optical flow models and has important theoretical implications for optical flow research. The experiments demonstrate the validity of the generalization of differential order.
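The Grünwald-Letnikov discretization the paper relies on reduces a fractional derivative to a weighted history sum, with weights w_k = (-1)^k C(α, k) computed by a simple recurrence. A minimal numerical sketch (for α = 1 it must reduce to the backward difference, which makes a convenient sanity check):

```python
import numpy as np

def gl_coeffs(alpha, n):
    """Grünwald-Letnikov weights w_k = (-1)^k * C(alpha, k),
    via the recurrence w_k = w_{k-1} * (k - 1 - alpha) / k."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    return w

def gl_derivative(f, alpha, h):
    """Left-sided GL fractional derivative of uniformly sampled values f."""
    w = gl_coeffs(alpha, len(f))
    out = np.zeros(len(f))
    for i in range(len(f)):
        # weighted sum over the sample history, most recent first
        out[i] = np.dot(w[: i + 1], f[i::-1]) / h**alpha
    return out

# sanity check: alpha = 1 gives the backward difference f[i] - f[i-1]
f = np.array([0.0, 1.0, 4.0, 9.0])
d = gl_derivative(f, 1.0, 1.0)   # → [0, 1, 3, 5]
```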
Affiliation(s)
- Dali Chen
- College of Information Science and Engineering, Northeastern University, Shenyang 110819, People's Republic of China
37
Mac Aodha O, Humayun A, Pollefeys M, Brostow GJ. Learning a confidence measure for optical flow. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2013; 35:1107-1120. [PMID: 22868652 DOI: 10.1109/tpami.2012.171] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/01/2023]
Abstract
We present a supervised learning-based method to estimate a per-pixel confidence for optical flow vectors. Regions of low texture and pixels close to occlusion boundaries are known to be difficult for optical flow algorithms. Using a spatiotemporal feature vector, we estimate if a flow algorithm is likely to fail in a given region. Our method is not restricted to any specific class of flow algorithm and does not make any scene specific assumptions. By automatically learning this confidence, we can combine the output of several computed flow fields from different algorithms to select the best performing algorithm per pixel. Our optical flow confidence measure allows one to achieve better overall results by discarding the most troublesome pixels. We illustrate the effectiveness of our method on four different optical flow algorithms over a variety of real and synthetic sequences. For algorithm selection, we achieve the top overall results on a large test set, and at times even surpass the results of the best algorithm among the candidates.
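The combination step described in the abstract, keeping the flow vector from the most confident algorithm at each pixel, can be sketched directly; the learned confidence regressor itself (a classifier over spatiotemporal features) is not reproduced here, and the array layout is an assumption for illustration:

```python
import numpy as np

def select_flow(flows, confidences):
    """Per-pixel algorithm selection: given N candidate flow fields
    of shape (N, H, W, 2) and confidence maps of shape (N, H, W),
    keep the flow vector from the most confident algorithm at each pixel."""
    idx = np.argmax(confidences, axis=0)                       # (H, W)
    h, w = idx.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    return flows[idx, yy, xx]                                  # (H, W, 2)

# toy usage: two constant flow fields, each trusted on half the image
flows = np.zeros((2, 4, 4, 2))
flows[0, ..., 0] = 1.0        # algorithm 0 predicts flow (1, 0)
flows[1, ..., 0] = 2.0        # algorithm 1 predicts flow (2, 0)
conf = np.zeros((2, 4, 4))
conf[0, :, :2] = 1.0          # algorithm 0 confident on the left half
conf[1, :, 2:] = 1.0          # algorithm 1 confident on the right half
combined = select_flow(flows, conf)
```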
Affiliation(s)
- Oisin Mac Aodha
- Department of Computer Science, University College London, London, United Kingdom.
38
Pipa DR, da Silva EAB, Pagliari CL, Diniz PSR. Recursive algorithms for bias and gain nonuniformity correction in infrared videos. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2012; 21:4758-4769. [PMID: 22997263 DOI: 10.1109/tip.2012.2218820] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/01/2023]
Abstract
Infrared focal-plane array (IRFPA) detectors suffer from fixed-pattern noise (FPN) that degrades image quality, which is also known as spatial nonuniformity. FPN is still a serious problem, despite recent advances in IRFPA technology. This paper proposes new scene-based correction algorithms for continuous compensation of bias and gain nonuniformity in FPA sensors. The proposed schemes use recursive least-square and affine projection techniques that jointly compensate for both the bias and gain of each image pixel, presenting rapid convergence and robustness to noise. The synthetic and real IRFPA videos experimentally show that the proposed solutions are competitive with the state-of-the-art in FPN reduction, by presenting recovered images with higher fidelity.
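The recursive least-squares estimation of a per-pixel affine model y = gain·x + bias can be sketched for a single pixel. This shows only the RLS core; the paper's full algorithm also uses affine projections and scene motion to obtain the reference values x, which are omitted here, so treat the snippet as an assumption-laden illustration:

```python
import numpy as np

def rls_gain_bias(xs, ys, lam=1.0, delta=1e3):
    """Scalar recursive least squares for the per-pixel observation
    model y = gain * x + bias (exponential forgetting factor lam,
    initial inverse-correlation scale delta)."""
    theta = np.zeros(2)            # current estimate [gain, bias]
    P = delta * np.eye(2)          # inverse correlation matrix estimate
    for x, y in zip(xs, ys):
        phi = np.array([x, 1.0])                      # regressor
        k = P @ phi / (lam + phi @ P @ phi)           # RLS gain vector
        theta = theta + k * (y - phi @ theta)         # innovation update
        P = (P - np.outer(k, phi @ P)) / lam          # covariance update
    return theta

# toy pixel with fixed-pattern gain 1.5 and bias 0.2
xs = np.linspace(0.0, 1.0, 50)
ys = 1.5 * xs + 0.2
gain, bias = rls_gain_bias(xs, ys)
```

Once gain and bias are estimated, the corrected reading is `(y - bias) / gain` per pixel.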
Affiliation(s)
- Daniel R Pipa
- Universidade Federal do Rio de Janeiro, Rio de Janeiro 21945-970, Brazil.
39
Duan Q, Herz S, Ingrassia C, Costa K, Holmes J, Laine A, Angelini E, Gerard O, Homma S. Dynamic cardiac information from optical flow using four dimensional ultrasound. CONFERENCE PROCEEDINGS : ... ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL CONFERENCE 2012; 2005:4465-8. [PMID: 17281228 DOI: 10.1109/iembs.2005.1615458] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Quantitative analysis of cardiac motion is of great clinical interest in assessing ventricular function. Real-time 3-D (RT3D) ultrasound transducers provide valuable three-dimensional information, from which quantitative measures of cardiac function can be extracted. Such analysis requires segmentation and visual tracking of the left ventricular endocardial border. We present results based on correlation of four-dimensional optical flow motion for temporal tracking of ventricular borders in three dimensional ultrasound data. A displacement field is computed from the optical flow output, and a framework for the computation of dynamic cardiac information is introduced. The method was applied to a clinical data set from a heart transplant patient and dynamic measurements agreed with physiological knowledge as well as experimental results.
Affiliation(s)
- Qi Duan
- Department of Biomedical Engineering, Columbia University, New York, NY, USA
40
Xu L, Jia J, Matsushita Y. Motion detail preserving optical flow estimation. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2012; 34:1744-1757. [PMID: 22156095 DOI: 10.1109/tpami.2011.236] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/31/2023]
Abstract
A common problem of optical flow estimation in the multiscale variational framework is that fine motion structures cannot always be correctly estimated, especially for regions with significant and abrupt displacement variation. A novel extended coarse-to-fine (EC2F) refinement framework is introduced in this paper to address this issue, which reduces the reliance of flow estimates on their initial values propagated from the coarse level and enables recovering many motion details at each scale. The contribution of this paper also includes adaptation of the objective function to handle outliers and development of a new optimization procedure. The effectiveness of our algorithm is demonstrated on the Middlebury optical flow benchmark and by experiments on challenging examples that involve large-displacement motion.
Affiliation(s)
- Li Xu
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong.
41
Chen L, Yang H, Takaki T, Ishii I. Real-Time Optical Flow Estimation Using Multiple Frame-Straddling Intervals. JOURNAL OF ROBOTICS AND MECHATRONICS 2012. [DOI: 10.20965/jrm.2012.p0686] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
In this paper, we propose a novel method for accurate real-time optical flow estimation for both high-speed and low-speed moving objects based on High-Frame-Rate (HFR) videos. We introduce a multi-frame-straddling function to select several pairs of images with different frame intervals from an HFR image sequence, even when the estimated optical flow must be output at standard video rates (NTSC at 30 fps and PAL at 25 fps). The multi-frame-straddling function can remarkably improve the measurable range of velocities in optical flow estimation without heavy computation by adaptively selecting a small frame interval for high-speed objects and a large frame interval for low-speed objects. On the basis of the relationship between the frame intervals and the accuracies of the optical flows estimated by the Lucas–Kanade method, we devise a method to determine multiple frame intervals in optical flow estimation and select an optimal frame interval from these intervals according to the amplitude of the estimated optical flow. Our method was implemented in software on a high-speed vision platform, IDP Express. The estimated optical flows were accurately output at intervals of 40 ms in real time by using three pairs of 512×512 images; these images were selected by frame-straddling a 2000-fps video with intervals of 0.5, 1.5, and 5 ms. Several experiments were performed with high-speed movements to verify that our method can remarkably improve the measurable range of velocities in optical flow estimation, compared to optical flows estimated for 25-fps videos with the Lucas–Kanade method.
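The adaptive interval selection can be illustrated with a simple rule: choose the straddling interval whose expected displacement is closest to an amplitude the flow estimator handles well. This selection rule and its target value are hypothetical, chosen only to show the idea of pairing fast objects with short intervals and slow objects with long ones:

```python
def pick_interval(flow_mag, intervals=(0.5, 1.5, 5.0), target=4.0):
    """Pick the frame-straddling interval (ms) whose expected displacement
    (flow magnitude in px/ms times interval) is closest to a target
    amplitude in pixels. Illustrative rule, not the paper's criterion."""
    return min(intervals, key=lambda dt: abs(flow_mag * dt - target))

print(pick_interval(8.0))   # fast object  → 0.5
print(pick_interval(0.8))   # slow object  → 5.0
```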
42
Ahrary A, Tian L, Kamata S, Ishikawa M. Navigation of an autonomous sewer inspection robot based on stereo camera images and laser scanner data. Int J Artif Intell T 2011. [DOI: 10.1142/s0218213007003461] [Citation(s) in RCA: 8]
Abstract
A sewer environment is composed of cylindrical pipes in which only a few landmarks, such as manholes, inlets, and pipe joints, are available for localization. This paper presents a method for navigating an autonomous sewer inspection robot through a sewer pipe system based on the detection of such landmarks. The robot's location in the pipe system is estimated from stereo camera images. Laser scanner data are also used to ensure accurate localization of the landmarks and to reduce the error in distance estimates obtained by image processing. The method is implemented and evaluated in a sewer pipe test field using a prototype robot, demonstrating its effectiveness.
Affiliation(s)
- Alireza Ahrary
- FAIS-Robotics Research Institute, 2-1 Hibikino, Wakamatsu-ku, Kitakyushu 808-0196, Japan
- Graduate School of Life Science and Systems Engineering, Kyushu Institute of Technology, 2-4 Hibikino, Wakamatsu-ku, Kitakyushu 808-0196, Japan
- Li Tian
- Graduate School of Information, Production and Systems, Waseda University, 2-7 Hibikino, Wakamatsu-ku, Kitakyushu 808-0135, Japan
- Sei-ichiro Kamata
- Graduate School of Information, Production and Systems, Waseda University, 2-7 Hibikino, Wakamatsu-ku, Kitakyushu 808-0135, Japan
- Masumi Ishikawa
- Graduate School of Life Science and Systems Engineering, Kyushu Institute of Technology, 2-4 Hibikino, Wakamatsu-ku, Kitakyushu 808-0196, Japan
43
Fusiello A, Roberto V, Trucco E. Symmetric stereo with multiple windowing. Int J Pattern Recogn 2011. [DOI: 10.1142/s0218001400000696] [Citation(s) in RCA: 52]
Abstract
We present a new, efficient stereo algorithm addressing robust disparity estimation in the presence of occlusions. The algorithm is an adaptive, multi-window scheme that uses left–right consistency to compute disparity and its associated uncertainty. We demonstrate and discuss its performance on both synthetic and real stereo pairs, and show how our results improve on those of closely related techniques in both accuracy and efficiency.
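The left-right consistency idea the abstract relies on can be illustrated with a minimal one-scanline sketch: a left-image disparity survives only if the right-to-left disparity, sampled at the matched column, agrees within a tolerance. The 1-px tolerance and all names here are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def lr_consistency(disp_left, disp_right, tol=1):
    """Boolean mask over one scanline: True where the left-right check passes.
    Pixels whose match falls outside the image, or whose two disparity
    estimates disagree by more than tol, are flagged as unreliable/occluded."""
    w = len(disp_left)
    mask = np.zeros(w, dtype=bool)
    for x in range(w):
        xr = x - disp_left[x]                 # matched column in right image
        if 0 <= xr < w:
            mask[x] = abs(disp_left[x] - disp_right[xr]) <= tol
    return mask
```

Pixels failing the check are typically the half-occluded regions where the adaptive multi-window scheme then reports high uncertainty.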
Affiliation(s)
- Andrea Fusiello
- Machine Vision Laboratory, Department of Mathematics and Informatics, University of Udine, 33100 Udine, Italy
- Vito Roberto
- Machine Vision Laboratory, Department of Mathematics and Informatics, University of Udine, 33100 Udine, Italy
- Emanuele Trucco
- Department of Computing and Electrical Engineering, Heriot-Watt University, Edinburgh, EH14 4AS, UK
44
Mémin É, Risset T. On the study of VLSI derivation for optical flow estimation. Int J Pattern Recogn 2011. [DOI: 10.1142/s0218001400000295] [Citation(s) in RCA: 4]
Abstract
In this paper we study several ways to implement a realistic and efficient VLSI design for a gradient-based dense motion estimator. The estimator we focus on belongs to the class of differential methods: it is classically based on the optical flow constraint equation combined with a smoothness regularization term, and it incorporates robust cost functions to alleviate the influence of large residuals. The estimator is expressed as the minimization of a global energy function defined within an incremental formulation associated with a multiresolution setup. To make an efficient hardware implementation possible, we consider a modified minimization strategy. This new strategy is not only well suited to VLSI derivation but also very effective in terms of the quality of the result. The complete VLSI derivation is realized from high-level specifications.
45
Evangelidis GD, Psarakis EZ. An ECC-based iterative algorithm for photometric invariant projective registration. Int J Artif Intell T 2011. [DOI: 10.1142/s021821300900007x] [Citation(s) in RCA: 2]
Abstract
The ability of an algorithm to accurately estimate the parameters of the geometric transformation that aligns two image profiles, even in the presence of photometric distortions, is a basic requirement in many computer vision applications. Projective transformations constitute a general class that includes the affine and metric subclasses as special cases. In this paper, we investigate the applicability of a recently proposed iterative algorithm, which uses the Enhanced Correlation Coefficient as a performance criterion, to the projective image registration problem. The main theoretical results concerning the algorithm are presented. Furthermore, its performance in the presence of nonlinear photometric distortions is compared against the leading Lucas-Kanade algorithm and its simultaneous inverse compositional variant in a series of experiments involving strong and weak geometric deformations, ideal and noisy conditions, and even over-modelling of the warping process. Although under ideal conditions the proposed algorithm and the simultaneous inverse compositional algorithm perform similarly and both outperform the Lucas-Kanade algorithm, under noisy conditions the proposed algorithm outperforms both in convergence speed and accuracy, and exhibits robustness against photometric distortions.
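The Enhanced Correlation Coefficient criterion itself is just the normalized inner product of the zero-mean vectorizations of two patches, which is what makes it invariant to gain and bias photometric changes. A minimal sketch (function and variable names are illustrative):

```python
import numpy as np

def ecc(template, warped):
    """Enhanced Correlation Coefficient between two image patches:
    the normalized inner product of their zero-mean vectorizations.
    By construction it is invariant to gain/bias photometric changes."""
    t = template.astype(float).ravel()
    w = warped.astype(float).ravel()
    t -= t.mean()
    w -= w.mean()
    return float(t @ w / (np.linalg.norm(t) * np.linalg.norm(w)))
```

OpenCV ships an implementation of this alignment scheme as `cv2.findTransformECC`, which iteratively maximizes this criterion over a chosen warp model (translation, affine, or homography).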
Affiliation(s)
- Georgios D. Evangelidis
- Department of Computer Engineering and Informatics, University of Patras, Rio, 26504, Greece
- Emmanouil Z. Psarakis
- Department of Computer Engineering and Informatics, University of Patras, Rio, 26504, Greece
46
Ou Y, Sotiras A, Paragios N, Davatzikos C. DRAMMS: Deformable registration via attribute matching and mutual-saliency weighting. Med Image Anal 2011; 15:622-39. [PMID: 20688559] [PMCID: PMC3012150] [DOI: 10.1016/j.media.2010.07.002] [Citation(s) in RCA: 250] [Received: 10/31/2009] [Revised: 06/19/2010] [Accepted: 07/06/2010]
Abstract
A general-purpose deformable registration algorithm, referred to as "DRAMMS", is presented in this paper. DRAMMS bridges the gap between traditional voxel-wise methods and landmark/feature-based methods through two main contributions. First, DRAMMS makes each voxel distinctively identifiable by a rich set of attributes, thereby largely reducing matching ambiguities. In particular, a set of multi-scale and multi-orientation Gabor attributes is extracted, and the optimal components are selected so that they form a highly distinctive morphological signature reflecting the anatomical and geometric context around each voxel. Moreover, the way in which the optimal Gabor attributes are constructed is independent of the underlying image modalities or contents, which makes DRAMMS generally applicable to diverse registration tasks. Second, DRAMMS modulates the registration by assigning higher weights to voxels that are better able to establish unique (hence reliable) correspondences across images, thereby reducing the negative impact of regions less capable of finding correspondences (such as outlier regions). A continuously-valued weighting function named "mutual-saliency" is developed to reflect the matching uniqueness between a pair of voxels implied by the tentative transformation. As a result, voxels do not contribute equally, as in most voxel-wise methods, nor in isolation, as in landmark/feature-based methods; instead, they contribute according to the continuously-valued mutual-saliency map, which dynamically evolves during the registration process. Experiments on simulated images, inter-subject images, and single-/multi-modality images of the brain, heart, and prostate have demonstrated the general applicability and accuracy of DRAMMS.
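The multi-scale, multi-orientation Gabor attributes can be illustrated with a small filter bank; filtering an image with each kernel yields one attribute per (scale, orientation) pair at every voxel. The kernel size, scales, and wavelengths below are arbitrary illustrative choices, not DRAMMS's actual parameters or its attribute-selection step.

```python
import numpy as np

def gabor_kernel(size, sigma, theta, lam):
    """Real part of a 2-D Gabor kernel: a Gaussian window multiplied by a
    cosine grating of wavelength lam oriented at angle theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # coordinate along the grating
    return (np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
            * np.cos(2.0 * np.pi * xr / lam))

# A small multi-scale, multi-orientation bank: 2 scales x 4 orientations.
bank = [gabor_kernel(9, s, t, 2.0 * s)
        for s in (1.0, 2.0)
        for t in np.linspace(0.0, np.pi, 4, endpoint=False)]
```

Convolving an image with each kernel in `bank` produces the per-voxel attribute vector; DRAMMS then selects the most distinctive components of such responses.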
Affiliation(s)
- Yangming Ou
- Section of Biomedical Image Analysis, University of Pennsylvania, 3600 Market St., Ste 380, Philadelphia, PA 19104, USA.
47
Jagannathan S, Horn BKP, Ratilal P, Makris NC. Force estimation and prediction from time-varying density images. IEEE Trans Pattern Anal Mach Intell 2011; 33:1132-1146. [PMID: 20921583] [DOI: 10.1109/tpami.2010.185] [Citation(s) in RCA: 2]
Abstract
We present methods for estimating forces which drive motion observed in density image sequences. Using these forces, we also present methods for predicting velocity and density evolution. To do this, we formulate and apply a Minimum Energy Flow (MEF) method which is capable of estimating both incompressible and compressible flows from time-varying density images. Both the MEF and force-estimation techniques are applied to experimentally obtained density images, spanning spatial scales from micrometers to several kilometers. Using density image sequences describing cell splitting, for example, we show that cell division is driven by gradients in apparent pressure within a cell. Using density image sequences of fish shoals, we also quantify 1) intershoal dynamics such as coalescence of fish groups over tens of kilometers, 2) fish mass flow between different parts of a large shoal, and 3) the stresses acting on large fish shoals.
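Flow estimation from density images rests on the continuity equation ∂ρ/∂t + ∇·(ρv) = 0. A toy 1-D illustration (not the paper's MEF energy minimization; the grid, pulse width, and names are assumptions) recovers a known advection velocity from two density snapshots:

```python
import numpy as np

# Toy 1-D velocity estimation from time-varying density via the continuity
# equation d(rho)/dt + d(rho*v)/dx = 0: integrate the time derivative to get
# the flux q(x), then divide by the density to recover v. Grid and pulse
# parameters are arbitrary illustrative choices.
dx, dt, v_true = 0.01, 1e-3, 1.0
x = np.arange(0.0, 10.0, dx)
pulse = lambda c: np.exp(-0.5 * ((x - c) / 0.5) ** 2)  # Gaussian density blob
rho1, rho2 = pulse(5.0), pulse(5.0 + v_true * dt)      # blob advected by v*dt

drho_dt = (rho2 - rho1) / dt
flux = -np.cumsum(drho_dt) * dx      # q(x) = -integral of drho/dt; q(0) ~ 0
rho_mid = 0.5 * (rho1 + rho2)
i = len(x) // 2                      # evaluate at the blob centre, x = 5
v_est = flux[i] / rho_mid[i]
```

The recovered `v_est` is close to the imposed `v_true`; the paper's MEF formulation generalizes this idea to 2-D compressible flows via energy minimization.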
Affiliation(s)
- Srinivasan Jagannathan
- Department of Mechanical Engineering, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139, USA.
48
A novel robust kernel for visual learning problems. Neurocomputing 2011. [DOI: 10.1016/j.neucom.2010.09.009] [Citation(s) in RCA: 0]
49

50
A local algorithm for the computation of image velocity via constructive interference of global Fourier components. Int J Comput Vis 2010. [DOI: 10.1007/s11263-010-0402-2] [Citation(s) in RCA: 0]