1. A Hybrid Visual Tracking Algorithm Based on SOM Network and Correlation Filter. Sensors 2021; 21:2864. PMID: 33921720; PMCID: PMC8072667; DOI: 10.3390/s21082864. Received 12/27/2020; revised 04/02/2021; accepted 04/15/2021.
Abstract
To meet the challenges of video target tracking, a long-term visual tracking algorithm based on a self-organizing map (SOM) network and correlation filters is proposed. Objects in different videos or images often have completely different appearances; therefore, a self-organizing map neural network, which mimics the signal-processing mechanism of human brain neurons, is used to perform adaptive, unsupervised feature learning. At the same time, a reliable and robust target tracking method is proposed, based on multiple adaptive correlation filters with a memory function for the target appearance. The filters in our method have different updating strategies and cooperate to achieve long-term tracking. The first is the displacement filter, a kernelized correlation filter that combines contextual characteristics to precisely locate and track targets. Second, scale filters are used to predict changes in target scale. Finally, a memory filter is used to hold the target appearance in long-term memory and to judge whether tracking has failed. If tracking fails, an incremental learning detector recovers the target in a sliding-window fashion. Several experiments show that our method effectively handles tracking problems such as severe occlusion, target loss, and scale change, and outperforms state-of-the-art methods in efficiency, accuracy, and robustness.
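The displacement filter described in this abstract is, at its core, a ridge-regression correlation filter learned in the Fourier domain. The following is a minimal single-channel MOSSE/KCF-style sketch of that core mechanism only, not the paper's multi-filter implementation; all function names and the regularization value are our own illustrative choices.

```python
import numpy as np

def gaussian_peak(h, w, cy, cx, sigma=2.0):
    """Desired response: a 2-D Gaussian centred on the target position."""
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))

def train_filter(patch, response, lam=1e-4):
    """Closed-form ridge-regression filter in the Fourier domain."""
    F = np.fft.fft2(patch)
    G = np.fft.fft2(response)
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def detect(filt, patch):
    """Correlate a new patch with the filter; the argmax locates the target."""
    resp = np.real(np.fft.ifft2(filt * np.fft.fft2(patch)))
    return np.unravel_index(np.argmax(resp), resp.shape), resp
```

In a full tracker this train/detect pair would run per frame, with the multi-channel features and update schedules the paper describes layered on top.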
2. Learning Local-Global Multiple Correlation Filters for Robust Visual Tracking with Kalman Filter Redetection. Sensors 2021; 21:1129. PMID: 33562878; PMCID: PMC7915654; DOI: 10.3390/s21041129. Received 01/06/2021; revised 01/25/2021; accepted 02/01/2021.
Abstract
Visual object tracking is a significant technology for camera-based sensor network applications. Multilayer convolutional features used in correlation filter (CF)-based tracking algorithms have achieved excellent performance. However, tracking fails in some challenging situations because ordinary features cannot represent variations in object appearance well and the correlation filters are updated irrationally. In this paper, we propose a local-global multiple correlation filters (LGCF) tracking algorithm for edge computing systems that capture moving targets such as vehicles and pedestrians. First, we construct a global correlation filter model with deep convolutional features, and choose a horizontal or vertical division, according to the aspect ratio, to build two local filters with hand-crafted features. Then, we propose a local-global collaborative strategy to exchange information between the local and global correlation filters; this strategy prevents the object appearance model from being learned incorrectly. Finally, we propose a time-space peak-to-sidelobe ratio (TSPSR) to evaluate the stability of the current CF. When the estimates of the current CF are unreliable, the Kalman filter redetection (KFR) model is enabled to recapture the object. The experimental results show that the presented algorithm achieves better performance on OTB-2013 and OTB-2015 than 12 other recent tracking algorithms, and handles the various challenges in object tracking well.
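TSPSR extends the standard peak-to-sidelobe ratio (PSR), a common confidence measure for correlation response maps. A minimal sketch of the plain PSR follows; the exclusion window size and the epsilon guard are illustrative choices of ours, not values from the paper.

```python
import numpy as np

def psr(resp, exclude=5):
    """Peak-to-sidelobe ratio of a correlation response map.
    The sidelobe is everything outside a small window around the peak;
    a sharp, isolated peak yields a high PSR (confident detection)."""
    peak_idx = np.unravel_index(np.argmax(resp), resp.shape)
    peak = resp[peak_idx]
    mask = np.ones_like(resp, dtype=bool)
    y, x = peak_idx
    mask[max(0, y - exclude):y + exclude + 1,
         max(0, x - exclude):x + exclude + 1] = False
    side = resp[mask]
    return (peak - side.mean()) / (side.std() + 1e-12)
```

A tracker would compare this score against a threshold to decide whether the current estimate is trustworthy or whether redetection should take over.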
3. Robust Visual Tracking with Reliable Object Information and Kalman Filter. Sensors 2021; 21:889. PMID: 33525624; PMCID: PMC7865692; DOI: 10.3390/s21030889. Received 12/31/2020; revised 01/22/2021; accepted 01/25/2021.
Abstract
Object information significantly affects the performance of visual tracking. However, it is difficult to obtain accurate target foreground information because of the existence of challenging scenarios, such as occlusion, background clutter, drastic change of appearance, and so forth. Traditional correlation filter methods roughly use linear interpolation to update the model, which may lead to the introduction of noise and the loss of reliable target information, resulting in the degradation of tracking performance. In this paper, we propose a novel robust visual tracking framework with reliable object information and Kalman filter (KF). Firstly, we analyze the reliability of the tracking process, calculate the confidence of the target information at the current estimated location, and determine whether it is necessary to carry out the online training and update step. Secondly, we also model the target motion between frames with a KF module, and use it to supplement the correlation filter estimation. Finally, in order to keep the most reliable target information of the first frame in the whole tracking process, we propose a new online training method, which can improve the robustness of the tracker. Extensive experiments on several benchmarks demonstrate the effectiveness and robustness of our proposed method, and our method achieves a comparable or better performance compared with several other state-of-the-art trackers.
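The KF module described above can be sketched as a textbook constant-velocity Kalman filter over the target centre. This is a generic illustration rather than the paper's exact motion model; the process and measurement noise levels q and r are assumed values.

```python
import numpy as np

class KalmanCV:
    """Constant-velocity Kalman filter for a target centre (x, y).
    State: [x, y, vx, vy]; measurement: [x, y]."""

    def __init__(self, x, y, q=1e-2, r=1.0):
        self.s = np.array([x, y, 0.0, 0.0])   # initial state, zero velocity
        self.P = np.eye(4)                     # state covariance
        self.F = np.eye(4)                     # transition: x += vx, y += vy
        self.F[0, 2] = self.F[1, 3] = 1.0
        self.H = np.eye(2, 4)                  # we observe position only
        self.Q = q * np.eye(4)                 # process noise (assumed)
        self.R = r * np.eye(2)                 # measurement noise (assumed)

    def predict(self):
        self.s = self.F @ self.s
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.s[:2]

    def update(self, zx, zy):
        z = np.array([zx, zy])
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)      # Kalman gain
        self.s = self.s + K @ (z - self.H @ self.s)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.s[:2]
```

In a tracker like the one above, `predict()` supplies a motion prior when the correlation filter's estimate is deemed unreliable.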
4. Efficient and Practical Correlation Filter Tracking. Sensors 2021; 21:790. PMID: 33503940; PMCID: PMC7865341; DOI: 10.3390/s21030790. Received 12/18/2020; revised 01/05/2021; accepted 01/20/2021.
Abstract
Visual tracking is a basic task in many applications. However, the heavy computation and low speed of many recent trackers limit their use in scenarios with restricted computing power. On the other hand, the simple update scheme of most correlation filter-based trackers limits their robustness during target deformation and occlusion. In this paper, we explore the update scheme of correlation filter-based trackers and propose an efficient, adaptive training-sample update scheme. The training sample extracted in each frame is added to the training set according to its distance from existing samples, measured with a difference hashing algorithm, or discarded according to the reliability of the tracking result. In addition, we extend the new tracker to long-term tracking: on the basis of the proposed model updating mechanism, we propose a tracking-state discrimination mechanism to accurately judge tracking failure and to resume tracking after the target is recovered. Experiments on OTB-2015, Temple Color 128, and UAV123 (including UAV20L) demonstrate that our tracker performs favorably against state-of-the-art trackers with light computation, and runs at over 100 fps on a desktop computer with an Intel i7-8700 CPU (3.2 GHz).
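Difference hashing, used above to measure the distance between training samples, reduces a patch to a small binary fingerprint of local intensity gradients, so sample similarity becomes a cheap Hamming distance. A minimal sketch follows; the crude nearest-neighbour downscaling and the 8x8 hash size are our simplifications, not the paper's exact pipeline.

```python
import numpy as np

def dhash(img, hash_size=8):
    """Difference hash: compare horizontally adjacent pixels of a
    downscaled grayscale patch; returns hash_size*hash_size bits."""
    h, w = img.shape
    # crude nearest-neighbour resize to (hash_size, hash_size + 1)
    ys = np.arange(hash_size) * h // hash_size
    xs = np.arange(hash_size + 1) * w // (hash_size + 1)
    small = img[np.ix_(ys, xs)]
    return (small[:, 1:] > small[:, :-1]).ravel()

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return int(np.count_nonzero(h1 != h2))
```

A sample whose hash sits too close to an existing one adds little diversity and can be skipped; a distant hash signals a genuinely new appearance worth storing.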
5. Robust Visual Tracking Based on Adaptive Multi-Feature Fusion Using the Tracking Reliability Criterion. Sensors 2020; 20:7165. PMID: 33327523; PMCID: PMC7764914; DOI: 10.3390/s20247165. Received 10/25/2020; revised 12/08/2020; accepted 12/12/2020.
Abstract
Multi-resolution feature fusion DCF (discriminative correlation filter) methods have significantly advanced object tracking performance. However, careless choice and fusion of sample features make such algorithms susceptible to interference, leading to tracking failure. Some trackers embed a re-detection module to remedy tracking failures, yet the distinguishing ability and stability of the sample features are scarcely considered when training the detector, resulting in ineffective detection. This paper first proposes a criterion of feature tracking reliability and constructs a novel adaptive feature fusion framework; the reliability criterion evaluates the robustness and distinguishing ability of the sample features. Second, a re-detection module is proposed to further avoid tracking failures and increase the accuracy of target re-detection. The module consists of multiple SVM detectors trained on different sample features; when tracking fails, the SVM detector trained on the most reliable sample feature is activated to recover the target and adjust the target position. Finally, comparison experiments on the OTB2015 and UAV123 databases demonstrate the accuracy and robustness of the proposed method.
6. LPCF: Robust Correlation Tracking via Locality Preserving Tracking Validation. Sensors 2020; 20:6853. PMID: 33266108; PMCID: PMC7731162; DOI: 10.3390/s20236853. Received 10/07/2020; revised 11/15/2020; accepted 11/26/2020.
Abstract
In visual tracking, the tracking model must be updated online, which often leads to the undesired inclusion of corrupted training samples and hence induces tracking failure. We present a locality preserving correlation filter (LPCF) integrating a novel and generic decontamination approach, which mitigates the model drift problem. Our decontamination approach maintains the local neighborhood structure of feature points around the bounding box center; this tracking-result validation models not only the spatial neighborhood relationships but also the topological structure of the bounding box center. Additionally, a closed-form solution to our approach is derived, so the tracking-result validation can be accomplished in milliseconds. Moreover, a dimensionality reduction strategy is introduced to improve the real-time performance of our translation estimation component. Comprehensive experiments are performed on OTB-2015, LaSOT, and TrackingNet. The results show that our decontamination approach improves overall performance by 6.2%, 12.6%, and 3%, respectively, while our complete algorithm improves the baseline by 27.8%, 34.8%, and 15%. Finally, our tracker achieves the best performance among existing decontamination trackers under the real-time requirement.
7. SNS-CF: Siamese Network with Spatially Semantic Correlation Features for Object Tracking. Sensors 2020; 20:4881. PMID: 32872299; PMCID: PMC7506687; DOI: 10.3390/s20174881. Received 06/10/2020; revised 08/24/2020; accepted 08/25/2020.
Abstract
Recent advances in object tracking based on deep Siamese networks have shifted attention away from correlation filters. However, a Siamese network alone does not reach the accuracy of state-of-the-art correlation filter-based trackers, whereas correlation filter-based trackers alone suffer from a frame update problem. In this paper, we present a Siamese network with spatially semantic correlation features (SNS-CF) for accurate, robust object tracking. To deal with the various types of features spread across many regions of the input frame, the proposed SNS-CF consists of (1) a Siamese feature extractor, (2) a spatially semantic feature extractor, and (3) an adaptive correlation filter. To the best of the authors' knowledge, SNS-CF is the first attempt to fuse a Siamese network and a correlation filter to provide high-frame-rate, real-time visual tracking with performance favorable to state-of-the-art methods on multiple benchmarks.
8. A Scale-Adaptive Matching Algorithm for Underwater Acoustic and Optical Images. Sensors 2020; 20:4226. PMID: 32751338; PMCID: PMC7435728; DOI: 10.3390/s20154226. Received 06/12/2020; revised 07/05/2020; accepted 07/27/2020.
Abstract
Underwater acoustic and optical data fusion has developed considerably in recent decades. Matching underwater acoustic and optical images is a fundamental problem in underwater exploration because it is a key step in many applications, such as target detection, ocean observation, and joint positioning. In this study, a two-step method for matching the same underwater object in acoustic and optical images was designed. First, an enhancement step, based on iterative processing and similarity estimation, improves the images and ensures the accuracy of the matching results: the acoustic and optical images are pre-processed to eliminate the influence of contrast degradation, contour blur, and image noise, and a new similarity estimation method for acoustic and optical images is proposed to assess the enhancement effect. Second, a matching step accurately finds, in the acoustic image, the object that appears in the underwater optical image. In the matching process, a correlation filter determines the correlation between images. Because of the differences in viewing angle and imaging principle between underwater optical and acoustic images, the same object may differ greatly in size between the two images. To eliminate the effect of these differences, we introduce a Gaussian scale-space, fused with multi-scale detection, to determine the matching results; the algorithm is therefore insensitive to scale differences. Extensive experiments demonstrate the effectiveness and accuracy of our proposed method in matching acoustic and optical images.
9. Real-Time Object Tracking via Adaptive Correlation Filters. Sensors 2020; 20:4124. PMID: 32722140; PMCID: PMC7435421; DOI: 10.3390/s20154124. Received 05/24/2020; revised 07/18/2020; accepted 07/21/2020.
Abstract
Although correlation filter-based trackers (CFTs) have made great achievements in both robustness and accuracy, performance can still be improved, because most existing trackers use either a single filter template or fixed feature fusion weights to represent a target. Herein, a real-time dual-template CFT for various challenging scenarios is proposed. First, color histogram, histogram of oriented gradients (HOG), and color naming (CN) features are extracted from the target image patch. Then, the dual template is utilized based on the target response confidence. Meanwhile, to handle the appearance variations of complicated scenarios, a discriminative appearance model, multi-peak target re-detection, and scale adaptation are integrated into the proposed tracker. Furthermore, the problem of filter model drift or corruption is addressed with a high-confidence template updating technique. In the experiments, 27 existing competitors, including 16 handcrafted feature-based trackers (HFTs) and 11 deep feature-based trackers (DFTs), are introduced for a comprehensive comparative analysis on four benchmark databases. The experimental results demonstrate that the proposed tracker performs favorably against state-of-the-art HFTs and is comparable with the DFTs.
10. Real-Time Visual Tracking with Variational Structure Attention Network. Sensors 2019; 19:4904. PMID: 31717609; PMCID: PMC6891527; DOI: 10.3390/s19224904. Received 10/07/2019; revised 11/05/2019; accepted 11/06/2019.
Abstract
Online training frameworks based on discriminative correlation filters for visual tracking have recently shown significant improvement in both accuracy and speed. However, correlation filter-based discriminative approaches share a common problem: tracking performance degrades when the local structure of a target is distorted by the boundary effect. The shape distortion of the target is mainly caused by the circulant structure of the Fourier-domain processing, which makes the correlation filter learn from distorted training samples. In this paper, we present a structure-attention network that preserves the target structure against the distortion caused by the boundary effect. More specifically, we adopt a variational auto-encoder as the structure-attention network to generate varied and representative target structures. We also propose two denoising criteria, using a novel reconstruction loss for the variational auto-encoding framework, to capture robust structures even under boundary conditions. Through the proposed structure-attention framework, discriminative correlation filters can learn robust structural information about targets during online training, with enhanced discriminative performance and adaptability. Experimental results on major visual tracking benchmark datasets show that the proposed method produces better or comparable performance compared with state-of-the-art tracking methods, at a real-time processing speed of more than 80 frames per second.
11. Robust Event-Based Object Tracking Combining Correlation Filter and CNN Representation. Front Neurorobot 2019; 13:82. PMID: 31649524; PMCID: PMC6795673; DOI: 10.3389/fnbot.2019.00082. Received 03/25/2019; accepted 09/20/2019.
Abstract
Object tracking based on the event-based camera or dynamic vision sensor (DVS) remains a challenging task due to the noise events, rapid change of event-stream shape, chaos of complex background textures, and occlusion. To address the challenges, this paper presents a robust event-stream object tracking method based on correlation filter mechanism and convolutional neural network (CNN) representation. In the proposed method, rate coding is used to encode the event-stream object. Feature representations from hierarchical convolutional layers of a pre-trained CNN are used to represent the appearance of the rate encoded event-stream object. Results prove that the proposed method not only achieves good tracking performance in many complicated scenes with noise events, complex background textures, occlusion, and intersected trajectories, but also is robust to variable scale, variable pose, and non-rigid deformations. In addition, the correlation filter-based method has the advantage of high speed. The proposed approach will promote the potential applications of these event-based vision sensors in autonomous driving, robots and many other high-speed scenes.
12. Robust Visual Tracking Using Structural Patch Response Map Fusion Based on Complementary Correlation Filter and Color Histogram. Sensors 2019; 19:4178. PMID: 31561565; PMCID: PMC6806098; DOI: 10.3390/s19194178. Received 08/10/2019; revised 09/18/2019; accepted 09/23/2019.
Abstract
A part-based strategy has been applied to visual tracking with demonstrated success in recent years. Different from most existing part-based methods that only employ one type of tracking representation model, in this paper, we propose an effective complementary tracker based on structural patch response fusion under correlation filter and color histogram models. The proposed method includes two component trackers with complementary merits to adaptively handle illumination variation and deformation. To identify and take full advantage of reliable patches, we present an adaptive hedge algorithm to hedge the responses of patches into a more credible one in each component tracker. In addition, we design different loss metrics of tracked patches in two components to be applied in the proposed hedge algorithm. Finally, we selectively combine the two component trackers at the response maps level with different merging factors according to the confidence of each component tracker. Extensive experimental evaluations on OTB2013, OTB2015, and VOT2016 datasets show outstanding performance of the proposed algorithm contrasted with some state-of-the-art trackers.
13. Improved Correlation Filter Tracking with Enhanced Features and Adaptive Kalman Filter. Sensors 2019; 19:1625. PMID: 30987414; PMCID: PMC6479297; DOI: 10.3390/s19071625. Received 02/17/2019; revised 03/27/2019; accepted 04/01/2019.
Abstract
In the field of visual tracking, discriminative correlation filter (DCF)-based trackers have made remarkable achievements thanks to their high computational efficiency. The crucial remaining challenges are how to construct qualified samples without boundary effects and how to redetect occluded targets. In this paper, a feature-enhanced discriminative correlation filter (FEDCF) tracker is proposed, which utilizes a color statistical model to strengthen texture features (such as histograms of oriented gradients, HOG) and uses a spatial-prior function to suppress boundary effects. Improved correlation filters using the enhanced features are then built, whose objective functions can be solved effectively by Gauss-Seidel iteration. In addition, the average peak-response difference (APRD) is proposed to reflect the degree of target occlusion from the target response, and an adaptive Kalman filter is established to support target redetection. The proposed tracker achieved a success-plot performance of 67.8% at 5.1 fps on the standard OTB2013 dataset.
14. A Self-Selective Correlation Ship Tracking Method for Smart Ocean Systems. Sensors 2019; 19:821. PMID: 30781563; PMCID: PMC6412977; DOI: 10.3390/s19040821. Received 12/29/2018; revised 02/13/2019; accepted 02/14/2019.
Abstract
In recent years, with the development of the marine industry, the ship navigation environment has become more complicated. Artificial intelligence technologies such as computer vision can recognize, track, and count sailing ships to ensure maritime security and facilitate management in Smart Ocean systems. Aiming at the scaling and boundary effect problems of traditional correlation filtering methods, we propose a self-selective correlation filtering method based on box regression (BRCF). The proposed method mainly includes: (1) a self-selective model with a negative-sample mining method, which effectively reduces the boundary effect while strengthening the classification ability of the classifier; and (2) a bounding box regression method combined with key-point matching for scale prediction, leading to fast and efficient calculation. The experimental results show that the proposed method can effectively deal with ship size changes and background interference. The success rates and precisions were over 8% higher than those of Discriminative Scale Space Tracking (DSST) on our laboratory's marine traffic dataset, and the proposed method runs nearly 22 frames per second (FPS) faster than DSST.
15. Object Tracking Algorithm Based on Dual Color Feature Fusion with Dimension Reduction. Sensors 2018; 19:73. PMID: 30585239; PMCID: PMC6338958; DOI: 10.3390/s19010073. Received 11/13/2018; revised 12/20/2018; accepted 12/20/2018.
Abstract
Single color features provide poor robustness and low effectiveness for target tracking in complex scenes. To address this, an object-tracking algorithm based on dual color feature fusion with dimension reduction is proposed within the correlation filter (CF)-based tracking framework. First, Color Name (CN) and Color Histogram (CH) features are extracted from the input image; the template and the candidate region are then correlated by CF-based methods to obtain the CH response and the CN response of the target region. A self-adaptive feature fusion strategy linearly fuses the CH response and the CN response into a dual color feature response carrying both global color distribution information and dominant color information. Finally, the target position is estimated from the fused response map, with its maximum corresponding to the estimated target position. The proposed method performs the fusion within the framework of the Staple algorithm and applies dimension reduction by principal component analysis (PCA) to the scale estimation, reducing the complexity of the algorithm and further improving tracking performance. Quantitative and qualitative evaluations on challenging benchmark sequences show that the proposed algorithm has better tracking accuracy and robustness than other state-of-the-art tracking algorithms in complex scenarios.
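The linear response fusion described in this abstract can be sketched as below. The peak-based adaptive weight is our stand-in for the paper's self-adaptive strategy, not the published formula; `resp_ch` and `resp_cn` stand for the CH and CN response maps.

```python
import numpy as np

def fuse_responses(resp_ch, resp_cn, gamma=None):
    """Linearly fuse two response maps. If gamma is None, weight each map
    by its own peak value (an illustrative stand-in for an adaptive rule)."""
    if gamma is None:
        p1, p2 = resp_ch.max(), resp_cn.max()
        gamma = p1 / (p1 + p2 + 1e-12)
    fused = gamma * resp_ch + (1.0 - gamma) * resp_cn
    loc = np.unravel_index(np.argmax(fused), fused.shape)
    return fused, loc
```

When both maps agree on the target location, the fused peak reinforces it; when they disagree, the weight decides which cue dominates.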
16. Unmanned Aerial Vehicle Object Tracking by Correlation Filter with Adaptive Appearance Model. Sensors 2018; 18:2751. PMID: 30134621; PMCID: PMC6163504; DOI: 10.3390/s18092751. Received 06/28/2018; revised 08/13/2018; accepted 08/13/2018.
Abstract
With the increasing availability of low-cost, commercially available unmanned aerial vehicles (UAVs), visual tracking using UAVs has become more and more important due to its many new applications, including automatic navigation, obstacle avoidance, traffic monitoring, and search and rescue. However, real-world aerial tracking poses many challenges due to platform motion and image instability, such as aspect ratio change, viewpoint change, fast motion, and scale variation. In this paper, an efficient object tracking method for UAV videos is proposed to tackle these challenges. We construct fused features to capture gradient information and color characteristics simultaneously. Furthermore, a cellular automaton is introduced to update the appearance template of the target accurately and sparsely. In particular, a high-confidence model updating strategy is developed according to a stability function. Systematic comparative evaluations on the popular UAV123 dataset show the efficiency of the proposed approach.
17. Online Model Updating and Dynamic Learning Rate-Based Robust Object Tracking. Sensors 2018; 18:2046. PMID: 29949950; PMCID: PMC6068913; DOI: 10.3390/s18072046. Received 04/26/2018; revised 06/08/2018; accepted 06/25/2018.
Abstract
Robust visual tracking is a significant and challenging issue in computer vision and has attracted immense attention from researchers, with many studies introducing numerous algorithms for various practical applications. It is considered a challenging problem because of the unpredictability of real-time situations, such as illumination variation, occlusion, fast motion, deformation, and scale variation, given that only the initial target position is known. To address these matters, we use a kernelized-correlation-filter-based translation filter integrating multiple features, such as histograms of oriented gradients (HOG) and color attributes. These powerful features help differentiate the target from the surrounding background and are effective under motion blur and illumination variation. To minimize the scale variation problem, we design a correlation-filter-based scale filter. The proposed adaptive model updating and dynamic learning-rate strategies, based on a peak-to-sidelobe ratio, effectively reduce model drift by avoiding noisy appearance changes. The experimental results show that our method provides the best performance compared with other methods, with a distance precision score of 79.9%, an overlap success score of 59.0%, and an average running speed of 74 frames per second on the object tracking benchmark (OTB-2015).
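The dynamic learning-rate idea, scaling the usual linear-interpolation model update by a confidence derived from the peak-to-sidelobe ratio, can be sketched as follows. The thresholds and the maximum rate are illustrative assumptions, not the paper's values.

```python
import numpy as np

def update_model(model, new_obs, psr_value,
                 psr_lo=5.0, psr_hi=10.0, eta_max=0.02):
    """Linear-interpolation model update with a confidence-scaled rate:
    below psr_lo the update is skipped entirely, above psr_hi the full
    rate eta_max is used, and in between the rate ramps linearly."""
    conf = np.clip((psr_value - psr_lo) / (psr_hi - psr_lo), 0.0, 1.0)
    eta = eta_max * conf
    return (1.0 - eta) * model + eta * new_obs
```

Skipping the update at low confidence keeps occluded or blurred frames from contaminating the appearance model, which is the drift-avoidance behaviour the abstract describes.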
18. Multi-Object Tracking with Correlation Filter for Autonomous Vehicle. Sensors 2018; 18:2004. PMID: 29932136; PMCID: PMC6068606; DOI: 10.3390/s18072004. Received 05/12/2018; revised 06/17/2018; accepted 06/18/2018.
Abstract
Multi-object tracking is a crucial problem for autonomous vehicles. Most state-of-the-art approaches adopt the tracking-by-detection strategy, a two-step procedure consisting of a detection module and a tracking module. In this paper, we improve both steps. We improve the detection module by incorporating temporal information, which is beneficial for detecting small objects. For the tracking module, we propose a novel correlation filter tracker based on compressed deep convolutional neural network (CNN) features. By carefully integrating these two modules, the proposed multi-object tracking approach can re-identify (ReID) a tracked object once it is lost. Extensive experiments were performed on the KITTI and MOT2015 tracking benchmarks. The results indicate that our approach outperforms most state-of-the-art tracking approaches.
19. Efficient Two-Pass 3-D Speckle Tracking for Ultrasound Imaging. IEEE Access 2018; 6:17415-17428. PMID: 30740286; PMCID: PMC6365000; DOI: 10.1109/access.2018.2815522.
Abstract
Speckle tracking based on block matching is the most common method for multi-dimensional motion estimation in ultrasound elasticity imaging. Extension of two-dimensional (2-D) methods to three dimensions (3-D) has been problematic because of the large computational load of 3-D tracking, as well as performance issues related to the low frame (volume) rates of 3-D images. To address both of these problems, we have developed an efficient two-pass tracking method suited to cardiac elasticity imaging. PatchMatch, originally developed for image editing, has been adapted for ultrasound to provide first-pass displacement estimates. Second-pass estimation uses conventional block matching within a much smaller search region. 3-D displacements are then obtained using correlation filtering previously shown to be effective against speckle decorrelation. Both simulated and in vivo canine cardiac results demonstrate that the proposed two-pass method reduces computational cost compared to conventional 3-D exhaustive search by a factor of 10. Moreover, it outperforms one-pass tracking by a factor of about 3 in terms of root-mean-square error relative to available ground-truth displacements.
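The second-pass refinement described above can be sketched as exhaustive SSD block matching in a small window around a first-pass displacement guess (here a hard-coded guess stands in for the PatchMatch estimate; block and window sizes are illustrative, and the example is 2-D rather than 3-D).

```python
import numpy as np

def block_match(ref, tgt, center, half_block=4, search=2, guess=(0, 0)):
    """Refine a coarse displacement guess by exhaustive SSD search
    in a small window around it (second pass of the two-pass scheme)."""
    cy, cx = center
    gy, gx = guess
    block = ref[cy - half_block:cy + half_block, cx - half_block:cx + half_block]
    best, best_d = np.inf, (0, 0)
    for dy in range(gy - search, gy + search + 1):
        for dx in range(gx - search, gx + search + 1):
            cand = tgt[cy + dy - half_block:cy + dy + half_block,
                       cx + dx - half_block:cx + dx + half_block]
            ssd = np.sum((block - cand) ** 2)
            if ssd < best:
                best, best_d = ssd, (dy, dx)
    return best_d

rng = np.random.default_rng(2)
ref = rng.normal(size=(64, 64))
true_shift = (3, -2)
tgt = np.roll(ref, true_shift, axis=(0, 1))
# First pass gave a rough guess near the truth; second pass recovers it.
est = block_match(ref, tgt, center=(32, 32), guess=(2, -1))
```

The small search window (here ±2 samples) is exactly what makes the second pass cheap compared to an exhaustive full-range search.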
|
20
|
Consistently Sampled Correlation Filters with Space Anisotropic Regularization for Visual Tracking. SENSORS 2017; 17:s17122889. [PMID: 29231876 PMCID: PMC5750837 DOI: 10.3390/s17122889] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/01/2017] [Revised: 12/04/2017] [Accepted: 12/11/2017] [Indexed: 11/16/2022]
Abstract
Most existing correlation filter-based tracking algorithms, which use fixed patches and cyclic shifts as training and detection measures, assume that the training samples are reliable and ignore the inconsistencies between training samples and detection samples. We propose to construct and study a consistently sampled correlation filter with space anisotropic regularization (CSSAR) to solve these two problems simultaneously. Our approach constructs a spatiotemporally consistent sampling strategy that alleviates the redundancy in training samples caused by cyclic shifts and eliminates the inconsistencies between training and detection samples, and it introduces space anisotropic regularization to constrain the correlation filter and alleviate drift caused by occlusion. Moreover, an optimization strategy based on the Gauss-Seidel method was developed for robust and efficient online learning. Both qualitative and quantitative evaluations demonstrate that our tracker outperforms state-of-the-art trackers on object tracking benchmarks (OTBs).
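The Gauss-Seidel iteration used for the online learning can be illustrated on a toy dense, diagonally dominant system (the paper applies it to the regularized filter equations, not to a generic matrix as here).

```python
import numpy as np

def gauss_seidel(A, b, iters=200):
    """Solve A x = b by Gauss-Seidel sweeps: each unknown is updated
    in place using the most recent values of the others."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            s = A[i] @ x - A[i, i] * x[i]   # contribution of all other unknowns
            x[i] = (b[i] - s) / A[i, i]
    return x

# Diagonal dominance guarantees convergence of the sweeps.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 5.0, 2.0],
              [0.0, 2.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])
x = gauss_seidel(A, b)
```

Because each sweep reuses freshly updated components, Gauss-Seidel typically converges faster than Jacobi on such systems, which is why it suits per-frame online updates.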
|
21
|
Robust Scale Adaptive Tracking by Combining Correlation Filters with Sequential Monte Carlo. SENSORS 2017; 17:s17030512. [PMID: 28273840 PMCID: PMC5375798 DOI: 10.3390/s17030512] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/12/2017] [Revised: 02/25/2017] [Accepted: 02/27/2017] [Indexed: 11/16/2022]
Abstract
A robust and efficient object tracking algorithm is required in a variety of computer vision applications. Although modern trackers show impressive performance, some challenges such as occlusion and target scale variation remain intractable, especially in complex scenarios. This paper proposes a robust scale-adaptive tracking algorithm that predicts the target scale with a sequential Monte Carlo method and simultaneously determines the target location with a correlation filter. By analyzing the response map of the target region, the completeness of the target can be measured by the peak-to-sidelobe ratio (PSR): the lower the PSR, the more likely the target is occluded. A strict template update strategy is designed to accommodate appearance change and avoid template corruption: if occlusion occurs, the template is retained rather than updated, preventing the tracker from drifting away. Additionally, feature integration is incorporated to guarantee the robustness of the proposed approach. The experimental results show that our method outperforms other state-of-the-art trackers in terms of both distance precision and overlap precision on the publicly available TB-50 dataset.
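The sequential Monte Carlo scale estimation can be sketched with a particle set over the scale parameter; the Gaussian response score, particle count, and noise levels below are assumptions for illustration, not the paper's choices.

```python
import numpy as np

rng = np.random.default_rng(3)

def smc_scale_step(scales, weights, score_fn, noise=0.02):
    """One SMC step over the target scale: propagate particles with
    random-walk noise, reweight by a response score, and resample."""
    scales = scales + rng.normal(0, noise, scales.shape)      # propagate
    weights = weights * np.array([score_fn(s) for s in scales])
    weights /= weights.sum()                                   # normalize
    idx = rng.choice(len(scales), size=len(scales), p=weights) # resample
    return scales[idx], np.full(len(scales), 1.0 / len(scales))

# Assumed score: the correlation response peaks at the true scale 1.3.
true_scale = 1.3
score = lambda s: np.exp(-((s - true_scale) ** 2) / (2 * 0.05 ** 2))

scales = np.ones(500)                  # all particles start at scale 1.0
weights = np.full(500, 1 / 500)
for _ in range(30):
    scales, weights = smc_scale_step(scales, weights, score)
estimate = scales.mean()
```

Selection pulls the particle cloud toward the scale with the strongest response, so the population mean converges to the true scale over a few frames.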
|
22
|
Effective Visual Tracking Using Multi-Block and Scale Space Based on Kernelized Correlation Filters. SENSORS 2017; 17:s17030433. [PMID: 28241475 PMCID: PMC5375719 DOI: 10.3390/s17030433] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/09/2016] [Revised: 02/13/2017] [Accepted: 02/17/2017] [Indexed: 11/25/2022]
Abstract
Accurate scale estimation and occlusion handling are challenging problems in visual tracking. Recently, correlation filter-based trackers have shown impressive results in terms of accuracy, robustness, and speed. However, such models are not robust to scale variation and occlusion. In this paper, we address the problems associated with scale variation and occlusion by employing a scale space filter and a multi-block scheme based on a kernelized correlation filter (KCF) tracker. Furthermore, we develop a more robust algorithm using an appearance update model that approximates the changing states of occlusion and deformation. In particular, an adaptive update scheme is presented to make each process robust. The experimental results demonstrate that the proposed method outperformed 29 state-of-the-art trackers on 100 challenging sequences. Specifically, the results obtained with the proposed scheme were improved by 8% and 18% compared to those of the KCF tracker for 49 occlusion and 64 scale variation sequences, respectively. Therefore, the proposed tracker can be a robust and useful tool for object tracking when occlusion and scale variation are involved.
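A minimal sketch of the multi-block idea: divide the patch into blocks, let each vote on the displacement, and suppress low-confidence (likely occluded) blocks. The confidence threshold and the fusion rule are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def split_blocks(patch, rows=2, cols=2):
    """Multi-block scheme: divide the target patch so a partly occluded
    target can still be located from its visible blocks."""
    h, w = patch.shape
    bh, bw = h // rows, w // cols
    return [patch[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            for r in range(rows) for c in range(cols)]

def fuse_estimates(offsets, confidences, tau=0.5):
    """Combine per-block displacement votes, discarding blocks whose
    confidence falls below an (assumed) threshold tau."""
    offsets = np.asarray(offsets, dtype=float)
    conf = np.asarray(confidences, dtype=float)
    keep = conf >= tau
    if not keep.any():
        return offsets.mean(axis=0)          # fall back to all blocks
    w = conf[keep] / conf[keep].sum()        # confidence-weighted average
    return (offsets[keep] * w[:, None]).sum(axis=0)

# Three blocks agree on a shift of (2, 1); one occluded block disagrees
# but is suppressed by its low confidence.
offsets = [(2, 1), (2, 1), (2, 1), (-5, 7)]
confs = [0.9, 0.8, 0.85, 0.1]
shift = fuse_estimates(offsets, confs)
```

The occluded block's outlier vote is excluded by the threshold, so the fused estimate follows the visible majority.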
|
23
|
Real-Time Robust Tracking for Motion Blur and Fast Motion via Correlation Filters. SENSORS 2016; 16:s16091443. [PMID: 27618046 PMCID: PMC5038721 DOI: 10.3390/s16091443] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/22/2016] [Revised: 08/15/2016] [Accepted: 08/17/2016] [Indexed: 11/30/2022]
Abstract
Visual tracking has extensive applications in intelligent monitoring and guidance systems. Among state-of-the-art tracking algorithms, Correlation Filter methods perform favorably in robustness, accuracy, and speed. However, they still have shortcomings when dealing with pervasive target scale variation, motion blur, and fast motion. In this paper we propose a new real-time robust scheme based on the Kernelized Correlation Filter (KCF) to significantly improve performance under motion blur and fast motion. By fusing the KCF and STC trackers, our algorithm also handles scale variation in many scenarios. We theoretically analyze the behavior of correlation filters under motion and use the point sharpness function of the target patch to evaluate the motion state of the target. We then set up an efficient scheme to handle motion and scale variation without much additional computation. Our algorithm preserves the properties of KCF while handling these special scenarios. Finally, extensive experimental results on the VOT benchmark datasets show that our algorithm performs competitively against the top-ranked trackers.
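As a stand-in for the point sharpness function, the variance of the Laplacian response drops when a patch is motion-blurred, which can gate a blur-handling scheme; the box blur below merely mimics horizontal motion blur, and both operators are illustrative choices.

```python
import numpy as np

def laplacian_variance(img):
    """Sharpness proxy: variance of the discrete Laplacian response.
    Blurred patches score lower than sharp ones."""
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
           np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    return lap.var()

def box_blur(img, k=5):
    """Crude horizontal box blur, mimicking motion blur along x."""
    out = np.zeros_like(img)
    for s in range(-(k // 2), k // 2 + 1):
        out += np.roll(img, s, axis=1)
    return out / k

rng = np.random.default_rng(4)
sharp_patch = rng.normal(size=(48, 48))
blurred_patch = box_blur(sharp_patch)
```

Comparing the score of the current patch against a running baseline is one simple way to flag frames where the blur-specific handling should kick in.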
|
24
|
Tuning to optimize SVM approach for assisting ovarian cancer diagnosis with photoacoustic imaging. Biomed Mater Eng 2016; 26 Suppl 1:S975-81. [PMID: 26406101 DOI: 10.3233/bme-151392] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
The support vector machine (SVM) is one of the most effective classification methods for cancer detection. The efficiency and quality of an SVM classifier depend strongly on several important features and a set of proper parameters. Here, a series of classification analyses, with one set of photoacoustic data from ovarian tissues ex vivo and a widely used breast cancer dataset, the Wisconsin Diagnostic Breast Cancer (WDBC), revealed how the accuracy of SVM classification varies with the number of features used and the parameters selected. A pattern recognition system is proposed by means of SVM-Recursive Feature Elimination (RFE) with the Radial Basis Function (RBF) kernel. To improve the effectiveness and robustness of the system, an optimized tuning ensemble algorithm called SVM-RFE(C), with a correlation filter, was implemented to quantify feature and parameter information based on cross-validation. The proposed algorithm is first shown to outperform SVM-RFE on WDBC. Then, the best accuracy of 94.643% and sensitivity of 94.595% were achieved when using SVM-RFE(C) to test 57 new PAT data from 19 patients. The experimental results show that the classifier constructed with the SVM-RFE(C) algorithm is able to learn additional information from new data and has significant potential in ovarian cancer diagnosis.
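A toy sketch of RFE with a correlation filter: highly correlated features are pruned first, then the survivors are eliminated recursively by the squared weights of a least-squares linear classifier (a linear stand-in for the paper's RBF-kernel ranking; the threshold and the synthetic data are assumptions).

```python
import numpy as np

def correlation_filter(X, threshold=0.95):
    """Drop features whose |Pearson correlation| with an already-kept
    feature exceeds the (assumed) threshold: the 'C' step."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(X.shape[1]):
        if all(corr[j, k] < threshold for k in keep):
            keep.append(j)
    return keep

def rfe_linear(X, y, n_keep):
    """Toy SVM-RFE: rank features by squared weights of a least-squares
    linear classifier and prune the weakest, one per round."""
    active = list(range(X.shape[1]))
    while len(active) > n_keep:
        w, *_ = np.linalg.lstsq(X[:, active], y, rcond=None)
        active.pop(int(np.argmin(w ** 2)))   # remove least useful feature
    return active

rng = np.random.default_rng(5)
n = 200
informative = rng.normal(size=(n, 2))               # features 0, 1 carry signal
duplicate = informative[:, :1] + 1e-3 * rng.normal(size=(n, 1))  # near-copy of 0
noise = rng.normal(size=(n, 3))                     # features 3, 4, 5 are noise
X = np.hstack([informative, duplicate, noise])      # 6 features total
y = np.sign(informative @ np.array([1.5, -2.0]))    # labels from features 0, 1
kept = correlation_filter(X)                        # removes the duplicate
selected = [kept[i] for i in rfe_linear(X[:, kept], y, n_keep=2)]
```

The correlation step removes the redundant near-copy before RFE runs, so the recursive elimination spends its rounds separating signal from noise rather than between duplicates.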
|