1
Miri Rekavandi A, Seghouane AK, Evans RJ. Learning Robust and Sparse Principal Components With the α-Divergence. IEEE Transactions on Image Processing 2024; 33:3441-3455. [PMID: 38801687 DOI: 10.1109/tip.2024.3403493]
Abstract
In this paper, novel robust principal component analysis (RPCA) methods are proposed to exploit the local structure of datasets. The proposed methods are derived by minimizing the α-divergence between the sample distribution and the Gaussian density model. The α-divergence is used in different frameworks to represent variants of RPCA approaches, including orthogonal, non-orthogonal, and sparse methods. We show that classical PCA is a special case of the proposed methods in which the α-divergence reduces to the Kullback-Leibler (KL) divergence. Simulations show that the proposed approaches recover the underlying principal components (PCs) by down-weighting the importance of structured and unstructured outliers. Furthermore, using simulated data, it is shown that the proposed methods can be applied to fMRI signal recovery and foreground-background (FB) separation in video analysis. Results on real-world problems of FB separation as well as image reconstruction are also provided.
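The down-weighting idea in this abstract can be illustrated with a toy sketch, which is not the authors' α-divergence algorithm: plain PCA via the SVD, wrapped in a loop that exponentially down-weights samples with large reconstruction error. The function name, the weighting rule, and the `alpha` parameter are illustrative assumptions.

```python
import numpy as np

def reweighted_pca(X, n_components=1, alpha=0.5, n_iter=20):
    """Toy robust PCA: samples with large reconstruction error are
    exponentially down-weighted before the principal directions are
    re-estimated (hypothetical weighting rule, for illustration only)."""
    X = X - X.mean(axis=0)
    n, d = X.shape
    w = np.ones(n)
    for _ in range(n_iter):
        # principal directions of the row-weighted data
        _, _, Vt = np.linalg.svd(X * w[:, None], full_matrices=False)
        V = Vt[:n_components].T                      # (d, n_components)
        # per-sample reconstruction error in the current subspace
        resid = np.sum((X - X @ V @ V.T) ** 2, axis=1)
        # outliers (large residual relative to the mean) get tiny weights
        w = np.exp(-alpha * resid / (2 * resid.mean()))
    return V
```

On data lying along one axis with a handful of gross outliers, the loop suppresses the outliers' weights and the leading component aligns with the clean direction.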
2
Zhang F, Wang J, Wang W, Xu C. Low-Tubal-Rank Plus Sparse Tensor Recovery With Prior Subspace Information. IEEE Transactions on Pattern Analysis and Machine Intelligence 2021; 43:3492-3507. [PMID: 32305896 DOI: 10.1109/tpami.2020.2986773]
Abstract
Tensor principal component pursuit (TPCP) is a powerful approach to tensor robust principal component analysis (TRPCA), where the goal is to decompose a data tensor into a low-tubal-rank part plus a sparse residual. TPCP is shown to be effective under certain tensor incoherence conditions, which can be restrictive in practice. In this paper, we propose a Modified-TPCP, which incorporates prior subspace information into the analysis. With the aid of this prior information, the proposed method is able to recover the low-tubal-rank and sparse components under a significantly weaker incoherence assumption. We further design an efficient algorithm to implement Modified-TPCP based on the alternating direction method of multipliers (ADMM). The promising performance of the proposed method is supported by simulations and real data applications.
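For intuition, the matrix analogue of TPCP — principal component pursuit solved with ADMM, as in the classical RPCA literature — can be sketched as follows. The tubal-rank tensor machinery and the paper's prior-subspace modification are omitted, and the default `lam`/`mu` choices are the standard ones from the matrix RPCA literature, not this paper's.

```python
import numpy as np

def shrink(M, tau):
    """Elementwise soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def svt(M, tau):
    """Singular value thresholding (proximal operator of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def pcp_admm(X, lam=None, mu=None, n_iter=200):
    """Matrix principal component pursuit via ADMM:
        min ||L||_* + lam * ||S||_1   s.t.   L + S = X."""
    m, n = X.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * m * n / np.abs(X).sum()
    L = np.zeros_like(X); S = np.zeros_like(X); Y = np.zeros_like(X)
    for _ in range(n_iter):
        L = svt(X - S + Y / mu, 1.0 / mu)       # low-rank update
        S = shrink(X - L + Y / mu, lam / mu)    # sparse update
        Y = Y + mu * (X - L - S)                # dual ascent on L + S = X
    return L, S
```

On a rank-2 matrix corrupted by a few large sparse spikes, the iterates separate the two components to small relative error.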
3
Na IS, Tran C, Nguyen D, Dinh S. Facial UV map completion for pose-invariant face recognition: a novel adversarial approach based on coupled attention residual UNets. Human-centric Computing and Information Sciences 2020. [DOI: 10.1186/s13673-020-00250-w]
Abstract
Pose-invariant face recognition refers to the problem of identifying or verifying a person by analyzing face images captured from different poses. The problem is challenging due to the large variation of pose, illumination, and facial expression. A promising approach to dealing with pose variation is to complete the incomplete UV maps extracted from in-the-wild faces, attach the completed UV map to a fitted 3D mesh, and finally generate 2D faces of arbitrary poses. The synthesized faces increase the pose variation for training deep face recognition models and reduce the pose discrepancy during the testing phase. In this paper, we propose a novel generative model called Attention ResCUNet-GAN to improve UV map completion. We enhance the original UV-GAN by using two coupled U-Nets. In particular, the skip connections within each U-Net are boosted by attention gates, while the features from the two U-Nets are fused with trainable scalar weights. Experiments on popular benchmarks, including the Multi-PIE, LFW, CPLFW, and CFP datasets, show that the proposed method yields superior performance compared to existing methods.
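The two architectural ingredients named in the abstract — attention-gated skip connections and weighted fusion of features from two coupled U-Nets — can be sketched in minimal NumPy form. The shapes, weight matrices, and function names below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(x, g, Wx, Wg, psi):
    """Additive attention gate on a skip connection (Attention U-Net
    style sketch): the decoder's gating signal g decides which spatial
    positions of the encoder feature x pass through the skip.
    x, g: (H*W, C) feature maps; Wx, Wg: (C, C_int); psi: (C_int, 1)."""
    a = sigmoid(np.maximum(x @ Wx + g @ Wg, 0.0) @ psi)  # (H*W, 1), in (0, 1)
    return x * a  # attenuated skip features

def fuse_skips(s1, s2, w1, w2):
    """Fuse features from two coupled U-Nets with scalar weights
    (trainable in the paper; fixed scalars here for illustration)."""
    return (w1 * s1 + w2 * s2) / (w1 + w2)
```

Because the attention map lies in (0, 1), the gate can only attenuate the encoder features, never amplify them; fusion is a normalized weighted average of the two skip streams.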
4
Hu Z, Nie F, Wang R, Li X. Low Rank Regularization: A review. Neural Netw 2020; 136:218-232. [PMID: 33246711 DOI: 10.1016/j.neunet.2020.09.021]
Abstract
Low Rank Regularization (LRR), in essence, involves introducing a low-rank or approximately low-rank assumption on the target we aim to learn, and it has achieved great success in many data analysis tasks. Over the last decade, much progress has been made in both theory and applications; nevertheless, the two lines rarely intersect. In order to build a bridge between practical applications and theoretical studies, in this paper we provide a comprehensive survey of LRR. Specifically, we first review recent advances in two issues that all LRR models face: (1) rank-norm relaxation, which seeks a relaxation to replace the rank minimization problem; and (2) model optimization, which seeks an efficient optimization algorithm to solve the relaxed LRR models. For the first issue, we provide a detailed summary of the various relaxation functions and conclude that non-convex relaxations can alleviate the penalization bias problem of convex relaxations. For the second issue, we summarize representative optimization algorithms used in previous studies and analyze their advantages and disadvantages. As the main goal of this paper is to promote the application of non-convex relaxations, we conduct extensive experiments to compare different relaxation functions. The experimental results demonstrate that non-convex relaxations generally provide a large advantage over convex relaxations. This result is inspiring for further improving the performance of existing LRR models.
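The penalization-bias point can be made concrete with a small sketch comparing the convex nuclear-norm proximal step, which shrinks every singular value and so biases the large, informative ones, against a simple non-convex hard-thresholding rule, which leaves large singular values intact. The function and parameter names are illustrative, and hard thresholding stands in here for the broader family of non-convex relaxations surveyed in the paper.

```python
import numpy as np

def sv_shrink(X, tau, nonconvex=False):
    """Spectral denoising under two relaxations of the rank function:
    - convex (nuclear norm): every singular value is reduced by tau,
      which biases the estimate of the large singular values;
    - non-convex (hard-thresholding flavour): values above tau are
      kept unchanged, removing that bias."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    if nonconvex:
        s2 = np.where(s > tau, s, 0.0)          # keep large values as-is
    else:
        s2 = np.maximum(s - tau, 0.0)           # shrink everything by tau
    return U @ np.diag(s2) @ Vt
```

On a rank-2 matrix with strong singular values plus small noise, both rules suppress the noise spectrum, but the convex rule also subtracts `tau` from the two true singular values, so the non-convex estimate lands closer to the ground truth.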
Affiliation(s)
- Zhanxuan Hu
- School of Computer Science, Northwestern Polytechnical University, Xi'an, 710072, Shaanxi, PR China; Center for OPTical IMagery Analysis and Learning (OPTIMAL), Northwestern Polytechnical University, Xi'an, 710072, Shaanxi, PR China
- Feiping Nie
- School of Computer Science, Northwestern Polytechnical University, Xi'an, 710072, Shaanxi, PR China; Center for OPTical IMagery Analysis and Learning (OPTIMAL), Northwestern Polytechnical University, Xi'an, 710072, Shaanxi, PR China
- Rong Wang
- School of Cybersecurity, Northwestern Polytechnical University, Xi'an, 710072, Shaanxi, PR China; Center for OPTical IMagery Analysis and Learning (OPTIMAL), Northwestern Polytechnical University, Xi'an, 710072, Shaanxi, PR China.
- Xuelong Li
- School of Computer Science, Northwestern Polytechnical University, Xi'an, 710072, Shaanxi, PR China; Center for OPTical IMagery Analysis and Learning (OPTIMAL), Northwestern Polytechnical University, Xi'an, 710072, Shaanxi, PR China