Zeng S, Zhang B, Gou J, Xu Y. Regularization on Augmented Data to Diversify Sparse Representation for Robust Image Classification. IEEE Transactions on Cybernetics 2022;52:4935-4948. [PMID: 33085628] [DOI: 10.1109/tcyb.2020.3025757]
Abstract
Image classification is a fundamental component of modern computer vision systems, and sparse representation-based classification has drawn considerable attention due to its robustness. However, in the optimization of sparse learning systems, regularization and data augmentation are both powerful but are currently used in isolation. We believe that regularization and data augmentation can cooperate to produce a breakthrough in robust image classification. In this article, we propose a novel framework, regularization on augmented data (READ), which diversifies the data using generic augmentation techniques to achieve robust sparse representation-based image classification. When the training data are augmented, READ applies a distinct regularizer, specifically l1 or l2, to the augmented training data, separate from the one applied to the original data, so that regularization and data augmentation are exploited and reinforced jointly. We provide a detailed theoretical analysis of how to optimize the sparse representation with both the l1-norm and the l2-norm under generic data augmentation and demonstrate its performance in extensive experiments. Results on several facial and object datasets show that READ outperforms many state-of-the-art methods when using deep features.
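The abstract describes READ only at a high level, so the following is a minimal sketch of one plausible reading: the original training samples receive an l2 penalty on their coding coefficients while the augmented samples receive an l1 penalty, and a test image is assigned to the class that best reconstructs it. The function names, the penalty assignment, the regularization weights, and the proximal gradient solver are all illustrative assumptions, not the authors' released implementation.

```python
# Illustrative sketch of a READ-style coding step (assumptions only, not the
# authors' code): l2 regularization on codes of the original dictionary X,
# l1 regularization on codes of the augmented dictionary X_aug.
import numpy as np


def soft_threshold(v, t):
    """Proximal operator of t * ||v||_1 (element-wise soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)


def read_style_coding(X, X_aug, y, lambda_l2=0.1, lambda_l1=0.1, n_iter=500):
    """Solve  min_{a,b}  ||y - X a - X_aug b||_2^2
                         + lambda_l2 ||a||_2^2 + lambda_l1 ||b||_1
    with ISTA-style proximal gradient descent."""
    D = np.hstack([X, X_aug])              # joint dictionary [original | augmented]
    k = X.shape[1]                         # split point: a = c[:k], b = c[k:]
    c = np.zeros(D.shape[1])
    # Step size bounded by the Lipschitz constant of the smooth part.
    step = 1.0 / (2.0 * np.linalg.norm(D, 2) ** 2 + 2.0 * lambda_l2)
    for _ in range(n_iter):
        grad = 2.0 * D.T @ (D @ c - y)     # gradient of the data-fit term
        grad[:k] += 2.0 * lambda_l2 * c[:k]  # l2 penalty acts on original-data codes
        c = c - step * grad
        c[k:] = soft_threshold(c[k:], step * lambda_l1)  # l1 prox on augmented codes
    return c[:k], c[k:]


def classify(X, X_aug, labels, labels_aug, y):
    """Assign y to the class whose training samples best reconstruct it
    (the standard sparse-representation decision rule)."""
    a, b = read_style_coding(X, X_aug, y)
    residuals = {}
    for cls in np.unique(labels):
        recon = (X[:, labels == cls] @ a[labels == cls]
                 + X_aug[:, labels_aug == cls] @ b[labels_aug == cls])
        residuals[cls] = np.linalg.norm(y - recon)
    return min(residuals, key=residuals.get)
```

In this reading, the augmented samples act as an auxiliary dictionary whose contribution is kept sparse, while the original samples provide a dense collaborative representation; swapping the two penalties is an equally valid interpretation of the abstract.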