Multi-granularity graph pooling for video-based person re-identification. Neural Netw 2023;160:22-33. [PMID: 36592527] [DOI: 10.1016/j.neunet.2022.12.015]
Abstract
Video-based person re-identification (ReID) aims to identify a given pedestrian video sequence across multiple non-overlapping cameras. To aggregate the temporal and spatial features of video samples, graph neural networks (GNNs) have been introduced. However, existing graph-based models, such as STGCN, perform mean/max pooling on node features to obtain the graph representation, which neglects graph topology and node importance. In this paper, we propose the graph pooling network (GPNet) to learn multi-granularity graph representations for video retrieval, in which a graph pooling layer is used to downsample the graph. We construct a multi-granular graph from node features learned by the backbone, and then apply multiple graph convolutional layers to perform spatial and temporal aggregation over the nodes. To downsample the graph, we propose a multi-head full attention graph pooling (MHFAPool) layer, which integrates the advantages of existing node-clustering and node-selection pooling methods. Specifically, MHFAPool first learns a full attention matrix for each pooled node, then obtains the principal eigenvector of the attention matrix via the power iteration algorithm, and finally takes the softmax of the principal eigenvector as the aggregation coefficients. Extensive experiments demonstrate that GPNet achieves competitive results on four widely used datasets: MARS, DukeMTMC-VideoReID, iLIDS-VID and PRID-2011.
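The aggregation step of MHFAPool described above can be illustrated with a minimal NumPy sketch: given a learned full attention matrix for one pooled node, power iteration yields the principal eigenvector, and its softmax gives the coefficients used to combine the input node features. The function names, the single-head setup, and the random toy inputs are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def power_iteration(A, num_iters=50, eps=1e-8):
    """Approximate the principal eigenvector of a square matrix A."""
    n = A.shape[0]
    v = np.ones(n) / np.sqrt(n)            # initial guess
    for _ in range(num_iters):
        v = A @ v
        v = v / (np.linalg.norm(v) + eps)  # re-normalise at each step
    return v

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def mhfa_pool_node(attention, node_feats):
    """Aggregate input node features into a single pooled node.

    attention : (n, n) full attention matrix learned for this pooled node
                (hypothetical input; the paper produces one per head).
    node_feats: (n, d) input node features.
    """
    v = power_iteration(attention)   # principal eigenvector
    coeffs = softmax(v)              # aggregation coefficients
    return coeffs @ node_feats       # (d,) pooled node representation

# Toy usage: pool 6 input nodes with 4-dim features into one node.
rng = np.random.default_rng(0)
A = rng.random((6, 6))
X = rng.random((6, 4))
pooled = mhfa_pool_node(A, X)
print(pooled.shape)  # (4,)
```

In a multi-head variant, this step would be repeated per attention head and the resulting pooled features concatenated or averaged; that detail is not spelled out in the abstract.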