
Oriented Splits Network to Distill Background for Vehicle Re-Identification

Munir A.;Martinel N.;Micheloni C.
2021-01-01

Abstract

Vehicle re-identification (re-id) is a challenging task due to high intra-class and low inter-class variations in the visual data acquired from surveillance camera networks. Unique and discriminative feature representations are needed to overcome variations in color, illumination, orientation, background, and occlusion. The varying orientations of vehicles in images prevent learned models from capturing the multiple parts of a vehicle and the relationships between them. Combining global and partial features is one solution to improve the discriminative learning of deep models. Building on such solutions, we propose an Oriented Splits Network (OSN) for end-to-end learning of multiple part features alongside global features, forming a strong descriptor for vehicle re-identification. To capture the orientation variability of vehicles, the proposed network partitions the images into several oriented stripes to obtain a local descriptor for each part/region. This scheme is then exploited by a camera-based feature distillation (CBD) training strategy that removes background features: these are filtered out of the oriented vehicle representations, yielding a much stronger and more distinctive representation of the vehicles. Experiments on two benchmark vehicle re-id datasets show that the proposed solution outperforms the state of the art by a margin.
2021
978-1-6654-3396-9
Files for this record:
There are no files associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11390/1221294
Citations
  • Scopus: 0