
Modeling feature distances by orientation driven classifiers for person re-identification

Martinel, Niki; Foresti, Gian Luca; Micheloni, Christian
2016-01-01

Abstract

To tackle the re-identification challenges, existing methods propose to directly match image features or to learn the transformation that features undergo between two cameras. Other methods learn optimal similarity measures. However, the performance of all these methods is strongly dependent on the person's pose and orientation. We focus on this aspect and introduce three main contributions to the field: (i) we propose a method to extract multiple frames of the same person with different orientations in order to capture the complete person appearance; (ii) we learn the pairwise feature dissimilarities space (PFDS) formed by the subspaces of similar and different image pair orientations; and (iii) within each subspace, a classifier is trained to capture the multi-modal inter-camera transformation of pairwise image dissimilarities and to discriminate between positive and negative pairs. Experiments on two publicly available benchmark datasets show the superior performance of the proposed approach with respect to state-of-the-art methods. © 2016 Elsevier Inc. All rights reserved.
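The abstract outlines the general idea of mapping image pairs to pairwise feature dissimilarity vectors, splitting them into subspaces according to whether the two orientations agree, and training one classifier per subspace to separate positive (same person) from negative (different person) pairs. The following is a minimal sketch of that pipeline, not the authors' implementation: per-image feature vectors and coarse orientation labels are assumed to be already available, and the absolute-difference dissimilarity, the helper functions, and the SVM classifier are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's method) of pairwise
# dissimilarity vectors split into orientation-driven subspaces, each
# with its own binary classifier.
import numpy as np
from sklearn.svm import SVC

def pairwise_dissimilarity(feat_a, feat_b):
    """Element-wise absolute difference between two image feature vectors."""
    return np.abs(feat_a - feat_b)

def split_by_orientation(orient_a, orient_b):
    """Indices of pairs with matching vs. differing orientation labels."""
    same = [i for i, (oa, ob) in enumerate(zip(orient_a, orient_b)) if oa == ob]
    diff = [i for i in range(len(orient_a)) if i not in same]
    return same, diff

# Toy data: 200 image pairs with 64-dimensional features and 4 coarse orientations.
rng = np.random.default_rng(0)
feats_a = rng.normal(size=(200, 64))
feats_b = rng.normal(size=(200, 64))
orient_a = rng.integers(0, 4, size=200)
orient_b = rng.integers(0, 4, size=200)
labels = rng.integers(0, 2, size=200)  # 1 = same person, 0 = different person

diss = np.array([pairwise_dissimilarity(a, b) for a, b in zip(feats_a, feats_b)])
same_idx, diff_idx = split_by_orientation(orient_a, orient_b)

# One classifier per orientation subspace, trained on the dissimilarity vectors.
clf_same = SVC(kernel="rbf").fit(diss[same_idx], labels[same_idx])
clf_diff = SVC(kernel="rbf").fit(diss[diff_idx], labels[diff_idx])

# At test time, a probe/gallery pair is routed to the classifier of its subspace.
test_diss = pairwise_dissimilarity(feats_a[0], feats_b[0])
clf = clf_same if orient_a[0] == orient_b[0] else clf_diff
print(clf.predict(test_diss.reshape(1, -1)))
```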
Files in this record:

JVC Modeling Feature Distances By Orientation Driven Classifier forperson Re-id.pdf
  • Access: open access
  • Type: Pre-print
  • License: Creative Commons
  • Size: 6.13 MB
  • Format: Adobe PDF

1-s2.0-S1047320316000353-main.pdf
  • Access: not available
  • Type: Publisher's version (PDF)
  • License: Not public
  • Size: 3.04 MB
  • Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11390/1086765
Citations
  • PMC: ND
  • Scopus: 28
  • Web of Science (ISI): 24