
Person Orientation and Feature Distances Boost Re-Identification

Martinel, Niki; Foresti, Gian Luca; Micheloni, Christian
2014-01-01

Abstract

Most of the open challenges in person re-identification arise from the large variations of human appearance and from the different camera views that may be involved, making pure feature matching an unreliable solution. To tackle these challenges, state-of-the-art methods assume that a single inter-camera transformation of features holds between two cameras. However, the combination of viewpoints, scene illumination, photometric settings, etc., together with the appearance, pose, and orientation of a person, makes the inter-camera transformation of features multi-modal. To address these challenges we introduce three main contributions. First, we propose a method to extract multiple frames of the same person with different orientations. Second, we learn the pairwise feature dissimilarity space (PFDS), formed by the subspace of pairwise feature dissimilarities computed between images of persons with similar orientations and the subspace of pairwise feature dissimilarities computed between images of persons with dissimilar orientations. Finally, a classifier is trained for each subspace to capture the multi-modal inter-camera transformation of image pairs. To validate the proposed approach, we show the superior performance of our method over state-of-the-art approaches on two publicly available benchmark datasets. © 2014 IEEE.
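The PFDS idea summarized in the abstract — compute pairwise feature dissimilarities, then split them into an "orientation-similar" and an "orientation-dissimilar" subspace before training a classifier per subspace — can be sketched as follows. This is a minimal illustration only: the dissimilarity measure, the 45° orientation threshold, and all function names are assumptions for exposition, not the authors' actual pipeline.

```python
def dissimilarity(f_a, f_b):
    """Element-wise absolute difference between two feature vectors
    (an illustrative stand-in for the paper's dissimilarity measure)."""
    return [abs(x - y) for x, y in zip(f_a, f_b)]

def angular_gap(theta_a, theta_b):
    """Smallest angle in degrees between two person orientations."""
    d = abs(theta_a - theta_b) % 360
    return min(d, 360 - d)

def build_pfds(pairs, orient_thresh=45.0):
    """Split pairwise dissimilarity vectors into the two PFDS subspaces:
    one for similarly oriented image pairs, one for the rest.
    Each pair is ((features_a, orientation_a), (features_b, orientation_b),
    same_person_label). A separate classifier would then be trained on
    each returned subspace."""
    similar, dissimilar = [], []
    for (f_a, th_a), (f_b, th_b), same_person in pairs:
        d = dissimilarity(f_a, f_b)
        if angular_gap(th_a, th_b) <= orient_thresh:
            similar.append((d, same_person))
        else:
            dissimilar.append((d, same_person))
    return similar, dissimilar

# Toy example: two cross-camera image pairs with hypothetical features.
pairs = [
    (([0.9, 0.1], 10.0), ([0.8, 0.2], 30.0), True),    # similar orientation
    (([0.9, 0.1], 10.0), ([0.2, 0.7], 170.0), False),  # opposite orientation
]
similar, dissimilar = build_pfds(pairs)
print(len(similar), len(dissimilar))  # 1 1
```

One pair falls in each subspace here: a 20° gap is under the assumed 45° threshold, a 160° gap is not.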
Year: 2014
ISBN: 978-147995208-3
Files in this record:
File: paper.pdf (not available for download; a copy can be requested)
Description: Main article
Type: Post-print document
License: Not public
Size: 4.29 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11390/1034764
Citations
  • PMC: not available
  • Scopus: 19
  • Web of Science: 15