
Visible-Thermal Pedestrian Detection via Unsupervised Transfer Learning

Munir A.; Micheloni C.
2021-01-01

Abstract

Pedestrian detection using visible-thermal image pairs has recently come to play a key role in around-the-clock applications such as public surveillance and autonomous driving. However, the performance of a well-trained pedestrian detector may drop significantly when it is applied to a new scenario. Normally, achieving good performance on the new scenario requires manual annotation of the dataset, which is costly and does not scale. In this work, an unsupervised transfer learning framework is proposed for visible-thermal pedestrian detection tasks. Given detectors well trained on a source dataset, the proposed framework uses an iterative process to generate and fuse training labels automatically, with the help of two auxiliary single-modality detectors (visible and thermal). To fuse the labels, knowledge of daytime and nighttime is used to assign priorities to labels according to their illumination, which improves the quality of the generated training labels. After each iteration, the existing detectors are updated with the new training labels. Experimental results demonstrate that the proposed method achieves state-of-the-art performance without any manual training labels on the target dataset.
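The illumination-prioritized label fusion described in the abstract can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: the box format, the mean-brightness day/night cue, the `day_threshold` value, and the IoU overlap rule are all assumptions introduced here for illustration.

```python
# Hypothetical sketch of illumination-prioritized pseudo-label fusion.
# Detections are (x, y, w, h, score) tuples; "illumination" stands in for a
# day/night cue (e.g. mean brightness of the visible frame, in [0, 1]).

def iou(a, b):
    """Intersection-over-union of two (x, y, w, h, score) boxes."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0.0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def fuse_labels(vis_dets, thr_dets, illumination, day_threshold=0.5):
    """Merge single-modality detections, giving priority to the modality
    assumed more reliable under the current illumination: visible boxes
    by day, thermal boxes by night."""
    primary, secondary = (
        (vis_dets, thr_dets) if illumination >= day_threshold
        else (thr_dets, vis_dets)
    )
    fused = list(primary)
    for det in secondary:
        # Keep a lower-priority box only if it does not duplicate
        # (overlap heavily with) an already-accepted box.
        if not any(iou(det, p) > 0.5 for p in fused):
            fused.append(det)
    return fused
```

In the iterative loop the abstract describes, `fuse_labels` would be run over each unlabeled target frame to produce pseudo-labels, after which both single-modality detectors and the fused detector are retrained on those labels and the process repeats.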
Year: 2021
ISBN: 9781450388634
Files in this product:
There are no files associated with this product.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11390/1212374
Citations
  • PMC: ND
  • Scopus: 4
  • Web of Science: ND