
Open-world person re-identification with RGBD camera in top-view configuration for retail applications / Martini, M.; Paolanti, M.; Frontoni, E.. - In: IEEE ACCESS. - ISSN 2169-3536. - 8:(2020), pp. 67756-67765. [10.1109/ACCESS.2020.2985985]

Open-world person re-identification with RGBD camera in top-view configuration for retail applications

Martini M. (Member of the Collaboration Group); Paolanti M.; Frontoni E.
2020-01-01

Abstract

Person re-identification (re-ID) is currently a notable topic in the computer vision and pattern recognition communities. However, most existing works on re-ID have been designed for closed-world scenarios rather than the more realistic open-world scenarios, limiting the practical application of these re-ID techniques. In a common real-world application, a watch-list of known people is given as the gallery/target set for searching through a large volume of videos in which the people on the watch-list are likely to reappear. This aspect is fundamental in retail for understanding how customers schedule their shopping. Identifying regular and occasional customers makes it possible to define temporal purchasing profiles, which correlate customers' temporal habits with other information such as the amount of expenditure and the number of purchased items. This paper presents the first attempt to solve a more realistic re-ID setting designed to address these issues, called Top-View Open-World (TVOW) person re-ID. The approach is based on a pretrained Deep Convolutional Neural Network (DCNN), fine-tuned on a dataset acquired in a top-view configuration. A triplet loss was used to train the network: it optimizes the embedding space so that data points with the same identity lie closer to each other than to those with different identities. TVOW is evaluated on the publicly available TVPR2 dataset for people re-ID. The experimental results show that the proposed method significantly outperforms competitive state-of-the-art methods, yielding significant insights for implicit and extensive shopper behaviour analysis in marketing applications.
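The triplet-loss objective described in the abstract can be sketched as follows. This is a minimal illustration of the standard hinge-style triplet loss, not the authors' implementation; the margin value and the toy 2-D embeddings are assumptions.

```python
# Sketch of the triplet loss: L = max(0, d(a, p) - d(a, n) + margin),
# where a is the anchor embedding, p a positive (same identity),
# and n a negative (different identity). Minimizing L pulls same-identity
# embeddings closer together than different-identity ones by at least `margin`.

import math


def euclidean(u, v):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))


def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge-style triplet loss on a single (anchor, positive, negative) triplet."""
    return max(0.0, euclidean(anchor, positive) - euclidean(anchor, negative) + margin)


# Toy 2-D embeddings: the positive is already closer to the anchor than the
# negative by more than the margin, so this triplet contributes zero loss.
a, p, n = [0.0, 0.0], [0.1, 0.0], [1.0, 0.0]
loss = triplet_loss(a, p, n)  # → 0.0
```

In practice the loss is computed over mined triplets of DCNN embeddings and backpropagated; the sketch above only shows the per-triplet objective.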
2020
Files for this item:
There are no files associated with this item.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11566/281504
Warning! The displayed data have not been validated by the University.

Citations
  • PMC: ND
  • Scopus: 15
  • Web of Science (ISI): 10