
SeSAME: Re-identification-based ambient intelligence system for museum environment / Paolanti, M.; Pierdicca, R.; Pietrini, R.; Martini, M.; Frontoni, E. - In: Pattern Recognition Letters. - ISSN 0167-8655. - 161 (2022), pp. 17-23. [DOI: 10.1016/j.patrec.2022.07.011]

SeSAME: Re-identification-based ambient intelligence system for museum environment

Paolanti, M.; Pierdicca, R.; Pietrini, R.; Martini, M.; Frontoni, E.
2022-01-01

Abstract

Nowadays, understanding and analysing visitors' activities and behaviours is becoming imperative for personalising and improving the user experience in a museum environment. Visitors' behaviour can provide important statistics, insights and objective information about their interactions, such as attraction, attention and action. These data are of great value to museum curators and are among the parameters that need to be assessed. Such information is usually collected through manual approaches based on questionnaires or visual observations, a procedure that is time-consuming and can be affected by the subjective interpretation of the evaluator. From these premises, this paper presents SeSAME (Senseable Self Adapting Museum Environment), a novel system for collecting and analysing the behaviour of visitors inside a museum environment. SeSAME is based on a multi-modal deep neural network architecture able to extract anthropometric and appearance features from RGB-D videos acquired in crowded environments. Our approach has been tested with four different temporal modelling methods for aggregating a sequence of image-level features into clip-level features. As a benchmark, the paper uses TVPR2, a public dataset of videos acquired with an RGB-D camera in a top-view configuration, in the presence of persistent and temporary heavy occlusion. Moreover, a dataset specifically collected for this work has been acquired in a real museum environment, Palazzo Buonaccorsi, an important historical building in Macerata, in the Marche Region in central Italy. During the experimental phase, the evaluation metrics show the effectiveness and suitability of the proposed method. (c) 2022 Elsevier B.V. All rights reserved.
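The abstract mentions aggregating a sequence of image-level features into a single clip-level feature. As a minimal sketch (not the authors' implementation, which is not distributed with this record), the snippet below illustrates one such temporal-modelling strategy, mean pooling over frames, in PyTorch; the function name, tensor shapes and the choice of pooling are illustrative assumptions.

```python
# Minimal sketch of temporal aggregation: collapsing T per-frame
# embeddings into one clip-level embedding. Mean pooling is just one
# of several possible strategies (the paper compares four); names and
# dimensions here are hypothetical, not taken from the paper.
import torch


def clip_feature(frame_features: torch.Tensor) -> torch.Tensor:
    """Average a (T, D) sequence of frame embeddings into a (D,) clip embedding."""
    return frame_features.mean(dim=0)


# Example: 16 frames, 128-dimensional embeddings (illustrative values).
frames = torch.randn(16, 128)
clip = clip_feature(frames)
print(clip.shape)  # torch.Size([128])
```

Other aggregation choices (e.g. max pooling, recurrent networks, or attention over frames) differ only in how this pooling step is computed; the surrounding pipeline of per-frame feature extraction and clip-level matching stays the same.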

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11566/307044

Citations
  • PMC: not available
  • Scopus: 2
  • Web of Science (ISI): 2