Fall Detection with Kinect in Top View: Preliminary Features Analysis and Characterization / Spinsante, Susanna; Ricciuti, Manola; Cippitelli, Enea; Gambi, Ennio. - ELECTRONIC. - 233:(2018), pp. 153-162. [10.1007/978-3-319-76111-4_16]

Fall Detection with Kinect in Top View: Preliminary Features Analysis and Characterization

Spinsante, Susanna (Writing – Original Draft Preparation);
Ricciuti, Manola (Software);
Cippitelli, Enea (Methodology);
Gambi, Ennio (Member of the Collaboration Group)
2018-01-01

Abstract

Fall detection is a well-investigated research area, for which different solutions based on wearable or ambient sensors have been designed. Depth sensors such as the Kinect, placed in front view with respect to the monitored subject, can provide the human skeleton through the automatic identification of body joints, and are typically adopted for their unobtrusiveness and inherent privacy-preserving capability. This paper analyzes depth signals captured by a Kinect used in top view, to extract useful features for the automatic identification of falls despite the unavailability of joint and skeleton data. The study, based on signals captured from a number of test users performing different types of falls and activities, shows that the falling speed, computed over the blob identifying the person in the depth images, should be used as a feature to spot fall events in conjunction with other metrics, for better reliability.
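The abstract describes a falling-speed feature computed over the person blob in top-view depth frames. The following is a minimal sketch of how such a feature could be computed, assuming the blob has already been segmented and the depth frames are available as NumPy arrays; the frame rate, sensor mounting height, and speed threshold are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

FRAME_RATE_HZ = 30.0  # assumed Kinect depth stream frame rate


def blob_height_mm(depth_frame_mm, blob_mask, sensor_height_mm=3000):
    """Height of the person blob above the floor, for a top-view sensor.

    depth_frame_mm   : 2D array of depth values in millimetres
    blob_mask        : boolean mask of the pixels belonging to the person blob
    sensor_height_mm : assumed ceiling-mounted sensor height (illustrative)
    """
    # In top view, the smallest depth inside the blob is the body point
    # closest to the sensor (typically the head).
    closest_depth = depth_frame_mm[blob_mask].min()
    return sensor_height_mm - closest_depth


def falling_speed_mm_s(heights_mm, frame_rate_hz=FRAME_RATE_HZ):
    """Frame-to-frame vertical speed of the blob (negative = moving down)."""
    heights = np.asarray(heights_mm, dtype=float)
    return np.diff(heights) * frame_rate_hz


def is_fall_candidate(heights_mm, speed_threshold_mm_s=-1500):
    """Flag a possible fall when downward speed exceeds the assumed threshold.

    Speed alone is only a candidate cue: the paper concludes it should be
    combined with other metrics before declaring a fall.
    """
    speeds = falling_speed_mm_s(heights_mm)
    return bool(np.any(speeds < speed_threshold_mm_s))
```

Consistent with the paper's conclusion, a practical detector would combine this candidate flag with additional cues (for example, the blob area or the blob height remaining low for some time after the rapid drop) rather than relying on the speed feature by itself.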
2018
Smart Objects and Technologies for Social Good. GOODTECHS 2017
ISBN: 978-3-319-76110-7
ISBN: 978-3-319-76111-4

Use this identifier to cite or link to this document: https://hdl.handle.net/11566/255080

Citations
  • Scopus: 1