
Acoustic cues from the floor: A new approach for fall classification / Principi, Emanuele; Droghini, Diego; Squartini, Stefano; Olivetti, Paolo; Piazza, Francesco. - In: EXPERT SYSTEMS WITH APPLICATIONS. - ISSN 0957-4174. - STAMPA. - 60:(2016), pp. 51-61. [10.1016/j.eswa.2016.04.007]

Acoustic cues from the floor: A new approach for fall classification

Principi, Emanuele; Droghini, Diego; Squartini, Stefano; Piazza, Francesco
2016-01-01

Abstract

The interest in assistive technologies for supporting people at home is constantly increasing, both in academia and in industry. In this context, the authors propose a fall classification system based on an innovative acoustic sensor that operates similarly to a stethoscope and captures the acoustic waves transmitted through the floor. The sensor is designed to minimize the impact of aerial sounds in the recordings, thus allowing a more focused acoustic description of fall events. The audio signals acquired by the sensor are processed by a fall recognition algorithm based on Mel-Frequency Cepstral Coefficients, Supervectors and Support Vector Machines to discriminate among different types of fall events. The performance of the algorithm has been evaluated on a dedicated audio corpus comprising falls of a human-mimicking doll and of everyday objects. The results showed that the floor sensor significantly improves performance with respect to an aerial microphone: in particular, the F1-Measure is 6.50% higher in clean conditions and 8.76% higher in mismatched noisy conditions. The proposed approach thus has a considerable advantage over aerial solutions, since it achieves higher fall classification performance with a simpler algorithmic pipeline and hardware setup.
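The pipeline outlined in the abstract (frame-level MFCCs summarized into a fixed-length supervector per recording, then classified with an SVM) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the choice of librosa and scikit-learn, the crude UBM-based mean adaptation used to build the supervector, and all parameter values (16 kHz sampling, 13 MFCCs, 8 mixture components, linear kernel) are assumptions made for the sketch.

# Minimal sketch of an MFCC / supervector / SVM fall-classification pipeline.
# NOT the authors' code: library choices and all parameters are illustrative.
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC


def mfcc_features(path, sr=16000, n_mfcc=13):
    """Load one audio file and return its frame-level MFCC matrix (frames x coeffs)."""
    y, sr = librosa.load(path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T


def supervector(mfcc, ubm):
    """Re-estimate the UBM means on one recording and stack them into a fixed-length vector.

    This is a rough stand-in for MAP mean adaptation: the recording-specific GMM is
    initialized from the UBM and refined for a few EM iterations.
    """
    gmm = GaussianMixture(
        n_components=ubm.n_components,
        covariance_type="diag",
        means_init=ubm.means_,
        weights_init=ubm.weights_,
        precisions_init=ubm.precisions_,
        max_iter=5,
    )
    gmm.fit(mfcc)
    return gmm.means_.ravel()


def train_fall_classifier(train_paths, train_labels, n_components=8):
    """Train a UBM on all training frames, then an SVM on per-recording supervectors."""
    features = [mfcc_features(p) for p in train_paths]
    ubm = GaussianMixture(n_components=n_components, covariance_type="diag")
    ubm.fit(np.vstack(features))
    X = np.array([supervector(f, ubm) for f in features])
    svm = SVC(kernel="linear").fit(X, train_labels)
    return ubm, svm


def classify(path, ubm, svm):
    """Predict the fall class (e.g. human fall vs. object fall) for one recording."""
    return svm.predict([supervector(mfcc_features(path), ubm)])[0]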
Files in this item:
No files are associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11566/236004
Warning! The displayed data have not been validated by the university.

Citations
  • PMC: not available
  • Scopus: 36
  • Web of Science (ISI): 33