
Advanced Integration of Multimedia Assistive Technologies: a prospective outlook / Liciotti, Daniele; Ferroni, Giacomo; Frontoni, Emanuele; Squartini, Stefano; Principi, Emanuele; Bonfigli, Roberto; Zingaretti, Primo; Piazza, Francesco. - (2014). (Paper presented at the IEEE MESA 2014 - AAL Workshop, held in Senigallia, Italy, September 10, 2014) [10.1109/MESA.2014.6935629].

Advanced Integration of Multimedia Assistive Technologies: a prospective outlook

Liciotti, Daniele; Ferroni, Giacomo; Frontoni, Emanuele; Squartini, Stefano; Principi, Emanuele; Bonfigli, Roberto; Zingaretti, Primo; Piazza, Francesco
2014-01-01

Abstract

In recent years, several studies on population ageing in the most advanced countries have shown that the share of people older than 65 years is steadily increasing. To tackle this phenomenon, significant effort has been devoted to developing advanced technologies that supervise domestic environments and their inhabitants in order to provide assistance in their own homes. In this context, the present paper aims to delineate a novel, highly integrated system for the advanced analysis of human behaviours. It is based on the fusion of the audio and vision frameworks developed at the Multimedia Assistive Technology Laboratory (MATeLab) of the Università Politecnica delle Marche, and is designed to operate in the ambient assisted living context by exploiting audio-visual domain features. The existing video framework exploits vertical RGB-D sensors for people tracking, interaction analysis and user activity detection in domestic scenarios. The depth information is used to remove the effect of appearance variation and to evaluate users' activities inside the home and in front of the fixtures. In addition, group interactions are monitored and analysed. On the other hand, the audio framework recognises voice commands by continuously monitoring the acoustic home environment. Moreover, hands-free communication with a relative or a healthcare centre is automatically triggered when a distress call is detected. Echo and interference cancellation algorithms guarantee high-quality communication and reliable speech recognition, respectively. The proposed system thus exploits multi-domain information gathered from both the audio and video frameworks, storing it in a remote cloud for instant processing and analysis of the scene; related actions are consequently performed.
2014
978-1-4799-2280-2

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11566/179906
