Nowadays, the detection of human falls is a problem recognized by the entire scientific community. Methods that achieve good performance include human-fall samples in the training set, while methods that do not can only work well under certain conditions. Since examples of human falls are very difficult to collect, there is a strong need for systems that work well even with few or no fall data available for training. In this article, we present a first study on a few-shot learning Siamese Neural Network applied to human-fall detection using audio signals. The method is compared with algorithms based on SVM and OCSVM, all evaluated under the same conditions. The proposed approach learns the differences between signals belonging to different classes of events. In the classification phase, using only one human-fall signal as a template, it achieves about 80% F1-Measure on the human-fall class, while the SVM-based method reaches around 69% when trained under the same data-knowledge conditions.
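The one-shot classification scheme described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' trained network: the shared embedding is stood in for by a fixed random projection, and the feature dimension and decision threshold are assumed placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shared embedding weights: in the paper this role is played
# by a learned Siamese network; a fixed random projection stands in here.
W = rng.standard_normal((16, 64))

def embed(x):
    # Both branches of a Siamese network share the same weights, so the
    # same function embeds the template and the query alike.
    return np.tanh(W @ x)

def distance(a, b):
    # Euclidean distance between embeddings; a trained Siamese model
    # would have learned to place same-class pairs close together.
    return np.linalg.norm(embed(a) - embed(b))

# One-shot classification: a single "fall" signal serves as the template.
template_fall = rng.standard_normal(64)                       # stand-in audio features
query_same = template_fall + 0.01 * rng.standard_normal(64)   # near-duplicate fall event
query_other = rng.standard_normal(64)                         # unrelated sound

threshold = 1.0  # assumed decision threshold, for illustration only
is_fall = distance(template_fall, query_same) < threshold
```

A query is labeled a fall when its embedding lies within the threshold distance of the single template's embedding; with more templates per class, the nearest template would decide the label.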
Few-Shot Siamese Neural Networks Employing Audio Features for Human-Fall Detection / Droghini, Diego; Vesperini, Fabio; Principi, Emanuele; Squartini, Stefano; Piazza, Francesco. - (2018), pp. 63-69. (Paper presented at the conference PRAI 2018, held in Union, NJ, USA, August 15-17, 2018) [10.1145/3243250.3243268].