Artificial sound event detection (SED) aims to mimic the human ability to perceive and understand what is happening in the surrounding environment. Deep learning currently offers valuable techniques for this goal, such as convolutional neural networks (CNNs). The capsule neural network (CapsNet) architecture was recently introduced in the image processing field with the intent of overcoming some known limitations of CNNs, specifically their limited robustness to affine transformations (i.e., perspective, size, orientation) and their difficulty in detecting overlapping objects. This motivated the authors to employ CapsNets for the polyphonic SED task, in which multiple sound events occur simultaneously. Specifically, we propose to exploit capsule units to represent a set of distinctive properties of each individual sound event. Capsule units are connected through a so-called dynamic routing procedure that encourages the learning of part-whole relationships and improves detection performance in a polyphonic context. This paper reports extensive evaluations carried out on three publicly available datasets, showing that the CapsNet-based algorithm not only outperforms standard CNNs but also achieves the best results compared with state-of-the-art algorithms.
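The dynamic routing mentioned above refers to the routing-by-agreement procedure commonly used between capsule layers: coupling coefficients between lower- and higher-level capsules are iteratively refined according to how well each prediction vector agrees with the emerging output capsule. The sketch below is a minimal NumPy illustration of that generic procedure, not the paper's exact implementation; the shapes, iteration count, and function names are assumptions for demonstration.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    # Non-linearity used by capsule networks: short vectors shrink
    # toward zero, long vectors approach (but never reach) unit length,
    # so a capsule's length can be read as an existence probability.
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def dynamic_routing(u_hat, num_iters=3):
    # u_hat: (num_in, num_out, dim_out) prediction vectors, i.e. each
    # lower-level capsule's "vote" for each higher-level capsule.
    num_in, num_out, _ = u_hat.shape
    b = np.zeros((num_in, num_out))  # routing logits, start uninformative
    for _ in range(num_iters):
        # Coupling coefficients: softmax over the output capsules,
        # so each input capsule distributes its vote across outputs.
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)
        s = (c[..., None] * u_hat).sum(axis=0)   # weighted sum of votes
        v = squash(s)                            # candidate output capsules
        # Agreement update: votes aligned with the output capsule
        # get a larger routing logit on the next iteration.
        b = b + np.einsum('ijd,jd->ij', u_hat, v)
    return v  # (num_out, dim_out) higher-level capsule vectors
```

In a polyphonic SED setting, each output capsule would correspond to one sound-event class, so the iterative agreement lets co-occurring events claim different subsets of the lower-level features.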
Polyphonic Sound Event Detection by using Capsule Neural Networks / Vesperini, Fabio; Gabrielli, Leonardo; Principi, Emanuele; Squartini, Stefano. - In: IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING. - ISSN 1932-4553. - Electronic. - (2019), pp. 1-1. [DOI: 10.1109/JSTSP.2019.2902305]