Automatic Annotation of Corpora For Emotion Recognition Through Facial Expressions Analysis / Diamantini, Claudia; Mircoli, Alex; Potena, Domenico; Storti, Emanuele. - (2021), pp. 5650-5657. (Paper presented at the 25th International Conference on Pattern Recognition (ICPR), held in Milan, Jan 10-15, 2021).

Automatic Annotation of Corpora For Emotion Recognition Through Facial Expressions Analysis

Claudia Diamantini; Alex Mircoli; Domenico Potena; Emanuele Storti
2021-01-01

Abstract

The massive adoption of social networks has made available an unprecedented amount of user-generated content, which may be analyzed in order to determine people's opinions and emotions on a large variety of topics. Research has devoted considerable effort to defining accurate algorithms for the analysis of emotions conveyed by texts; however, their performance often relies on the existence of large annotated datasets, whose current scarcity represents a major issue. The manual creation of such datasets is a costly and time-consuming activity, and hence there is an increasing demand for techniques for the automatic annotation of corpora. In this work we present a methodology for the automatic annotation of video subtitles on the basis of the analysis of the facial expressions of people in videos, with the goal of creating annotated corpora that may be used to train emotion recognition algorithms. Facial expressions are analyzed through machine learning algorithms, on the basis of a set of manually engineered facial features extracted from video frames. The soundness of the proposed methodology has been evaluated through extensive experimentation aimed at determining the performance of each methodological step on real datasets.
Year: 2021
ISBN: 978-1-7281-8808-9
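
To make the pipeline described in the abstract more concrete (frame-level facial features fed to a machine learning classifier, whose predicted emotion is then propagated to the subtitle displayed during those frames), the following is a minimal illustrative sketch in Python. The feature extractor, the RandomForestClassifier, and the majority-vote aggregation over frames are assumptions introduced purely for illustration; the paper's actual manually-engineered features and algorithms are not specified on this page.

```python
# Hypothetical sketch: annotate each subtitle with the majority emotion
# predicted over the video frames that fall inside its time span.
# extract_facial_features() is a stand-in for any facial feature extractor;
# it is NOT the feature set used by the authors.

from dataclasses import dataclass
from collections import Counter
from typing import Callable, List, Sequence, Tuple

import numpy as np
from sklearn.ensemble import RandomForestClassifier


@dataclass
class Subtitle:
    start: float  # start time in seconds
    end: float    # end time in seconds
    text: str


def annotate_subtitles(
    subtitles: Sequence[Subtitle],
    frames: Sequence[np.ndarray],                                 # decoded video frames
    frame_times: Sequence[float],                                 # timestamp of each frame (seconds)
    extract_facial_features: Callable[[np.ndarray], np.ndarray],  # hypothetical extractor
    clf: RandomForestClassifier,                                  # classifier trained on labeled facial features
) -> List[Tuple[str, str]]:
    """Return (subtitle text, emotion label) pairs via frame-level majority vote."""
    annotated = []
    for sub in subtitles:
        # Feature vectors for the frames shown while this subtitle is on screen
        feats = [
            extract_facial_features(frame)
            for frame, t in zip(frames, frame_times)
            if sub.start <= t <= sub.end
        ]
        if not feats:
            continue  # no usable frames for this subtitle
        labels = clf.predict(np.vstack(feats))
        # Majority vote over frame-level predictions gives the subtitle label
        emotion = Counter(labels).most_common(1)[0][0]
        annotated.append((sub.text, emotion))
    return annotated
```

In this sketch the subtitle timing is the only link between text and video: any frame-level emotion classifier could be plugged in, and the aggregation step (here a simple majority vote) decides which label the subtitle receives.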


Use this identifier to cite or link to this document: https://hdl.handle.net/11566/286673

Citations
  • Scopus: 4
  • Web of Science (ISI): 1