Overcoming the Limits of a Neural Network for Character-Scene Interactions / Mameli, M.; De Carolis, D.; Frontoni, E.; Zingaretti, P. - Electronic. - 12980 (2021), pp. 118-134. [10.1007/978-3-030-87595-4_10]
Overcoming the Limits of a Neural Network for Character-Scene Interactions
Mameli M.; De Carolis D.; Frontoni E.; Zingaretti P.
2021-01-01
Abstract
Motion synthesis for humanoid characters is one of the main problems in data-driven animation, with applications in robotics, entertainment and game development. Both the commercial and the academic fields show strong interest in developing new synthesis techniques. The attention of researchers has recently turned to artificial-intelligence techniques based on neural networks, in particular recurrent neural networks (RNNs), which are well suited to predicting data sequences such as animations. Despite their success in the scientific literature and the excellent results obtained in practice, RNNs for animation generation still have limitations that prevent their adoption on a larger scale. This work therefore focuses on the phase that follows the generation of interactions by the network. In particular, starting from an existing work in the literature, its limitations were analyzed and solutions were proposed that improve the visual rendering of the Carry and Sit operations. The results are positive and required no intervention on the neural network itself. New objects and characters were successfully introduced, and both pre-existing and imported characters are able to interact with all objects with greater effectiveness, responsiveness and visual fidelity.