Deep Reinforced Navigation of Agents in 2D Platform Video Games / Balloni, E.; Mameli, M.; Mancini, A.; Zingaretti, P. - 14497:(2024), pp. 288-308. [10.1007/978-3-031-50075-6_23]
Deep Reinforced Navigation of Agents in 2D Platform Video Games
Balloni E.; Mameli M.; Mancini A.; Zingaretti P.
2024-01-01
Abstract
Artificial Intelligence in Computer Graphics can be applied to video games to a great extent, from human-computer interaction to character animation. The development of increasingly complex environments and, consequently, ever-larger state spaces has brought the need for new AI approaches. This is why Deep Reinforcement Learning is becoming widespread in this domain as well, enabling the training of agents capable of outperforming humans. This work develops a methodology to train intelligent agents to interact with and navigate through complex 2D environments while achieving different goals. Two platform video games were examined: one is a level-based platformer, which provides a "static" environment, while the other is an endless-type video game, in which elements change randomly every game, making the environment more "dynamic". Different experiments were performed with different configuration settings; in both cases, the trained agents showed good performance, proving the effectiveness of the proposed method. In particular, in both scenarios the stable cumulative reward achieved corresponds to the highest value across all the training runs performed, and the policy and value losses obtained are very low.
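The abstract reports cumulative reward together with policy and value losses, the quantities tracked by actor-critic Deep Reinforcement Learning methods. As a hedged illustration only (the paper's actual algorithm, hyperparameters, and numbers are not given here; the trajectory values below are invented), a minimal sketch of how these two losses are computed from one episode:

```python
# Toy trajectory from a hypothetical 2D platformer episode.
# All numbers are illustrative, not taken from the paper.
rewards = [1.0, 0.0, 0.5, 1.0]        # per-step rewards r_t
values = [0.8, 0.4, 0.6, 0.9]         # critic's value predictions V(s_t)
log_probs = [-0.2, -1.1, -0.7, -0.3]  # log pi(a_t | s_t) of the actions taken
gamma = 0.99                          # discount factor

# Discounted returns G_t, computed backwards over the episode.
returns = []
g = 0.0
for r in reversed(rewards):
    g = r + gamma * g
    returns.append(g)
returns.reverse()

# Advantage A_t = G_t - V(s_t): how much better the outcome was than predicted.
advantages = [g - v for g, v in zip(returns, values)]

# Policy loss: -mean(log pi * A), the policy-gradient objective.
policy_loss = -sum(lp * a for lp, a in zip(log_probs, advantages)) / len(rewards)

# Value loss: mean squared error between returns and critic predictions.
value_loss = sum((g - v) ** 2 for g, v in zip(returns, values)) / len(rewards)
```

As training converges, the critic's predictions approach the observed returns and the advantages shrink, which is why both losses settling at low values (as reported in the abstract) indicates a stable learned policy.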