Reinforcement Learning Meets Logic Programming: Towards Explainable AI / Caroprese, L.; Zumpano, E.; Ursino, D. - 16093 (2026), pp. 13-27 (19th European Conference on Logics in Artificial Intelligence, JELIA 2025, Kutaisi, Georgia, 1-4 September 2025) [10.1007/978-3-032-04587-4_2].
Reinforcement Learning Meets Logic Programming: Towards Explainable AI
D. Ursino
2026-01-01
Abstract
This paper introduces a neuro-symbolic framework designed to predict and explain subsequent facts from current observations. Facts are generated through causal relationships, which can be modeled by a set of propositional logic rules representing the domain knowledge. However, these rules remain unknown to the agent. By observing the facts, the agent constructs an approximation of them, which is then used to predict and explain new facts. The proposed framework can learn and adapt to different environments modeled by various forms of logic programs, also handling negation and recursion. Most notably, it can handle dynamic environments whose structure evolves over time. In these scenarios, the agent modifies its understanding of the environment to capture new observations, guaranteeing that its model of the domain knowledge remains up-to-date. To achieve this goal, our approach leverages the A2C (Advantage Actor-Critic) reinforcement learning algorithm. This choice allows us to integrate reinforcement learning principles into our logic framework. Through this research, we aspire to contribute to the development of explainable neuro-symbolic Artificial Intelligence systems in dynamic environments.
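To make the setting concrete, the sketch below (not the paper's implementation, and not using A2C) shows how a known set of propositional rules of the form `head <- p1, ..., pn, not q1, ..., not qm` would derive the next set of facts from a current observation; the agent described in the abstract must instead *learn* an approximation of such rules from observed facts. All rule and fact names here are hypothetical.

```python
# Minimal sketch, assuming rules are triples (head, positive_body, negative_body).
# One application of the rules to the current facts corresponds to a single
# immediate-consequence step, with negation read as absence of a fact.

def next_facts(facts, rules):
    """Derive the facts that follow from `facts` in one rule-application step."""
    derived = set(facts)
    for head, pos, neg in rules:
        # Fire the rule if every positive premise holds and no negated one does.
        if pos <= facts and not (neg & facts):
            derived.add(head)
    return derived

# Hypothetical domain rules:  b <- a;   c <- b, not d.
rules = [("b", {"a"}, set()),
         ("c", {"b"}, {"d"})]

print(next_facts({"a"}, rules))       # derives b from a
print(next_facts({"a", "b"}, rules))  # derives c as well, since d is absent
```

Iterating `next_facts` to a fixpoint would yield all consequences of a positive program; handling recursion and negation in full generality, as the paper's framework does, requires the semantics of the underlying logic program rather than this single-step sketch.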
| File | Size | Format |
|---|---|---|
| Caroprese_Reinforcement-Learning-Meets-Logic_2026.pdf | 3.31 MB | Adobe PDF |

Access: archive managers only
Type: publisher's version (published version with the publisher's layout)
License: all rights reserved
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.


