Crocetti, V.; Dragoni, A. F. (2025). Bridging Large Language Models and Logic Programming. In: 4th IEEE International Conference on Metrology for eXtended Reality, Artificial Intelligence and Neural Engineering (MetroXRAINE 2025), Ancona, Italy, 22-24 October 2025, pp. 1066-1071. doi:10.1109/MetroXRAINE66377.2025.11340308.
Bridging Large Language Models and Logic Programming
Crocetti V.; Dragoni A. F.
2025-01-01
Abstract
This paper introduces a self-correcting framework (source code available at: https://github.com/Pryce22/LLM-Prolog) that converts natural language queries into executable Prolog code. The core of our approach is an iterative refinement loop that leverages structured error feedback from the SWI-Prolog engine. A key feature is our dynamic temperature scheduling mechanism, in which the LLM's temperature is incrementally increased after each failed attempt. This injection of adaptive stochasticity forces the model to escape deterministic failure modes and explore a wider solution space. By tightly integrating Prolog's logic engine, the system produces formally verifiable outputs, enhancing reasoning reliability in logically complex scenarios. Experimental results demonstrate the framework's value as a "logical guardrail". Although a base LLM may achieve higher average accuracy, our method excels at enhancing reliability where formal reasoning is critical. This is highlighted by its ability to recover correct solutions for a significant portion of the cases (9 out of 24) on which a powerful 72B-parameter base model failed, showcasing its utility in applications demanding precision and consistency. The system is deployed in a chat interface, lowering barriers for non-experts while improving productivity for experts using logic programming. Its modular, adaptive design establishes a foundation for combining symbolic reasoning with LLMs, supporting the development of more trustworthy AI systems.
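The refinement loop described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the function names (`generate_prolog`, `run_prolog`), the temperature schedule values, and the toy stand-ins are all hypothetical.

```python
def refine(query, generate_prolog, run_prolog,
           base_temp=0.0, temp_step=0.2, max_attempts=5):
    """Iterative refinement with dynamic temperature scheduling:
    retry generation, raising the sampling temperature and feeding
    the engine's structured error back into the prompt after each
    failed attempt. Schedule values are illustrative assumptions."""
    feedback = None
    for attempt in range(max_attempts):
        temperature = min(base_temp + attempt * temp_step, 1.0)
        code = generate_prolog(query, feedback, temperature)
        ok, feedback = run_prolog(code)  # feedback: structured error text
        if ok:
            return code, attempt + 1
    return None, max_attempts


# Toy stand-ins for demonstration only: this "LLM" escapes its
# deterministic failure mode once the temperature is high enough,
# and the "engine" accepts any clause terminated by a period.
def fake_llm(query, feedback, temperature):
    return "ancestor(X, Y) :- parent(X, Y)." if temperature >= 0.4 else "ancestor(X Y) :-"


def fake_engine(code):
    ok = code.endswith(".")
    return ok, None if ok else "Syntax error: operator expected"


code, attempts = refine("Who is an ancestor?", fake_llm, fake_engine)
```

In this toy run the first two attempts (temperatures 0.0 and 0.2) fail, and the third (0.4) yields a clause the stub engine accepts, illustrating how added stochasticity can break a repeated deterministic failure.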
| File | Access | Type | License | Size | Format |
|---|---|---|---|---|---|
| Crocetti_Bridging-Large-Language-Models_2025.pdf | Archive managers only (copy on request) | Publisher's version (published with the publisher's layout) | All rights reserved | 339.82 kB | Adobe PDF |
| Bridging_Large_Language_Models_and_logic_programming-1.pdf | Open access | Post-print (version after peer review, accepted for publication) | Publisher-specific license | 74.5 kB | Adobe PDF |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.


