Exploiting Knowledge Graph Communities to Fine-Tune Large Language Models / Amelio, A.; Buratti, C.; Marchetti, M.; Traini, D.; Ursino, D.; Virgili, L.. - In: EXPERT SYSTEMS WITH APPLICATIONS. - ISSN 0957-4174. - 298:Part C(2026). [10.1016/j.eswa.2025.129816]
Exploiting Knowledge Graph Communities to Fine-Tune Large Language Models
A. Amelio
C. Buratti
M. Marchetti
D. Traini
D. Ursino
L. Virgili
2026-01-01
Abstract
Since the introduction of GPT-2, Large Language Models (LLMs) have proven capable of handling various tasks with impressive performance. However, they sometimes generate incorrect output or even hallucinations. To overcome this problem, many researchers have investigated the possibility of integrating external factual knowledge, such as that encoded in Knowledge Graphs (KGs), into LLMs. Although there are many approaches in the existing literature that integrate KGs and LLMs in different ways, few of them use KGs to fine-tune LLMs, and none of them systematically use KG substructures. In this paper, we propose CoFine (Community-Based Fine-Tuner), an approach to fine-tune an LLM using the communities of a KG. CoFine works as follows: it first divides the KG into communities, each of which contains a homogeneous portion of the knowledge expressed by the KG. It then uses these communities to fine-tune the LLM. This allows LLM fine-tuning to focus on the specific, homogeneous information expressed by each community of the KG. CoFine allows the LLM to achieve very high accuracy in knowledge completion tasks. This is evidenced by comparisons between CoFine and a baseline LLM fine-tuning approach, which showed that our approach achieves better results on all metrics considered across several KGs.

| File | Size | Format | |
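As a rough illustration of the two stages the abstract describes, the following Python sketch partitions a toy KG and verbalizes each community's triples into a per-community training corpus. This is a minimal sketch under stated assumptions, not the authors' pipeline: the triples are invented, and connected components stand in for whatever community-detection algorithm CoFine actually uses.

```python
# Minimal sketch of CoFine's two stages as described in the abstract.
# NOTE: the triples are toy data, and connected components stand in for a
# real community-detection algorithm; none of this is the paper's code.

# Toy KG as (subject, predicate, object) triples.
triples = [
    ("Rome", "capital_of", "Italy"),
    ("Italy", "located_in", "Europe"),
    ("Paris", "capital_of", "France"),
    ("France", "located_in", "Europe"),
    ("Python", "created_by", "Guido van Rossum"),
    ("Python", "typed_as", "dynamic"),
]

# Stage 1: partition the entity graph. Union-find gives connected
# components, used here as a stand-in for KG communities.
parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

for s, _, o in triples:
    union(s, o)

communities = {}
for node in parent:
    communities.setdefault(find(node), set()).add(node)

# Stage 2: verbalize each community's triples into one homogeneous
# fine-tuning chunk (one text per community), so fine-tuning can focus
# on a single coherent portion of the KG at a time.
corpora = []
for nodes in communities.values():
    sentences = [
        f"{s} {p.replace('_', ' ')} {o}."
        for s, p, o in triples
        if s in nodes and o in nodes
    ]
    corpora.append(" ".join(sentences))

print(len(corpora))  # number of community corpora in this toy KG
```

Each element of `corpora` would then be fed to a standard LLM fine-tuning loop; the point of the design is that every training chunk covers one semantically coherent region of the graph rather than a random sample of triples.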
|---|---|---|---|
| Amelio_Exploiting-Knowledge-Graph-Communities_2026.pdf (open access; publisher's version, with the publisher's layout; Creative Commons license) | 1.92 MB | Adobe PDF | View/Open |
Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.


