
Leveraging LLMs to Discover Causal Dependencies: A Case Study on a University Program / Diamantini, Claudia; Gobbi, Chiara; Mele, Alessandro; Potena, Domenico; Rossetti, Cristina. - 556 LNBIP:(2025), pp. 237-248. [10.1007/978-3-031-94931-9_19]

Leveraging LLMs to Discover Causal Dependencies: A Case Study on a University Program

Diamantini, Claudia; Gobbi, Chiara; Mele, Alessandro; Potena, Domenico; Rossetti, Cristina
2025-01-01

Abstract

Defining a process model is essential for understanding organizational workflows and guiding future activities. While process discovery techniques can derive models from event logs, designing a process model from scratch remains challenging, as it relies on human experts to interpret requirements. This work explores process model elicitation in the educational domain, where courses in a university program are treated as activities with prerequisite relationships. We propose a methodology leveraging Large Language Models to automatically infer causal dependencies among courses by analyzing their syllabi. Using pairwise prompting with step-back and chain-of-thought reasoning, the extracted dependencies are used to build a Directly-Follows Graph, subsequently evaluated by domain experts. Experimental results show that Large Language Models can effectively identify and justify course dependencies, offering valuable insights that can enhance curriculum design and give students clearer guidance on course sequencing. The approach also highlights gaps in prerequisite structures, suggesting a broader application in educational planning and dynamic learning environments.
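The pairwise-prompting idea outlined in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the prompt wording, the course names and syllabi, and the `ask_llm` function are all hypothetical placeholders (a real system would call an LLM API and parse its verdict; here the call is stubbed with a fixed lookup so the sketch runs end-to-end).

```python
from itertools import permutations

# Hypothetical toy syllabi; real input would be the program's actual course syllabi.
COURSES = {
    "Calculus I": "limits, derivatives, integrals",
    "Calculus II": "series, multivariate integrals; builds on derivatives",
    "Programming": "variables, loops, functions",
    "Data Structures": "lists, trees, graphs; assumes programming skills",
}

def build_prompt(a, b, syll_a, syll_b):
    # Step-back question first, then a chain-of-thought instruction,
    # ending with the pairwise yes/no query.
    return (
        f"Step back: what background knowledge does a student need for '{b}'?\n"
        f"Syllabus of '{a}': {syll_a}\n"
        f"Syllabus of '{b}': {syll_b}\n"
        f"Think step by step, then answer: is '{a}' a prerequisite of '{b}'? (YES/NO)"
    )

# Stub standing in for the LLM call: answers YES only for a fixed set of
# pairs, so the sketch is runnable without a model or API key.
KNOWN_PREREQS = {("Calculus I", "Calculus II"), ("Programming", "Data Structures")}

def ask_llm(prompt, pair):
    return "YES" if pair in KNOWN_PREREQS else "NO"

def discover_dfg(courses):
    """Query every ordered course pair and collect YES answers as DFG edges."""
    edges = set()
    for a, b in permutations(courses, 2):
        prompt = build_prompt(a, b, courses[a], courses[b])
        if ask_llm(prompt, (a, b)) == "YES":
            edges.add((a, b))  # directed edge a -> b: take a before b
    return edges

print(sorted(discover_dfg(COURSES)))
```

With the stub above, the discovered edge set is exactly the two seeded prerequisite pairs; in the paper's setting these edges would instead come from the model's justified answers and then be reviewed by domain experts.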
2025
Advanced Information Systems Engineering Workshops
9783031949302
9783031949319
Files in this item:
2025___PMUD_CAiSE.pdf

Embargoed until 14/06/2026

Type: Post-print document (version after peer review, accepted for publication)
License: Publisher-specific license
Size: 473.4 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11566/347836
Citations
  • PMC: ND
  • Scopus: 0
  • Web of Science: 0