The increasing complexity of decision-making in construction facility management demands intelligent systems capable of integrating human expertise with adaptive technologies. Traditional expert systems, while reliable and explainable, remain static and brittle, struggling to cope with dynamic, data-rich domains such as fire safety management, where decisions must reconcile unstructured information, regulations, and life-critical constraints. By contrast, Large Language Models (LLMs) offer remarkable capabilities for knowledge extraction and reasoning but lack the structural rigor, transparency, and accountability required for high-stakes domains. This thesis proposes a hybrid human-AI knowledge engineering framework that combines the structured formalism of CommonKADS with the adaptive reasoning of LLMs through a novel Chain-of-Agents (CoA) methodology. The CoA framework redefines knowledge engineering as a continuous design process rather than a static model-building exercise. Instead of constructing a classical expert system, the approach generates an intelligent decision support system that dynamically adapts to specific cases. Drawing on Allen Newell’s Knowledge Level theory, the framework distinguishes between a meta-level (knowledge design) and an application level (knowledge execution), enabling real-time co-evolution between problem formulation and solution generation. At the meta-level, under the supervision of human experts, LLMs carry out knowledge design by sequentially transforming unstructured information (narratives, building layouts) into structured, case-specific parameters and generate an executable CoA workflow for decision support. Each output is validated by human experts, ensuring accountability, traceability, and domain alignment. At the application level, the validated chain of LLM agents executes reasoning tasks autonomously, producing interpretable, domain-specific recommendations.
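The two-level structure described above (a meta-level that designs and validates an agent workflow, and an application level that executes it as a chain) can be sketched in miniature. This is a hypothetical illustration, not the thesis implementation: `fake_llm`, the hard-coded task list, and the trivial validator are all stand-ins for real model calls and real expert review.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    name: str
    task: str
    run: Callable[[str], str]

def fake_llm(prompt: str) -> str:
    # Stand-in for a real model call (e.g., a hosted LLM API).
    return f"[answer to: {prompt[:40]}]"

def design_workflow(case: str) -> list[Agent]:
    # Meta-level: derive case-specific reasoning steps.
    # Hard-coded here; in the framework an LLM would propose them.
    steps = ["extract parameters", "derive reasoning rules",
             "propose prevention and evacuation strategies"]
    return [Agent(f"agent_{i}", s, fake_llm) for i, s in enumerate(steps)]

def human_validate(workflow: list[Agent]) -> list[Agent]:
    # Expert audit point: steps could be dropped or amended here;
    # this sketch trivially approves every step.
    return [a for a in workflow if a.task]

def execute(workflow: list[Agent], case: str) -> list[str]:
    # Application level: each agent sees the case plus the previous
    # agent's output, so reasoning is chained rather than one-shot.
    context, trace = case, []
    for agent in workflow:
        out = agent.run(f"{agent.task} | context: {context}")
        trace.append(f"{agent.name}: {out}")
        context = out  # pass the output forward along the chain
    return trace

case = "nightclub fire narrative"
trace = execute(human_validate(design_workflow(case)), case)
```

The trace produced by `execute` is what makes the chain auditable: each step's prompt and output can be inspected and corrected by a human expert before the next step consumes it.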
This research applies the CoA framework to fire safety management, one of the most complex and safety-critical areas of facility management. Using case studies based on historical fire incidents reported by the National Fire Protection Association (NFPA), including the Rhythm Club, Cocoanut Grove, and Beverly Hills Supper Club fires, the system demonstrates how LLMs can extract parameters, derive reasoning rules, and propose fire prevention and evacuation strategies from both text narratives and 2D layouts. Preliminary experiments were conducted with GPT-4.1, Gemini 2.5 Flash, and Ollama LLaMA-3 under one-shot, few-shot, and Chain-of-Thought prompting, in two phases, to understand how a single agent performs on simple and complex tasks and to select the most suitable model for CoA executions. The selected model, Gemini 2.5 Flash, then ran the CoA workflows, and this agentic structure was compared with one-shot prompting for complex decision-making. Results show that while simple one-shot prompting led to incomplete and generic outputs, the CoA-based multi-agent reasoning produced structured, transparent, and context-aware results. Quantitative and qualitative evaluations involving certified fire safety engineers confirmed that the hybrid CoA system significantly improved explainability, coherence, and technical soundness compared to single-agent LLM outputs. However, the LLMs often exhibited overconfidence when evaluating their own outputs, highlighting the necessity of human auditing in complex reasoning. An AI-in-the-loop mechanism therefore enabled human experts to audit, refine, and correct LLM-generated knowledge iteratively, mitigating hallucinations and ensuring ethical accountability. By aligning LLM adaptability with structured knowledge engineering, the proposed framework strikes a sound balance: explainable and responsible automation with artificial intelligence.
In conclusion, this research presents a new epistemic model that approaches knowledge creation as a dynamic and iterative design process under human oversight and supported by artificial intelligence. This model advances facility management toward a knowledge-centric paradigm, enabling the creation of dynamic, situation-specific systems within minutes to support complex decision-making processes under uncertainty. Beyond fire safety, the proposed methodology establishes the theoretical and methodological foundation for the next generation of adaptive, explainable, and accountable knowledge-based systems across the built environment. Future research should investigate the alignment of the outputs with regulations and conduct experiments in different complex domains.
The increasing complexity of decision-making processes in facility management requires intelligent systems capable of integrating human expertise with adaptive technologies. Traditional expert systems, although reliable and explainable, remain static and limited, and struggle to address dynamic, data-rich domains such as fire safety, where unstructured information, regulatory constraints, and life-safety requirements must be reconciled. Large Language Models (LLMs) offer significant extraction and reasoning capabilities, but lack the structure and transparency required in high-risk contexts. This thesis proposes a hybrid human-AI knowledge engineering framework that unites the CommonKADS formalism with the adaptive reasoning of LLMs through the Chain-of-Agents (CoA) methodology. The CoA framework interprets knowledge engineering as a continuous design process. Instead of building a static expert system, it generates a decision support system that adapts to specific cases. Drawing on Allen Newell’s Knowledge Level theory, it distinguishes a meta-level (knowledge design) from an application level (execution), enabling co-evolution between problem formulation and resolution. At the meta-level, under human supervision, the LLMs transform unstructured information (such as narratives and floor plans) into structured parameters and then produce the CoA workflow. Experts validate each output, ensuring traceability and consistency. At the application level, the validated chain of LLM agents carries out the reasoning tasks autonomously, generating interpretable recommendations. The research applies the CoA to fire safety, one of the most complex areas of built-asset management.
Through case studies based on historical incidents reported by the National Fire Protection Association (NFPA), the system shows how LLMs can extract parameters, derive rules, and propose prevention and evacuation strategies from information in texts and two-dimensional floor plans. Preliminary experiments with GPT-4.1, Gemini 2.5 Flash, and Ollama LLaMA-3 (one-shot, few-shot, and Chain-of-Thought) evaluated model performance and identified Gemini 2.5 Flash as the most suitable for CoA executions. Indeed, the comparison between CoA and one-shot prompting shows that, while the latter produced incomplete and generic outputs, CoA generated structured, context-aware results. Evaluations by qualified fire safety design engineers confirm that the CoA system improves explainability, coherence, and technical soundness compared with individual LLMs. The LLMs nevertheless show a tendency toward overconfidence, highlighting the need for human audits within the reasoning process. The AI-in-the-loop mechanism in fact allows experts to iteratively verify and correct the generated knowledge, reducing hallucinations and ensuring accountability. By combining the adaptability of LLMs with the rigor of knowledge engineering, the framework offers reliable and explainable automation. In conclusion, the research proposes an epistemic model that treats knowledge creation as a dynamic, iterative process, supervised by humans and supported by AI. This model steers facility management toward a knowledge-centric paradigm, enabling the generation, within minutes, of systems tailored to the specific case. Although validated on building fire safety, the methodology provides a basis for generating adaptive, explainable, and reliable systems in other areas of the built environment as well.
Future research may assess alignment with standards and application in other representative domains.
Large Language Models Powered Expert Systems for Decision Making in Facility Management / Durmus, Dilan. - (2026 Mar).


