
Interpretability and reliability-driven knowledge distillation for non-intrusive load monitoring on the edge / Batic, Djordje; Tanoni, Giulia; Principi, Emanuele; Stankovic, Lina; Stankovic, Vladimir; Squartini, Stefano. - In: EXPERT SYSTEMS WITH APPLICATIONS. - ISSN 0957-4174. - 294:(2025). [10.1016/j.eswa.2025.128837]

Interpretability and reliability-driven knowledge distillation for non-intrusive load monitoring on the edge

Tanoni, Giulia; Principi, Emanuele; Squartini, Stefano
2025-01-01

Abstract

The deployment of deep neural networks (DNNs) on resource-constrained edge devices necessitates efficient, low-complexity algorithms. Knowledge distillation (KD) addresses this through a student-teacher paradigm, transferring knowledge from complex teacher models to simpler student models. Current KD methods often optimize student performance without adequately addressing the reliability and interpretability of transferred knowledge, thus presenting challenges in maintaining both robustness and decision transparency. This paper introduces an Interpretability and Reliability-driven Knowledge Distillation (IR-KD) framework that enhances teacher model interpretability through perception-aligned gradients while leveraging hidden information from weak labels to optimize knowledge transfer. Our approach ensures compressed models remain computationally efficient while improving interpretability, which is essential for trustworthy edge AI deployment. We demonstrate improved predictive performance and model interpretability in non-intrusive load monitoring (NILM) applications as a case study. Quantitative explainability metrics confirm that perception-aligned gradients provide more faithful explanations, validating our approach's effectiveness in developing reliable and transparent edge AI systems.
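The student-teacher paradigm the abstract builds on is, in its standard form, a weighted combination of a hard-label loss and a divergence between temperature-softened teacher and student outputs. The sketch below shows this baseline KD objective only; it is not the paper's IR-KD loss, and the function names, temperature, and weighting are illustrative assumptions.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax: a higher T softens the distribution,
    # exposing the teacher's "dark knowledge" about non-target classes.
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

def kd_loss(student_logits, teacher_logits, true_label, T=2.0, alpha=0.5):
    """Baseline (Hinton-style) distillation loss, for illustration only:
    alpha-weighted sum of hard-label cross-entropy and the KL divergence
    between softened teacher and student distributions."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    # KL(teacher || student), scaled by T^2 so gradient magnitudes stay
    # comparable across temperatures (as in the original KD formulation).
    kl = float(np.sum(p_t * (np.log(p_t) - np.log(p_s)))) * T ** 2
    # Standard cross-entropy on the ground-truth label at T = 1.
    ce = -float(np.log(softmax(student_logits)[true_label]))
    return alpha * ce + (1 - alpha) * kl
```

A student whose logits match the teacher's incurs only the small hard-label term, while a disagreeing student is penalized by the KL term as well; the IR-KD framework described above additionally shapes what the teacher transfers via perception-aligned gradients and weak-label information.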
2025
Edge computing; Energy efficiency; Explainable artificial intelligence; Knowledge distillation; Non-intrusive load monitoring
Files in this item:
File: Batic_Interpretability-reliability-driven-knowledge_2025.pdf
Access: open access
Type: Publisher's version (published version with the publisher's layout)
License: Creative Commons
Size: 2.53 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11566/345998
Citations
  • PMC: n/a
  • Scopus: 0
  • Web of Science: 0