Improving knowledge distillation for non-intrusive load monitoring through explainability guided learning / Batic, Djordje; Tanoni, Giulia; Stankovic, Lina; Stankovic, Vladimir; Principi, Emanuele. - (2023). (Paper presented at the 48th IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2023, held in Rhodes Island, 4-10 June 2023) [10.1109/ICASSP49357.2023.10095109].

Improving knowledge distillation for non-intrusive load monitoring through explainability guided learning

Tanoni, Giulia (second author); Principi, Emanuele (last author)

2023-01-01

Abstract

Knowledge distillation (KD) is a machine learning technique widely used in recent years for domain adaptation and complexity reduction. It relies on a Student-Teacher mechanism to transfer the knowledge of a large and complex Teacher network into a smaller Student model. Given the inherent complexity of large Deep Neural Network (DNN) models and the need for deployment on edge devices with limited resources, complexity reduction techniques have become a hot topic in the Non-intrusive Load Monitoring (NILM) community. Recent literature in NILM has devoted increased effort to domain adaptation and architecture reduction via KD. However, the mechanism behind the transfer of knowledge from the Teacher to the Student is not clearly understood. In this work, we address this issue by placing the KD NILM approach within a framework of explainable AI (XAI). We identify the main inconsistency in the transfer of explainable knowledge and exploit this information to propose a method for improving KD through explainability guided learning. We evaluate our approach on a variety of appliances and domain adaptation scenarios and demonstrate that resolving inconsistencies in the transfer of explainable knowledge can lead to improved predictive performance.
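
The abstract describes the approach only at a high level, but the core idea of explainability guided KD can be illustrated with a short, purely hypothetical sketch. The PyTorch snippet below is not the authors' implementation: it combines a supervised NILM loss and a standard output-matching distillation loss with a term that penalises disagreement between input-gradient saliency maps of the Teacher and the Student. The helper names (input_saliency, kd_xai_loss), the saliency proxy, and the weights alpha and beta are all assumptions for illustration.

    # Hypothetical sketch: supervised + distillation + explanation-consistency
    # losses for a NILM Student/Teacher pair (not the paper's exact method).
    import torch
    import torch.nn.functional as F

    def input_saliency(model, x, create_graph=False):
        # Gradient of the summed prediction w.r.t. the aggregate input window,
        # used as a simple stand-in for an XAI attribution map.
        x = x.detach().clone().requires_grad_(True)
        out = model(x).sum()
        (grad,) = torch.autograd.grad(out, x, create_graph=create_graph)
        return grad

    def kd_xai_loss(student, teacher, x, target, alpha=0.5, beta=0.1):
        # alpha and beta are illustrative weights, not values from the paper.
        with torch.no_grad():
            t_out = teacher(x)                   # frozen Teacher predictions
        s_out = student(x)

        loss_sup = F.mse_loss(s_out, target)     # fit ground-truth appliance signal
        loss_kd = F.mse_loss(s_out, t_out)       # match Teacher outputs

        # Explainability-guided term: align Student and Teacher saliency maps
        # computed on the same aggregate window.
        s_expl = input_saliency(student, x, create_graph=True)
        t_expl = input_saliency(teacher, x).detach()
        loss_xai = F.mse_loss(s_expl, t_expl)

        return loss_sup + alpha * loss_kd + beta * loss_xai

In a training loop this combined loss would simply take the place of the usual KD objective; the paper's actual attribution method and loss weighting may differ.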
2023
978-1-7281-6327-7
Files in this product:

Batic_etal_ICASSP_2023_Improving_knowledge_distillation_for_non_intrusive_load_monitoring.pdf
Access: open access
Description: © 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Type: Post-print document (version following peer review and accepted for publication)
License: All rights reserved
Size: 211.91 kB
Format: Adobe PDF

Improving_Knowledge_Distillation_for_Non-Intrusive_Load_Monitoring_VoR_2023.pdf
Access: archive administrators only
Type: Publisher's version (published version with the publisher's layout)
License: All rights reserved
Size: 995.26 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11566/325453
Citations
  • Scopus: 1