A model-agnostic, network theory-based framework for supporting XAI on classifiers / Bonifazi, G.; Cauteruccio, F.; Corradini, E.; Marchetti, M.; Terracina, G.; Ursino, D.; Virgili, L. - In: EXPERT SYSTEMS WITH APPLICATIONS. - ISSN 0957-4174. - 241:(2024). [10.1016/j.eswa.2023.122588]

A model-agnostic, network theory-based framework for supporting XAI on classifiers

G. Bonifazi; E. Corradini; M. Marchetti; D. Ursino; L. Virgili
2024-01-01

Abstract

In recent years, the rapid development of Machine Learning, especially Deep Learning, has led to the widespread adoption of Artificial Intelligence (AI) systems in a wide variety of contexts. Many of these systems provide excellent results but act as black boxes. This can be acceptable in some contexts, but in others (e.g., medical ones) a result returned by a system cannot be accepted without an explanation of how it was obtained. Explainable AI (XAI) is the area of AI devoted to explaining the behavior of AI systems that act as black boxes. In this paper, we propose a model-agnostic XAI framework to explain the behavior of classifiers. Our framework is based on network theory, so it can draw on the large body of results that researchers in this area have obtained over time. Being network-based, our framework differs substantially from the other model-agnostic XAI approaches. Furthermore, it is parameter-free and can handle heterogeneous features that may not even be independent of each other. Finally, it introduces the notion of dyscrasia, which allows us to detect not only which features are important in a particular task but also how they interact with each other.
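The paper's actual construction is not reproduced here. Purely as an illustration of the general idea the abstract describes (features as network nodes, pairwise interaction strengths as edge weights, importance derived from the network), the following sketch uses a hypothetical permutation-based interaction proxy and weighted degree as the importance score; every function and measure below is an assumption for illustration, not the authors' method.

```python
import random

def permutation_effect(predict, X, cols):
    # Mean absolute prediction change when the given columns are
    # jointly shuffled (same row permutation for all columns).
    idx = list(range(len(X)))
    random.shuffle(idx)
    Xp = [row[:] for row in X]
    for c in cols:
        for r, src in enumerate(idx):
            Xp[r][c] = X[src][c]
    return sum(abs(predict(a) - predict(b)) for a, b in zip(X, Xp)) / len(X)

def feature_network(predict, X, n_features):
    # Nodes = features; edge weight = a crude pairwise interaction proxy:
    # the joint permutation effect minus the two individual effects.
    solo = [permutation_effect(predict, X, [i]) for i in range(n_features)]
    edges = {}
    for i in range(n_features):
        for j in range(i + 1, n_features):
            joint = permutation_effect(predict, X, [i, j])
            edges[(i, j)] = abs(joint - solo[i] - solo[j])
    return solo, edges

def weighted_degree(edges, n_features):
    # Network-style importance: sum of incident edge weights per feature.
    deg = [0.0] * n_features
    for (i, j), w in edges.items():
        deg[i] += w
        deg[j] += w
    return deg
```

On a toy black box whose output depends on the product of two features, this kind of score ranks those two features above an irrelevant one, because their connecting edge carries most of the weight; a real framework would of course use a principled interaction measure and richer network statistics.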
Files in this product:

File: 1-s2.0-S0957417423030907-main.pdf (open access)
Type: Publisher's version (published with the publisher's layout)
License: Creative Commons
Size: 1.59 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11566/324431
Citations
  • Scopus: 1
  • Web of Science: 1