Unifying VXAI: A Systematic Review and Framework for the Evaluation of Explainable AI
This website accompanies our paper “Unifying VXAI: A Systematic Review and Framework for the Evaluation of Explainable AI”. The official publication can be found on TMLR's OpenReview page, where it can also be downloaded as a PDF.
In the paper, we present a unified framework for the eValuation of XAI (VXAI), developed through a systematic literature review based on the PRISMA guidelines. We identify 362 relevant publications and group their contributions into 41 functionally similar metric sets.
To structure this landscape, we propose a three-dimensional categorization scheme spanning explanation type, evaluation contextuality, and explanation quality desiderata. This framework provides the most comprehensive overview of VXAI to date, supports systematic metric selection, and lays the foundation for future extensions.
How to cite
@article{dembinsky2026unifying,
title={Unifying {VXAI}: A Systematic Review and Framework for the Evaluation of Explainable {AI}},
author={David Dembinsky and Adriano Lucieri and Stanislav Frolov and Hiba Najjar and Ko Watanabe and Andreas Dengel},
journal={Transactions on Machine Learning Research},
issn={2835-8856},
year={2026},
url={https://openreview.net/forum?id=wAvFLe7o0E},
note={Survey Certification}
}
Explore the Components
Terminology & Notation
A glossary of key terms and notational conventions used across our metric descriptions.
Categorization Scheme
Learn about the structured framework we use to classify VXAI metrics: contextuality levels, quality desiderata, and explanation types.
Helper Functions
Many VXAI metrics rely on configurable components such as preprocessing routines, perturbation strategies, and similarity or distance measures.
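To make the role of such configurable components concrete, here is a minimal sketch of two hypothetical helpers a metric might be parameterized with: a Gaussian-noise perturbation strategy and a cosine similarity measure. The function names and defaults are illustrative assumptions, not definitions from the paper.

```python
import numpy as np

def gaussian_perturbation(x, sigma=0.1, rng=None):
    """One possible perturbation strategy: add i.i.d. Gaussian noise.

    `sigma` is a hypothetical default; metrics typically expose it as a knob.
    """
    rng = np.random.default_rng() if rng is None else rng
    return x + rng.normal(scale=sigma, size=x.shape)

def cosine_similarity(a, b):
    """A common similarity measure between two explanation vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Example: compare an explanation before and after perturbing it.
explanation = np.array([0.5, 0.2, 0.3])
perturbed = gaussian_perturbation(explanation, sigma=0.05)
score = cosine_similarity(explanation, perturbed)
```

A robustness-style metric could then be assembled by swapping in a different perturbation (e.g. masking) or a different measure (e.g. rank correlation) without changing the metric's overall structure.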
Metrics Overview
The table below lists all metrics described in our paper. You can filter by any of the categorization dimensions to narrow down relevant metrics. Selecting a metric will take you to its individual documentation page.
