Terminology & Notation
General Terminology
We follow the terminology of Palacio et al. (2021), distinguishing between the subject of explanation, the explanation process, and the output that conveys information to the user. These concepts are foundational to how we define and evaluate explainability. Throughout this site, we use the Latin plural forms: explananda and explanantia. When we refer to VXAI, we include both the explanation method and its result, since most metrics assess them jointly. Finally, interpretation refers to the user’s internal process of deriving meaning from the explanans.
- Explanandum: What is to be explained, e.g., a model and its prediction.
- Explanation: The process of explaining, i.e., the XAI algorithm.
- Explanans: The explaining information, i.e., the output of the explanation.
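The three concepts can be read as a simple data-flow: the explanandum is the input, the explanation is the function, and the explanans is its output. As a purely illustrative sketch (the class and type names below are ours, not from this site):

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Explanandum:
    """What is to be explained: a model together with an input and its prediction."""
    model: Callable  # the model f
    x: Any           # the input being explained
    prediction: Any  # the model's prediction for x

# The explanans is the explaining information (e.g., a saliency map);
# the explanation is the XAI algorithm mapping explanandum -> explanans.
Explanans = Any
Explanation = Callable[[Explanandum], Explanans]
```

Interpretation, in contrast, happens on the user's side and has no counterpart in this sketch.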
Mathematical Notation
The notation used here aims to be readable rather than strictly formal. Most concepts are framed for classification models but can be adapted to regression. Notation is occasionally overloaded where context rules out ambiguity.
- $\mathcal{X}$ and $\mathcal{Y}$ denote the input and output spaces.
- The model is a scoring function: $f: \mathcal{X} \rightarrow \mathbb{R}^{|\mathcal{Y}|}$.
- $f(x)$ is shorthand for the score vector $\left(f(x)_1, \dots, f(x)_{|\mathcal{Y}|}\right)$.
- The predicted label: $\hat{y} = \arg\max_{y \in \mathcal{Y}} f(x)_y$.
- $x'$ denotes an arbitrary second input, e.g., a perturbed version of $x$.
- $\mathcal{X}_y$ is the set of all inputs with true label $y$.
- An explanandum is the triple $(f, x, \hat{y})$.
- An explanans is written as $e(f, x)$, or simply $e$.
- For ExEs: , with .
- For WBSs: , prediction via .
- $d(\cdot, \cdot)$ denotes a generic dissimilarity function.
- $|\cdot|$ denotes cardinality; $\|\cdot\|_p$ denotes $p$-norms.
- $\lambda$ is a generic parameter; $\varepsilon$ is a small tolerance.
- $AE_y$ is a class-specific autoencoder trained on $\mathcal{X}_y$.
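To make the core notation concrete, here is a minimal sketch mapping a few of the symbols above to code. The toy weight matrix and all variable names are our own illustration, not part of the site's formalism:

```python
import numpy as np

# f : X -> R^{|Y|}, a scoring function; here a toy linear "model"
# over a 2-dimensional input space with 3 classes.
def f(x: np.ndarray) -> np.ndarray:
    W = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [0.5, 0.5]])  # fixed toy weights, illustrative only
    return W @ x                # the score vector f(x)

x = np.array([0.2, 0.8])
scores = f(x)                   # score vector over the 3 classes
y_hat = int(np.argmax(scores))  # predicted label: argmax over class scores

x_prime = x + 0.05              # a perturbed second input
# A dissimilarity between x and the perturbation, via the 2-norm.
d = np.linalg.norm(x - x_prime, ord=2)
```

Here `y_hat` plays the role of the predicted label and `d` instantiates a generic dissimilarity with a $p$-norm ($p = 2$).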
Metric vs. Measure
We use the term metrics to refer to evaluation criteria for explainability. To distinguish them from standard scoring functions, we refer to the latter (e.g., accuracy or cosine similarity) as measures.
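The distinction can be seen in how an explainability metric typically re-uses a measure as a building block. The following sketch (our own toy example, not a metric defined on this site) evaluates an explanation method's stability by applying a measure, cosine similarity, to the explanantia of an input and its perturbation:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # A *measure*: a standard scoring function between two vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def stability_metric(explain, x: np.ndarray, x_prime: np.ndarray) -> float:
    # A toy *metric*: it evaluates an explanation method by comparing
    # the explanantia of x and a perturbed input x' with a measure.
    return cosine_similarity(explain(x), explain(x_prime))
```

A metric judges explainability; a measure is just the scoring function it happens to use.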

