References:
Lakkaraju et al. (2016), Lakkaraju et al. (2017), Lakkaraju et al. (2019), Ribeiro et al. (2018), Rawal and Lakkaraju (2020), Warnecke et al. (2020), Moradi and Samwald (2021), Huang et al. (2023b)
An explanation method should ideally be capable of generating a valid explanans for every input instance, regardless of its position on the data manifold. To assess this, we calculate the fraction of inputs for which the method provides a valid result.
The notion of “valid result” is intentionally broad and depends on the explanation type and the use case. A few prominent definitions include:
• For perturbation-based FAs like LIME [Ribeiro et al. (2016)], a valid explanans can be produced only if a sufficient number of perturbed samples fall into the target or the opposite class, enabling a reliable local approximation [Warnecke et al. (2020)].
• For rule-based or tree-based WBSs, coverage is the number of instances that are captured by at least one rule or decision path [Lakkaraju et al. (2016), Lakkaraju et al. (2017), Lakkaraju et al. (2019), Ribeiro et al. (2018), Rawal and Lakkaraju (2020)], optionally restricted to rules associated with the correct class [Moradi and Samwald (2021)].
• For global ExEs, coverage may be defined as the number of inputs that have at least one sufficiently similar explaining instance within a pre-defined distance [Huang et al. (2023b)].
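Two of the definitions above can be sketched as simple coverage computations. The sketch below is illustrative only: the function names, the representation of rules as boolean predicates, and the use of Euclidean distance for exemplar similarity are assumptions, not prescribed by the cited works.

```python
import math

def rule_coverage(instances, rules):
    """WBS-style coverage: fraction of instances captured by at least
    one rule. Here a rule is a hypothetical predicate instance -> bool."""
    if not instances:
        return 0.0
    covered = sum(1 for x in instances if any(rule(x) for rule in rules))
    return covered / len(instances)

def exemplar_coverage(instances, exemplars, max_dist):
    """Global ExE-style coverage: fraction of instances with at least one
    sufficiently similar explaining instance within `max_dist`
    (Euclidean distance is an illustrative choice)."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    if not instances:
        return 0.0
    covered = sum(1 for x in instances
                  if any(dist(x, e) <= max_dist for e in exemplars))
    return covered / len(instances)

# Illustrative data: two toy rules over feature dicts, and 2-D points
# with a single exemplar.
rules = [lambda x: x["age"] > 30, lambda x: x["income"] < 20000]
data = [{"age": 40, "income": 50000},
        {"age": 25, "income": 10000},
        {"age": 25, "income": 30000}]
print(rule_coverage(data, rules))            # 2 of 3 instances covered

points = [(0.0, 0.0), (5.0, 5.0)]
exemplars = [(0.5, 0.0)]
print(exemplar_coverage(points, exemplars, max_dist=1.0))  # 1 of 2 covered
```

Reporting the fraction (rather than the raw count) keeps the metric comparable across datasets of different sizes, matching the "fraction of inputs" formulation above.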

