KServe supports attaching an [[ https://kserve.github.io/website/0.9/modelserving/explainer/explainer/ | Explainer ]] to an Inference Service in order to produce an explanation for a prediction given by an ML model. The explanation is invoked using the `:explain` endpoint.
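For illustration, an `:explain` request uses the same KServe v1 payload shape as `:predict`, with the `:explain` verb appended to the model path. The host and model name below are hypothetical; this sketch only builds the URL and request body rather than sending a live request.

```python
import json

# Hypothetical inference service host and model name.
HOST = "sklearn-iris.default.example.com"
MODEL = "sklearn-iris"

# KServe v1 protocol payload: "instances" holds a batch of inputs.
payload = {"instances": [[6.8, 2.8, 4.8, 1.4]]}

# The explain endpoint mirrors the predict endpoint, with an ":explain" verb.
url = f"http://{HOST}/v1/models/{MODEL}:explain"
body = json.dumps(payload)
print(url)
```

Sending `body` as a POST to `url` (e.g. with `curl` or `requests`) returns the explanation produced by the attached explainer.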
[[ https://docs.seldon.io/projects/alibi/en/stable/index.html | Alibi Explain ]] is an open source Python library implementing various black-box, white-box, local and global explanation methods for classification and regression models.
A black-box model is any model that the explainer cannot inspect or modify; the only interaction with the model is calling its predict function (or similar) on data and receiving predictions back (see [[ https://docs.seldon.io/projects/alibi/en/stable/overview/white_box_black_box.html | White-box and black-box models ]]). However, black-box models **must** support batch prediction, i.e. Alibi explainers assume that the first dimension of the input array is always the batch dimension.
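The batch contract can be sketched with a toy NumPy stand-in for the model (the decision function here is arbitrary, for illustration only): the predict function receives an array whose first dimension is the batch and returns one prediction per row.

```python
import numpy as np

# Toy stand-in for a black-box classifier: the explainer can only call
# predict(X), never look inside the model. X has shape (batch, n_features).
def predict(X: np.ndarray) -> np.ndarray:
    X = np.atleast_2d(X)                 # enforce a leading batch dimension
    scores = X.sum(axis=1)               # arbitrary decision function
    return (scores > 10.0).astype(int)   # one label per batch row

batch = np.array([[1.0, 2.0, 3.0],
                  [5.0, 6.0, 7.0]])
print(predict(batch))  # one prediction per row: [0 1]
```

A function with this signature is exactly what gets passed to an Alibi explainer as the `predictor`.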
KServe docs provide examples of using an Alibi explainer with the Anchor algorithm. The [[ https://docs.seldon.io/projects/alibi/en/stable/methods/Anchors.html | Anchor ]] algorithm is model-agnostic (black-box) and produces human-interpretable explanations, suitable for classification models applied to images, text and tabular data. The Anchor algorithm queries the black-box model in batches of size `batch_size`; a larger `batch_size` gives more confidence in the anchor, at the expense of computation time, since it involves more model prediction calls.
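The batched querying can be sketched as follows. This is a simplified illustration of how an anchor's precision might be estimated by running perturbed samples through the predictor in chunks of `batch_size`; it is not the actual Alibi implementation, and the anchored-feature setup is invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(X):
    # Toy black-box model: class 1 iff the first feature is positive.
    return (X[:, 0] > 0).astype(int)

def estimate_precision(x, n_samples=1000, batch_size=100):
    """Estimate how often perturbations of x preserve the model's prediction,
    querying the model in batches of size batch_size."""
    target = predict(x[None, :])[0]
    agree = 0
    for start in range(0, n_samples, batch_size):
        n = min(batch_size, n_samples - start)
        # Perturb every feature except the (hypothetically anchored) first one.
        perturbed = np.tile(x, (n, 1))
        perturbed[:, 1:] += rng.normal(size=(n, x.size - 1))
        agree += int((predict(perturbed) == target).sum())
    return agree / n_samples

x = np.array([0.5, 0.0, 0.0])
print(estimate_precision(x))  # 1.0: the anchored feature alone fixes the prediction
```

Each loop iteration is one model call on `batch_size` samples, which is why a larger `batch_size` yields more evidence per call but more total prediction work for the same confidence level.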
KServe explainer code: https://github.com/kserve/kserve/blob/master/python/alibiexplainer/alibiexplainer/explainer.py