KServe lets you attach an Explainer to an Inference Service so that a prediction returned by an ML model can be accompanied by an explanation. The explanation is requested via the :explain endpoint, which sits alongside the usual :predict endpoint.
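As a rough sketch of what a client call looks like: under the KServe v1 protocol, :explain takes the same "instances" request body as :predict, just on a different verb. The host, model name, and feature names below are hypothetical placeholders, not a real service.

```python
import json

# Hypothetical service host and model name, for illustration only.
HOST = "example-model.example.wmf"
MODEL = "example-model"

# KServe v1 protocol: predictions come from ...:predict, explanations
# from the parallel ...:explain verb on the same model path.
explain_url = f"http://{HOST}/v1/models/{MODEL}:explain"

# The request body follows the same "instances" shape as :predict.
payload = json.dumps({"instances": [{"feature_1": 0.2, "feature_2": 7}]})
```

The request would then be POSTed to explain_url with a JSON content type.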
Many of our models at WMF are tree-based and operate on tabular features, which makes computing feature importance fairly straightforward. LIME (Local Interpretable Model-Agnostic Explanations) is an algorithm that can produce such per-prediction explanations for tabular data.
LIME: https://arxiv.org/abs/1602.04938
Code: https://github.com/marcotcr/lime
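To make the idea concrete, here is a minimal from-scratch sketch of what LIME does for tabular data: sample perturbations around the instance being explained, query the black-box model on them, weight the samples by proximity, and fit a weighted linear surrogate whose coefficients act as local feature weights. This is the algorithm's core idea only, not the lime library's API; the black-box function and all settings are made up for illustration.

```python
import math
import random

def black_box(x):
    # Hypothetical non-linear model we want to explain locally.
    return 3.0 * x[0] - 2.0 * x[1] + 0.5 * x[0] * x[1]

def lime_explain(f, instance, n_samples=500, kernel_width=1.0, seed=0):
    """Fit a locally weighted linear surrogate around `instance` and
    return its coefficients as per-feature local weights."""
    rng = random.Random(seed)
    d = len(instance)
    X, y, w = [], [], []
    for _ in range(n_samples):
        # Perturb the instance and weight the sample by proximity.
        z = [v + rng.gauss(0, 1) for v in instance]
        dist2 = sum((a - b) ** 2 for a, b in zip(z, instance))
        w.append(math.exp(-dist2 / kernel_width ** 2))
        X.append([1.0] + z)  # leading 1.0 is the intercept column
        y.append(f(z))
    # Weighted least squares via normal equations: (X^T W X) beta = X^T W y.
    k = d + 1
    A = [[sum(w[i] * X[i][p] * X[i][q] for i in range(n_samples))
          for q in range(k)] for p in range(k)]
    b = [sum(w[i] * X[i][p] * y[i] for i in range(n_samples)) for p in range(k)]
    # Solve the k x k system by Gaussian elimination with partial pivoting.
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            factor = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= factor * A[col][c]
            b[r] -= factor * b[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta[1:]  # drop the intercept; keep per-feature local weights

weights = lime_explain(black_box, [1.0, 2.0])
```

For this toy model the returned weights approximate the local gradient at the instance (about 4.0 and -1.5 at the point [1.0, 2.0]). The real lime library adds feature discretization, categorical handling, and sparse surrogate selection on top of this core loop.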
Some explainability work has already been done for our models (see T196475):
https://github.com/adamwight/ores-lime
Let's try integrating it into a serverless explainer and attaching it to an Inference Service.
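The shape of that integration might look something like the following. In KServe's Python SDK an explainer overrides predict() and explain(), which are dispatched from the :predict and :explain endpoints; the sketch below mimics that interface in plain Python so it stays self-contained, and uses a crude per-feature sensitivity analysis as a placeholder where LIME would be called. The model and feature names are hypothetical.

```python
def tree_model_predict(instance):
    # Stand-in for the real tree-based scorer.
    return 0.8 * instance["feature_a"] - 0.3 * instance["feature_b"]

class ToyExplainer:
    """Toy stand-in for a KServe explainer: the real class would subclass
    kserve.Model and be served by a model server, with these two methods
    backing the :predict and :explain endpoints."""

    def __init__(self, predict_fn, eps=1e-4):
        self.predict_fn = predict_fn
        self.eps = eps

    def predict(self, request):
        return {"predictions": [self.predict_fn(x) for x in request["instances"]]}

    def explain(self, request):
        # Placeholder for LIME: perturb each feature slightly and report
        # the local effect on the score as a per-feature weight.
        explanations = []
        for x in request["instances"]:
            base = self.predict_fn(x)
            weights = {}
            for name in x:
                bumped = dict(x)
                bumped[name] += self.eps
                weights[name] = (self.predict_fn(bumped) - base) / self.eps
            explanations.append(weights)
        return {"explanations": explanations}

explainer = ToyExplainer(tree_model_predict)
result = explainer.explain({"instances": [{"feature_a": 1.0, "feature_b": 2.0}]})
```

In a real deployment, explain() would instead hand the instance to a fitted LimeTabularExplainer and predict() would typically proxy to the predictor container of the Inference Service.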