As an ML engineer,
I want to publish metrics from model servers so that I can build dashboards and alerts on top of them, giving me a clear overview of each model's performance and letting me detect changes in how a model behaves (anomalous behavior, drift detection, etc.).
We are referring both to metrics related to the models themselves and to system metrics. For example, in a binary classification model, a metric could represent the predicted class or the predicted probability.
We need to define a set of standard metric names so that all models/model servers follow the same conventions, e.g. 'predicted_class'.
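As a rough sketch of what this could look like, the snippet below publishes two conventionally named metrics from a model server using the `prometheus_client` library. The metric names (`predicted_class_total`, `predicted_probability`), labels, and the `record_prediction` helper are illustrative assumptions, not an agreed convention.

```python
# Sketch: exposing model metrics with Prometheus naming conventions.
# Metric names, labels, and the helper function are hypothetical examples.
from prometheus_client import Counter, Histogram, start_http_server

# Counter of predictions per class, labelled by model and predicted class.
predicted_class_total = Counter(
    "predicted_class_total",
    "Number of predictions emitted, per predicted class",
    ["model_name", "predicted_class"],
)

# Distribution of predicted probabilities for the positive class.
predicted_probability = Histogram(
    "predicted_probability",
    "Predicted probability of the positive class",
    ["model_name"],
)

def record_prediction(model_name: str, predicted_class: int, probability: float) -> None:
    """Record one prediction in both metrics."""
    predicted_class_total.labels(model_name, str(predicted_class)).inc()
    predicted_probability.labels(model_name).observe(probability)

if __name__ == "__main__":
    # Expose the metrics on http://localhost:8000/metrics for Prometheus to scrape.
    start_http_server(8000)
    record_prediction("income-classifier", 1, 0.87)
```

A Prometheus server would scrape the `/metrics` endpoint, and dashboards/alerts (e.g. in Grafana) could then be defined against the shared metric names.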