In T280467 we are discussing what the Swift buckets should look like, but we should also establish a sound process for uploading models when needed.
As a starting point, we could do something like the following:
- Complete the work in https://gerrit.wikimedia.org/r/c/machinelearning/liftwing/inference-services/+/719668
- Add a Puppet profile to the stat100x machine-learning role that deploys the inference-services repo (keeping it updated) and installs our Swift credentials with proper permissions (maybe something like /etc/s3cmd/config.d/ml-team.cfg, readable only by users in the deploy-ml-service group).
- Modify the script mentioned in the first point (if not already done) so that it can use different config paths.
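For reference, the credentials file could be a standard s3cmd config pointing at Swift's S3-compatible API. This is only a sketch: the path comes from the suggestion above, and the endpoint and key values are placeholders, not the real settings (which Puppet would manage):

```ini
# /etc/s3cmd/config.d/ml-team.cfg -- sketch only, values are placeholders
[default]
access_key = REPLACE_ME           ; ML-team Swift S3-API credential
secret_key = REPLACE_ME
host_base = swift.example.wmnet   ; placeholder Swift endpoint
host_bucket = swift.example.wmnet
use_https = True
```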
In this way, the typical workflow for our team would be:
- ssh to stat100x
- work on a model
- use model_upload.sh with our credentials to push the model to Swift
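To make the last step concrete, here is a minimal sketch of what model_upload.sh could do. The real script is the one under review in the Gerrit change above; the `upload_model` function, the `DRY_RUN` flag, and the default config path are assumptions for illustration only:

```shell
#!/bin/bash
# Hypothetical sketch of model_upload.sh -- not the actual script.
set -eu

upload_model() {
    local model_path="$1" bucket="$2"
    # Config path matches the /etc/s3cmd/config.d/ml-team.cfg suggestion
    # above; override with S3CMD_CONFIG for testing.
    local config="${S3CMD_CONFIG:-/etc/s3cmd/config.d/ml-team.cfg}"

    if [ ! -r "$model_path" ]; then
        echo "model file not readable: $model_path" >&2
        return 1
    fi

    # s3cmd talks to Swift through its S3-compatible API.
    local cmd=(s3cmd --config "$config" put "$model_path" \
               "s3://${bucket}/$(basename "$model_path")")

    if [ "${DRY_RUN:-0}" = "1" ]; then
        # Dry-run mode: print the command instead of uploading.
        echo "${cmd[*]}"
    else
        "${cmd[@]}"
    fi
}
```

A dry run like `DRY_RUN=1 ./model_upload.sh my-model.bin ml-models` would let us sanity-check the target path before actually pushing anything to Swift.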