As part of the push-to-deploy project, once a git server has triggered a webhook, we need something to invoke `pack`, build the image, and push it to the registry. It then schedules a new deployment in k8s (equivalent to what `webservice` would normally do).
Projects under consideration:
* https://argoproj.github.io/argo-cd/
* GitLab: https://docs.gitlab.com/ce/user/project/clusters/ (whether in production, or a Toolforge/Cloud Services hosted one)
Inputs the CD system will get:
* Git repo URL and commit
* Tool name, image name, and deployment name
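The inputs above might look something like this as a payload; the field names and the tool name `mytool` are illustrative assumptions, not a settled schema:

```python
# Hypothetical shape of the input the CD system receives after a webhook fires.
# Field names and values are assumptions for illustration, not a fixed schema.
payload = {
    "repo_url": "https://gitlab.example.org/toolforge/mytool.git",  # hypothetical repo
    "commit": "0123456789abcdef0123456789abcdef01234567",
    "tool": "mytool",
    "image": "docker-registry.tools.wmflabs.org/tool-mytool:latest",
    "deployment": "mytool",
}
```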
Actions the CD system will do:
* Clone the git repo and checkout the correct commit
* Some sanity checks: a `service.template` exists, is valid YAML, etc.
* Get the stack name from the config
* Run `pack build {image_name} --builder docker-registry.tools.wmflabs.org/toolforge-{stack}-builder:latest --publish`
** This step will need access to the Docker socket, effectively running `pack` as root.
** This step also needs push access to the Docker registry
* Create a new k8s deployment if it doesn't exist yet OR delete the existing pods so the new image is pulled when they restart
** This step needs k8s access
** Sidenote: we'll need some independent mechanism to stop a buildpack web server, maybe we reuse `webservice stop`?
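The steps above could be sketched roughly as follows. This is a minimal sketch, not a real implementation: the function names and the `app={deployment}` label selector are assumptions, and only command construction is shown (actually running these needs git, the Docker socket, registry push access, and k8s credentials, as noted above):

```python
import pathlib
import subprocess

REGISTRY = "docker-registry.tools.wmflabs.org"


def check_service_template(repo_dir):
    """Sanity check: service.template must exist.

    A real implementation would also parse it as YAML and validate the contents.
    """
    path = pathlib.Path(repo_dir) / "service.template"
    if not path.is_file():
        raise RuntimeError("service.template not found")
    return path


def clone_cmds(repo_url, commit, dest):
    # Clone the repo, then check out the exact commit named by the webhook.
    return [
        ["git", "clone", repo_url, dest],
        ["git", "-C", dest, "checkout", commit],
    ]


def pack_cmd(image_name, stack):
    # Build and publish in one step; this is what needs the Docker socket
    # and registry push access.
    return [
        "pack", "build", image_name,
        "--builder", f"{REGISTRY}/toolforge-{stack}-builder:latest",
        "--publish",
    ]


def restart_cmd(namespace, deployment):
    # Deleting the pods makes the ReplicaSet recreate them, which pulls the
    # new image. Assumes pods are labeled app=<deployment>.
    return ["kubectl", "-n", namespace, "delete", "pods",
            "-l", f"app={deployment}"]


def run(cmd):
    subprocess.run(cmd, check=True)
```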
We'll probably want some kind of garbage collection on the CD hosts to delete old images and volumes every so often (but not immediately, so we can take advantage of caching). Given that pack conveniently timestamps everything to 40 years ago to keep images reproducible, there may be no easy way to figure out how old an image actually is; we might just want to delete everything initially.
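Since image creation timestamps aren't usable, the blunt initial approach could be a periodic full prune, sketched below with the standard Docker CLI (the cadence and the decision to drop volumes too are assumptions):

```python
import subprocess


def gc_cmd():
    # Blunt GC: remove all unused images, build cache, and volumes.
    # This throws away layer caching, so it should run infrequently
    # (e.g. from a weekly cron), not after every build.
    return ["docker", "system", "prune", "--all", "--force", "--volumes"]


def run_gc():
    subprocess.run(gc_cmd(), check=True)
```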
Users should be able to:
* View current build progress and status
* Look at failure logs (and successful ones too)
* Retrigger jobs that failed for flakiness reasons, aka "recheck" (I suppose this is optional, but really nice to have)