Our goal is to build a proof of concept of what could be a more modern paradigm for running tools and services.
More concretely, on a correctly configured Python repository:
git push deploy
deploys the Python/uWSGI app in a Docker container on Marathon/Mesos, exposed to the wide world through an HTTP proxy
- Pushing to the deploy remote just pushes to a git repository under our control
- A post-receive hook then:
1. checks the push to determine its type (currently only python-uwsgi is supported)
* A simple Heroku-like manifest at the root of the project should help here. The format we're currently using for toollabs should be good enough
2. builds a new Docker container image based on a base image, adding:
* the code files from the repo
* the MySQL config path
* other things as needed
* packages installed into a virtualenv, if necessary, based on requirements.txt
3. The MySQL server and other common resources will be configured via environment variables, which makes it easy to separate dev/staging/"prod"
4. pushes the new container image to a local Docker registry
5. deploys an instance of the container through Marathon
6. switches the proxy to serve traffic through the new container
7. kills the old container
- The proxy can just be urlproxy from toollabs. It's quite flexible, and we can reuse the same code and scheme!
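The manifest check in step 1 could look something like the sketch below. The exact file format and key names ("type", "requirements") are assumptions here, not a settled spec; any simple key/value format would do.

```python
# Sketch of reading a Heroku-like manifest at the repo root.
# Key names ("type", "requirements") are illustrative assumptions.

def parse_manifest(text):
    """Parse simple 'key: value' lines, ignoring blanks and comments."""
    manifest = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition(":")
        manifest[key.strip()] = value.strip()
    return manifest

def detect_push_type(manifest):
    """Reject anything we can't deploy; only python-uwsgi for now."""
    app_type = manifest.get("type")
    if app_type != "python-uwsgi":
        raise ValueError("unsupported app type: %r" % app_type)
    return app_type
```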
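The image-build step (step 2) could generate a Dockerfile on the fly from the base image plus the repo contents. The base image name, paths, and uwsgi invocation below are all placeholder assumptions:

```python
# Sketch of generating a Dockerfile for a python-uwsgi app.
# Base image name, /app paths, and the CMD are placeholder assumptions.

def make_dockerfile(base_image, has_requirements=True):
    """Assemble a Dockerfile that layers the pushed repo onto a base image."""
    lines = [
        "FROM %s" % base_image,
        # Copy the code files from the pushed repo into the image.
        "COPY . /app",
        "WORKDIR /app",
    ]
    if has_requirements:
        # Install dependencies into a virtualenv inside the image,
        # based on requirements.txt.
        lines.append("RUN virtualenv /app/venv && "
                     "/app/venv/bin/pip install -r requirements.txt")
    lines.append('CMD ["uwsgi", "--ini", "/app/uwsgi.ini"]')
    return "\n".join(lines) + "\n"
```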
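For the Marathon deployment step, Marathon accepts an app definition as JSON POSTed to its /v2/apps endpoint; a sketch of building that payload is below. The resource numbers, app id, and registry address are placeholders, and the env dict is where the per-environment MySQL settings from step 3 would go.

```python
import json

# Sketch of a Marathon v2 app definition for a Docker-based deploy.
# Resource values, ids, and the registry address are placeholders.

def marathon_app(app_id, image, env):
    """Build a Marathon app definition deploying the given Docker image."""
    return {
        "id": app_id,
        "instances": 1,
        "cpus": 0.5,
        "mem": 256,
        "container": {
            "type": "DOCKER",
            "docker": {"image": image, "network": "BRIDGE"},
        },
        # Per-environment config (e.g. MySQL server) travels as env vars.
        "env": env,
    }
```

In practice this dict would be serialized with `json.dumps` and POSTed to the Marathon master; swapping the env dict is all it takes to target dev, staging, or "prod".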
To figure out:
- Building Docker images programmatically: should we just shell out?
- Speed of building a new image, especially if there are local dependencies to be installed (virtualenv)
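On the first open question: shelling out to the docker CLI from the hook is probably the simplest answer. A minimal sketch, with the command construction kept separate from execution (tag and context path are placeholders):

```python
import subprocess

# Sketch of shelling out to `docker build` from the post-receive hook.
# The tag and build-context path are placeholders.

def docker_build_command(tag, context="."):
    """Command line for building an image; separated out for testability."""
    return ["docker", "build", "-t", tag, context]

def build_image(tag, context="."):
    """Run `docker build`; raises CalledProcessError if the build fails."""
    subprocess.check_call(docker_build_command(tag, context))
```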