Right now the co-master support and fanouts (dsh_masters, dsh_proxies) only work with scap, not with deploy. This obviously needs fixing before we move MW over to deploy.
|T114313 [EPIC] Migrate the MW weekly train deploy to scap3
|T147938 Use git as transport mechanism for MediaWiki scap deploys
|T121276 Bring co-master / fanout capabilities to scap3 deployments
|T116630 Remove apache dependency from scap3 deployment host
|T116207 enforcing deployment from `/srv/deployment` is wrong
|T127733 [Spike] Benchmark built-in HTTP server options for scap3 fanout
The current thinking is to integrate something very similar to https://github.com/thcipriani/gpack into the scap repository.
Fanout nodes would then spin up gpack to serve traffic to further groups of nodes. Since this process runs as an unprivileged user, the WSGI server would have to bind to a high (unprivileged) port that is reachable by the remaining nodes in the deploy group.
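To make the fanout idea concrete, here is a minimal sketch (not gpack itself, which wraps the smart git HTTP protocol behind WSGI) of an unprivileged process exporting a repo directory read-only over HTTP on a high port. The repo path and port number are hypothetical examples; a real fanout node would pick a port from whatever range gets opened to the deploy group:

```python
# Sketch only: serve a directory (e.g. a bare git repo, after running
# `git update-server-info` so the dumb HTTP protocol works) on an
# unprivileged high port. Path and port below are hypothetical.
import functools
import http.server
import socketserver


def serve_repo(directory, port=0):
    """Return a threaded HTTP server exporting `directory` read-only.

    port=0 asks the OS for any free port; the chosen port is available
    afterwards via server.server_address[1].
    """
    handler = functools.partial(
        http.server.SimpleHTTPRequestHandler, directory=directory)
    return socketserver.ThreadingTCPServer(("", port), handler)


if __name__ == "__main__":
    REPO_DIR = "/srv/deployment/myrepo/.git"  # hypothetical repo path
    httpd = serve_repo(REPO_DIR, 8517)        # hypothetical high port
    httpd.serve_forever()
```

The parallel-deploy concern in this thread maps to the `port` parameter: each concurrent repo deploy would need its own listening port, hence the request for a port range rather than a single firewall hole.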
It would be nice to get some ops feedback on this idea, since it would require opening a range of ports (to allow parallel fanout deploys of multiple repos) to internal machines.
Talked a bit with @fgiunchedi about this on Monday; he asked us to open up the discussion on Phabricator.
Why shouldn't we just use Apache for this? It's easy to set up a repository-specific virtual host on a high port, configured to serve static git files from a specific directory only to selected nodes.
What would the advantage be in using gpack instead?
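For comparison, the Apache approach described above might look roughly like the following vhost fragment. This is a hypothetical sketch, not an existing config: the port, repo path, and client network are placeholder values, and per-node access control could equally be done with finer-grained `Require ip` entries:

```
# Hypothetical sketch: one vhost per repository on a high port,
# exporting the repo's static git files to selected internal nodes.
Listen 8517
<VirtualHost *:8517>
    # Placeholder path; one DocumentRoot per deployed repository.
    DocumentRoot /srv/deployment/myrepo/.git
    <Directory /srv/deployment/myrepo/.git>
        Options None
        # Placeholder network; restrict to the deploy group's nodes.
        Require ip 10.0.0.0/8
    </Directory>
</VirtualHost>
```

The trade-off being debated in this thread is essentially this config's simplicity (a privileged, already-puppetized daemon) versus gpack's independence from Apache on the deploy host (see T116630).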
Following up here too: at yesterday's meeting I also floated the idea of using a generic HTTP service such as Swift to serve the git repos. Per T64835 (Setup a Swift cluster on beta-cluster to match production), there is now a Swift cluster in beta/deployment-prep that can be used for experiments.