
[Idea] Deploy MediaWiki to Wikimedia production with a dedicated repo rather than re-using MediaWiki core
Closed, Declined · Public

Description

  • Create a new repo, WikimediaProduction.git or whatever.
  • All app server repos are sub-repos of that (core, vendor, extensions x185, skins x8, config), pointed at master; see the sketch after this list.
  • (Current model)
    • Train branches are made of WikimediaProduction.git only.
    • Back-ported changes are made by updating the pointer of the necessary repo in the branch.
  • (Continuous deployment model)
    • Deploys are live from master (possibly after some back-off period).
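
A minimal sketch of the layout described above, using plain git submodules. The WikimediaProduction.git name comes from this description; the repo URLs, the Echo extension, and the train branch name are illustrative placeholders only:

  # Create the super-project and add each deployed repo as a submodule on master
  git init WikimediaProduction && cd WikimediaProduction
  git submodule add -b master https://gerrit.wikimedia.org/r/mediawiki/core core
  git submodule add -b master https://gerrit.wikimedia.org/r/mediawiki/vendor vendor
  git submodule add -b master https://gerrit.wikimedia.org/r/mediawiki/extensions/Echo extensions/Echo
  # ...repeat for the remaining extension and skin repos...
  git submodule add -b master https://gerrit.wikimedia.org/r/operations/mediawiki-config config
  git commit -m "Initial super-project: all sub-repos pointed at master"

  # Current model: the train branch is cut from the super-project only
  git checkout -b wmf/<version>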

Pros

  1. No wmf/ branch crap in MediaWiki/core.git, vendor.git, or the 192 other extension and skin repos.
  2. Very easy to map to continuous deployment model.
  3. Multi-repo changes can be atomic (see the example after this list).
  4. Config is now versioned alongside the code, so a bunch of forwards/backwards-compatibility engineering can be avoided.
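
As a concrete example of pro 3, a change spanning core and an extension could land as one super-project commit that bumps both submodule pointers together. The extension name and the sha1 placeholders below are illustrative only:

  # Atomic multi-repo change: one super-project commit moves both pointers
  (cd core && git fetch origin && git checkout <core-sha1>)
  (cd extensions/Echo && git fetch origin && git checkout <echo-sha1>)
  git add core extensions/Echo
  git commit -m "Bump core and Echo together for <change>"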

Cons

  1. Change.
  2. Harder to back-port small changes later in the week: the branch can only point at a git sha1 that already exists in the sub-repo, so the fix has to land there first (see the example after this list).
  3. Config is now versioned, so config changes have to be back-ported or wait for the train.
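
To make con 2 concrete: the train branch can only record a submodule pointer to a commit that already exists in the sub-repo, so a late fix must be merged (or cherry-picked) in that repo first, and only then can the branch be bumped. The sha1, branch, and extension names are placeholders:

  # The fix must already exist as <fix-sha1> in the sub-repo before the
  # super-project's train branch can point at it
  git checkout wmf/<version>
  (cd extensions/Echo && git fetch origin && git checkout <fix-sha1>)
  git add extensions/Echo
  git commit -m "Back-port: point extensions/Echo at <fix-sha1>"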

Event Timeline

hashar subscribed.

Does that still fit now that we have MW-on-K8s? We are also phasing out Gerrit for GitLab, which AFAIK doesn't have the concept of a super-project automatically tracking projects via submodules.

Indeed, this idea relied on Gerrit functionality; the feature could be ported to GitLab, but that sounds hard, I don't know whether upstream would take the patch, and if they didn't, maintaining it as a local hack in our instance sounds messy.
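
For reference, the branch tracking that Gerrit automates for super-projects can be expressed in plain git via the branch field in .gitmodules; without that automation (e.g. on GitLab), the pointer bumps would have to be made by hand or by CI. Paths, URL, and extension name below are illustrative:

  # .gitmodules entry recording which branch the submodule tracks
  [submodule "extensions/Echo"]
      path = extensions/Echo
      url = https://gerrit.wikimedia.org/r/mediawiki/extensions/Echo
      branch = master

  # Without automation, pointers are advanced manually or by a CI job:
  git submodule update --remote extensions/Echo
  git add extensions/Echo && git commit -m "Bump extensions/Echo to latest master"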