
Set up automatic deletion of old l10nupdate caches
Closed, ResolvedPublic

Description

14:48 < bd808> man the root partition on tin is still pretty low on disk space.


14:49 < bd808> apparently /var/lib/l10nupdate
14:50 < bd808> is there no job cleaning up the old branches caches there?
14:50 < bd808> there's no need to keep them around as long as the actual deployed branches

14:51 < mutante> this comes up about once per deploy lately. yesterday it has been said
                 that they get deleted but after 50 days afair

14:52 < bd808> !log deleted /var/lib/l10nupdate/caches/cache-1.27.0-wmf.1[345] on tin. Freed ~4G of disk

14:53 < bd808> mutante: the deployed branches live that long, yes. I wonder if we missed
               an un-puppeted clean up script for l10nupdate in the rebuild though
14:53 < bd808> once a branch rolls off of prod there is no need to have the l10nupdate cache around anymore.
14:54 < bd808> it is only needed while a version is active and will get nightly l10nudpate patches

14:55 < mutante> bd808: that might as well be the case about that script.
                i remember deleting some of them manually in the distant past but that's about it
14:55 < bd808> it seems like the sort of thing somebody may have hacked up and forgot to ever put in puppet

Event Timeline

Could we just purge those when we purge the usual localisation caches in /srv/mediawiki-staging ?

Reedy, how does that (purging usual localisation caches) currently get triggered?

Manually, so just add it to the same script. The point at which we remove/purge localisation stuff for old branches in staging is also a reasonable time to remove these localisation caches from the translation update directories.
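A minimal sketch of what that same-script step could look like: prune any `cache-<version>` directory under `/var/lib/l10nupdate/caches` whose matching `php-<version>` branch is gone from `/srv/mediawiki-staging`. The function name, `dry_run` flag, and directory-matching logic are all hypothetical, not the production script.

```python
# Hypothetical cleanup helper, not the actual deployment script.
# Assumes the layout discussed in this task:
#   caches:  /var/lib/l10nupdate/caches/cache-<version>
#   staging: /srv/mediawiki-staging/php-<version>
import os
import shutil


def prune_stale_caches(cache_root, staging_root, dry_run=True):
    """Remove cache-<version> dirs whose php-<version> branch no longer exists."""
    deployed = {
        name[len('php-'):]
        for name in os.listdir(staging_root)
        if name.startswith('php-')
    }
    removed = []
    for name in os.listdir(cache_root):
        if not name.startswith('cache-'):
            continue
        version = name[len('cache-'):]
        if version not in deployed:
            if not dry_run:
                shutil.rmtree(os.path.join(cache_root, name))
            removed.append(version)
    return sorted(removed)
```

Running with `dry_run=True` first and logging the result (as bd808 did manually with `!log`) would keep the deletion auditable.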

notes from looking into this with @Reedy on irc:

  • scap doesn't seem to know about /var/lib/l10nupdate; instead it drops cdb files in os.path.join(cfg['stage_dir'], 'php-%s' % version, 'cache', 'l10n')
  • /var/lib/l10nupdate is the domain of the l10nupdate job, which runs daily
  • ideally, all of l10nupdate would be ported to scap and would no longer run daily
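For concreteness, the path expression quoted in the first bullet resolves as follows (the `cfg` value and version string here are illustrative, not pulled from production config):

```python
import os

# Illustrative values; 'stage_dir' would come from scap's config in reality.
cfg = {'stage_dir': '/srv/mediawiki-staging'}
version = '1.27.0-wmf.16'

# The cdb drop location from the note above.
cache_dir = os.path.join(cfg['stage_dir'], 'php-%s' % version, 'cache', 'l10n')
print(cache_dir)  # /srv/mediawiki-staging/php-1.27.0-wmf.16/cache/l10n
```

So the caches scap knows about live inside each branch directory and disappear with it, whereas /var/lib/l10nupdate/caches is managed separately and is what accumulates.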
ori raised the priority of this task from Medium to High. May 4 2016, 9:58 AM
ori added subscribers: mmodell, ori.

@mmodell, blocking this on porting l10nupdate to scap doesn't seem reasonable. Could you simply make pruning old l10n caches a formal step in the process you follow to cut new branches?

That should be done until the automatic part is completed, yes (but this task is about the "automatic" part :) ).

@mmodell: can you add the necessary lines to the documentation per Ori's comment?

@greg: sure
@ori: sorry I missed this before.

As for the blocker, I don't mind if we automate it some other way, I just wanted to note that porting it to scap is on the radar and may not be too far off.

Mentioned in SAL [2016-07-08T20:17:06Z] <bd808> Deleted old l10nupdate caches manually on tin (T130317)

fgiunchedi lowered the priority of this task from High to Medium. Dec 1 2016, 7:47 PM

At the moment there are 5 MediaWiki versions in /var/lib/l10nupdate/caches, so I suspect something (or someone) is cleaning up, though I'm not sure what.

demon claimed this task.
demon added a subscriber: demon.

This is fixed via T119747 (really a duplicate of this task).