Optimize recentchanges and wbc_entity_usage table across wikis
Closed, Resolved · Public

Description

After enabling xkill (T172914: [Tracking] Fine-grained change notifications based on tracking from Lua getters via __index) and T185693: Implement a (more liberal) usage aspect deduplicator (days: 3), the size of the recentchanges and wbc_entity_usage tables is shrinking drastically. Optimizing them will free a lot of space. The most prominent changes will happen on: cawiki, ruwiki, hywiki, commonswiki, and then, to a lesser extent, on: bewiki, glwiki, frwiki, eswiki, jawiki, huwiki, kowiki, svwiki, nlwiki, fawiki, ptwiki

I would recommend doing the wbc_entity_usage optimization no earlier than next week, so the changes propagate through the caches and the rows are actually removed, and the recentchanges optimization no earlier than next month (let's say late March), so that, with the one-month retention of recentchanges, the table reaches its minimum size. If there is a storage issue before then, we can do it twice. Thank you!
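For reference, the defragmentation itself is a plain OPTIMIZE TABLE run per wiki database on the relevant hosts; a minimal sketch (the rebuild reclaims the space left behind by the removed rows):

    -- Run inside each affected wiki database (e.g. cawiki).
    -- OPTIMIZE TABLE rebuilds the InnoDB table and reclaims free space.
    OPTIMIZE TABLE wbc_entity_usage;
    OPTIMIZE TABLE recentchanges;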

wbc_entity_usage defragmenting process (recentchanges was tracked at T178290); see the size-check sketch after the list:

  • bgwiki s2
  • itwiki s2
  • svwiki s2
  • zhwiki s2
  • bewiki s3
  • cewiki s3
  • dawiki s3
  • hywiki s3
  • ttwiki s3
  • frwiki s6
  • arwiki s7
  • cawiki s7
  • huwiki s7
  • rowiki s7
  • ukwiki s7
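
Before and after the rebuild, the reclaimable space can be estimated from information_schema; a hedged sketch (the exact thresholds and hosts are not specified in this task):

    -- Approximate on-disk size and allocated-but-unused space (DATA_FREE)
    -- for wbc_entity_usage on each wiki database of a given host.
    SELECT TABLE_SCHEMA,
           ROUND((DATA_LENGTH + INDEX_LENGTH) / 1024 / 1024) AS size_mb,
           ROUND(DATA_FREE / 1024 / 1024) AS free_mb
    FROM information_schema.TABLES
    WHERE TABLE_NAME = 'wbc_entity_usage'
    ORDER BY free_mb DESC;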

Event Timeline

Marostegui triaged this task as Medium priority. Feb 16 2018, 6:41 AM
Marostegui moved this task from Triage to Backlog on the DBA board.
Marostegui subscribed.

Thanks! Reducing disk size is always good news! :-)

@Ladsgroup @Marostegui I have a cron job on stat1004 that Sqoops the wbc_entity_usage tables for all projects into a HiveQL table for the Wikidata Concepts Monitor pre-processing. The cron job runs on a weekly schedule. Please let me know if you think it would be affected by whatever optimization you plan to do there. No serious problems would be caused if I would need to drop a weekly update or two if you predict any interactions. Thanks.

There should not be any issues between those optimizations and your cron job.
Just so we know, which day/time does it run?

Sqoops the wbc_entity_usage tables

Also, which DB server is being used? analytics-replica / analytics store / dbstore1002 is fine for that; others are not.

@Marostegui m = 0, h = 0, dom = 7,14,21,29, mon = *, dow = *, i.e. it runs at 00:00 UTC on the 7th, 14th, 21st, and 29th of each month.

@jcrespo The R script that orchestrates Apache Sqoop connects to analytics-store.eqiad.wmnet using my analytics-research-client.cnf credentials from stat1004 - I don't know exactly which server that resolves to.

That's ok, it goes to dbstore1002 then. All good.
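
For context, the weekly Wikidata Concepts Monitor import discussed above only reads from wbc_entity_usage, which is consistent with the comment that the optimizations should not affect it; the extraction is roughly of this shape (a sketch; the actual Sqoop query and column list are assumptions):

    -- Assumed shape of the weekly read-only extraction, per wiki database;
    -- the real Sqoop job may select different columns.
    SELECT eu_entity_id, eu_aspect, eu_page_id
    FROM wbc_entity_usage;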

commonswiki errors due to deadlocks on INSERT IGNORE INTO wbc_entity_usage seem to be common (not too worrying, but one of the most common database errors). Could the code be optimized to avoid those? I am guessing that the same row is written many times (once per change on the same item), and maybe that could be simplified somehow. INSERT IGNORE is a bit of a bad trick here, and we may be writing the same data multiple times without need. Given that the changes are done by the job queue and arrive in any order, maybe transaction serialization can be relaxed?

Hmm, I can see why; it is mostly job queue stuff. Can you file a Phabricator ticket so we can pick it up and do something about it?
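
For context, a simplified sketch of the write pattern under discussion: concurrent jobs may attempt to insert the same usage row and rely on INSERT IGNORE plus a unique key to drop duplicates, which is where the deadlocks and the redundant writes come from (column names follow the wbc_entity_usage schema; the values and the exact statement used by Wikibase are assumptions):

    -- Hypothetical example: several job runs inserting the same usage row.
    -- INSERT IGNORE silently drops the duplicate instead of failing,
    -- but the conflicting inserts can still deadlock under load.
    INSERT IGNORE INTO wbc_entity_usage (eu_entity_id, eu_aspect, eu_page_id)
    VALUES ('Q42', 'L.en', 12345);

The suggestion above is to avoid rewriting rows that already exist (or to relax the transaction serialization), since the data being written does not actually change.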

Mentioned in SAL (#wikimedia-operations) [2018-07-09T05:41:25Z] <marostegui> Optimize bgwiki itwiki svwiki zhwiki wbc_entity_usage on s2 codfw master with replication - lag will happen on s2 codfw - T187521

Mentioned in SAL (#wikimedia-operations) [2018-07-09T06:03:46Z] <marostegui> Optimize bgwiki itwiki svwiki zhwiki wbc_entity_usage on dbstore1002:s2, db1122 and db1105 - T187521

Mentioned in SAL (#wikimedia-operations) [2018-07-09T08:31:07Z] <marostegui> Optimize bgwiki itwiki svwiki zhwiki wbc_entity_usage on db1074 with replication, this will generate lag on s2 on labs hosts - T187521
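
The SAL entries alternate between two ways of running the rebuild: on a master with replication enabled, so the OPTIMIZE flows to the replicas and causes the noted lag, or directly on individual hosts with binary logging disabled so nothing replicates further. A minimal sketch of the two variants (standard MariaDB statements; the host selection follows the log lines, not this sketch):

    -- Variant 1: run on a master and let it replicate (replicas will lag).
    OPTIMIZE TABLE wbc_entity_usage;

    -- Variant 2: run per host without writing to the binary log,
    -- so the rebuild stays local to that host (requires SUPER).
    SET SESSION sql_log_bin = 0;
    OPTIMIZE TABLE wbc_entity_usage;
    SET SESSION sql_log_bin = 1;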

Marostegui moved this task from Backlog to In progress on the DBA board.

Mentioned in SAL (#wikimedia-operations) [2018-07-10T04:36:43Z] <marostegui> Optimize bgwiki itwiki svwiki zhwiki wbc_entity_usage on db1066 (s2 primary master) - T187521

Mentioned in SAL (#wikimedia-operations) [2018-07-10T04:59:28Z] <marostegui> Optimize frwiki.wbc_entity_usage on s6 codfw, this will generate lag on s6 codfw - T187521

Mentioned in SAL (#wikimedia-operations) [2018-07-10T05:17:21Z] <marostegui> Optimize frwiki.wbc_entity_usage on s6 eqiad hosts T187521

Mentioned in SAL (#wikimedia-operations) [2018-07-10T06:32:37Z] <marostegui> Optimize wbc_entity_usage on arwiki cawiki huwiki rowiki ukwiki on s7 codfw master (db2040) with replication, this will generate lag on s7 codfw - T187521

Mentioned in SAL (#wikimedia-operations) [2018-07-11T05:06:48Z] <marostegui> Optimize wbc_entity_usage on arwiki cawiki huwiki rowiki ukwiki on db1094 - T187521

Mentioned in SAL (#wikimedia-operations) [2018-07-11T05:23:34Z] <marostegui> Optimize wbc_entity_usage on arwiki cawiki huwiki rowiki ukwiki on db1090 - T187521

Mentioned in SAL (#wikimedia-operations) [2018-07-11T05:49:10Z] <marostegui> Optimize wbc_entity_usage on arwiki cawiki huwiki rowiki ukwiki on db1079 with replication, this will generate lag on s7 labs hosts - T187521

Mentioned in SAL (#wikimedia-operations) [2018-07-11T06:12:32Z] <marostegui> Optimize wbc_entity_usage on arwiki cawiki huwiki rowiki ukwiki on db1086 - T187521

Mentioned in SAL (#wikimedia-operations) [2018-07-11T06:30:30Z] <marostegui> Optimize wbc_entity_usage on arwiki cawiki huwiki rowiki ukwiki on db1062 (s7 primary master) - T187521

Mentioned in SAL (#wikimedia-operations) [2018-07-11T06:59:01Z] <marostegui> Optimize wbc_entity_usage on bewiki cewiki dawiki hywiki ttwiki on db2043 (s3 codfw master), this will generate lag on s3 codfw - T187521

Mentioned in SAL (#wikimedia-operations) [2018-07-11T07:51:29Z] <marostegui> Optimize wbc_entity_usage on bewiki cewiki dawiki hywiki ttwiki on dbstore1002:s3, db1078 - T187521

Mentioned in SAL (#wikimedia-operations) [2018-07-11T08:14:27Z] <marostegui> Optimize wbc_entity_usage on bewiki cewiki dawiki hywiki ttwiki on db1077 with replication, this will generate lag on s3 labs hosts - T187521

Mentioned in SAL (#wikimedia-operations) [2018-07-12T04:48:28Z] <marostegui> Optimize wbc_entity_usage on bewiki cewiki dawiki hywiki ttwiki on db1123 - T187521

Mentioned in SAL (#wikimedia-operations) [2018-07-12T05:01:23Z] <marostegui> Optimize wbc_entity_usage on bewiki cewiki dawiki hywiki ttwiki on db1075 (s3 primary master) - T187521

Marostegui updated the task description.

The list of pending wikis has been completed.