
Strategy for how to change the "cu_changes" DB schema
Closed, Resolved · Public

Description

I want to change the DB schema of the "cu_changes" table to implement global CheckUser,
but I suspect that merging each cluster's "cu_changes" into one cluster would make I/O on that single DB cluster expensive.

How do the WMF DBAs handle this kind of thing? A horizontal split? Or should writes stay split across the individual DB clusters?

Strategy

  • Plan 1: Simply add a "cuc_wiki" field to "cu_changes" and merge the tables into a single cluster (see the schema sketch after this list).
  • Plan 2: Add the "cuc_wiki" field as in Plan 1, but create a master table "sN_cu_changes" per cluster, where N is the cluster number (e.g. s3_cu_changes), and merge each wiki's table into its cluster's master table. (I think this keeps the I/O volume per cluster the same.)
  • Plan 3: Keep the current DB schemas; when a query is issued, just SELECT from each wiki's DB. (I suspect this is too slow to get results.)
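
For illustration, here is a minimal sketch of the schema change that Plans 1 and 2 share; the column type and the index are my assumptions, not an actual CheckUser patch (cuc_ip_hex and cuc_timestamp are existing cu_changes columns):

```sql
-- Plan 1/2 sketch: tag every row with its source wiki so rows from
-- many wikis can coexist in one merged cu_changes table.
ALTER TABLE cu_changes
  ADD COLUMN cuc_wiki VARBINARY(64) NOT NULL DEFAULT '';

-- Hypothetical index so per-wiki CheckUser lookups on the merged
-- table do not have to scan other wikis' rows.
CREATE INDEX cuc_wiki_ip_hex_time
  ON cu_changes (cuc_wiki, cuc_ip_hex, cuc_timestamp);
```

For Plan 2, the same statements would apply to each per-cluster sN_cu_changes table instead of one central table.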

Event Timeline

Rxy triaged this task as Medium priority. Jan 2 2019, 1:48 PM
Rxy created this task.
Rxy updated the task description.
Rxy updated the task description.

What do you mean by a "single DB cluster"? I don't know much about the cu_changes table, but as far as I know there is one table per wiki (where it is enabled).
If you just need to add a new field to the current table (on each wiki), you'd need to follow the schema change procedure: https://wikitech.wikimedia.org/wiki/Schema_changes#Workflow_of_a_schema_change
Please allow enough time for the schema change to happen (it can take up to months, depending on how big the table is).


"Single cluster" meant central DB e.g. "centralauth.cu_changes" @s7 cluster

Thanks for clarifying that!
So one big table holding the data for all the wikis is probably not a good idea, as you'd be creating one massive table.
I am also not sure about having one table per section as you mentioned (s3_cu_changes), since sections can change dynamically; for example, s3 gets new wikis by default, and those get created relatively often, e.g. T207095, T210762, T212625.
We also move existing databases between sections to balance load, e.g. T184805.

Probably the approach of selecting from each table on each DB is easier to scale and to manage (as nothing needs to be changed). What you might want to do is optimize those queries to make them run as fast as possible, and maybe review the indexes in case you need new ones to satisfy your possibly new queries (see the sketch below).
Reminder: we don't mind (to a certain extent) if you make many queries, as long as they are fast. What can bring down databases are slow and complex queries :)
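
As a rough sketch of that advice for Plan 3, the per-wiki query could stay narrow and indexed; cuc_ip_hex and cuc_timestamp are existing cu_changes columns, the literal values are made up, and CheckUser's schema may already ship an equivalent index:

```sql
-- One fast, narrow query per wiki: look up recent activity for a
-- single hex-encoded IP, newest first, bounded by LIMIT.
SELECT cuc_user_text, cuc_ip, cuc_timestamp, cuc_agent
FROM cu_changes
WHERE cuc_ip_hex = 'C0A80001'            -- 192.168.0.1 in MediaWiki's hex form
  AND cuc_timestamp >= '20181001000000'  -- only look back a bounded window
ORDER BY cuc_timestamp DESC
LIMIT 500;

-- Supporting index for the access pattern above (CheckUser may
-- already define one like it).
CREATE INDEX cuc_ip_hex_time ON cu_changes (cuc_ip_hex, cuc_timestamp);
```

The fan-out then happens in the application: run the same statement against each wiki's database and merge the small result sets in memory.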

> So one big table holding the data for all the wikis is probably not a good idea, as you'd be creating one massive table.

Just FYI: these tables are purged every 90 days, so the table size stays relatively constant. Currently the table on enwiki has ~12 million rows.
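
For context, this retention window is CheckUser's $wgCUDMaxAge setting (90 days by default); conceptually the purge boils down to something like the following sketch, with a made-up cutoff timestamp:

```sql
-- Drop rows older than the retention cutoff (now minus 90 days, in
-- MediaWiki's YYYYMMDDHHMMSS timestamp format); a real purge would
-- typically run in small batches like this to keep each write short.
DELETE FROM cu_changes
WHERE cuc_timestamp < '20181004000000'
LIMIT 500;
```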


But they are still pretty big; these are the current sizes:

s1: 14GB
s2: 15GB
s3: Around 5GB
s4: 12GB
s5: 1.5GB
s6: 17GB
s7: 12GB
s8: 33GB

That is a total size of around 100GB for a single table (probably a bit less, as it is most likely fragmented), but you get what I mean :-)

Thanks for the advice!

I'll implement Plan 3 first. If it turns out we can't wait for the results, I'll switch to Plan 1,
and if we have trouble with Plan 1, I'll switch to Plan 2.

Rxy claimed this task.