
Update change tag indexes
Closed, Resolved · Public

Description

It seems we have a $wgOldChangeTagsIndex global to select between old and new indexes.

Based on our current site config, only a few wikis have it set to false, but this seems to be rather out of date: I found DB errors from outreachwiki related to it. I suspect most "newer" wikis would need this set to false.

'wgOldChangeTagsIndex' => array(
    'default' => true, // For compat with index on changetags table
    'flaggedrevs_labswikimedia' => false, // -Aaron 4-21-2010
    'fawikinews' => false, // Strangely, this one has the new index names --catrope Oct 27 2010
    'outreachwiki' => false,
),

It'd be nice if we could get production cleaned up, and then remove this damn global.
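For context, the per-wiki lookup behind a setting like this is just a default plus overrides. A minimal sketch in Python (the helper and variable names are hypothetical, not actual MediaWiki/InitialiseSettings code):

```python
# Hypothetical sketch of per-wiki config resolution: use the
# wiki-specific entry if present, otherwise fall back to 'default'.
wg_old_change_tags_index = {
    'default': True,  # old index names, for compat
    'flaggedrevs_labswikimedia': False,
    'fawikinews': False,
    'outreachwiki': False,
}

def uses_old_index(settings, wiki):
    """Return True if `wiki` should query via the old index names."""
    return settings.get(wiki, settings['default'])

print(uses_old_index(wg_old_change_tags_index, 'outreachwiki'))  # False
print(uses_old_index(wg_old_change_tags_index, 'enwiki'))        # True
```

Removing the global amounts to every wiki ending up on the `False` branch, at which point the lookup is dead code.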

maintenance/archives/patch-change_tag-indexes.sql is the patch to run


Version: unspecified
Severity: normal

Details

Reference
bz40867

Event Timeline

bzimport raised the priority of this task to Low.Nov 22 2014, 1:06 AM
bzimport set Reference to bz40867.
Reedy created this task.Oct 8 2012, 7:21 PM
Reedy added a comment.Oct 8 2012, 7:43 PM

I guess the patch should be updated so it does things in fewer separate statements...

ALTER TABLE /*_*/change_tag
DROP INDEX ct_rc_id,
DROP INDEX ct_log_id,
DROP INDEX ct_rev_id,
DROP INDEX ct_tag,
ADD INDEX /*i*/change_tag_rc_tag (ct_rc_id,ct_tag),
ADD UNIQUE INDEX /*i*/change_tag_log_tag (ct_log_id,ct_tag),
ADD UNIQUE INDEX /*i*/change_tag_rev_tag (ct_rev_id,ct_tag),
ADD INDEX /*i*/change_tag_tag_id (ct_tag,ct_rc_id,ct_rev_id,ct_log_id);

ALTER TABLE /*_*/tag_summary
DROP INDEX ts_rc_id,
DROP INDEX ts_log_id,
DROP INDEX ts_rev_id,
ADD UNIQUE INDEX /*i*/tag_summary_rc_id (ts_rc_id),
ADD UNIQUE INDEX /*i*/tag_summary_log_id (ts_log_id),
ADD UNIQUE INDEX /*i*/tag_summary_rev_id (ts_rev_id);
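The ALTER TABLE statements above batch the drops and adds into one pass each, which MySQL supports. As a runnable sketch of the same old-name-to-new-name swap, here is a SQLite toy version (SQLite stands in only to make the example executable; it has no multi-clause ALTER TABLE, so each step is a separate statement, and only the table/index names follow the patch):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute("CREATE TABLE change_tag ("
            "ct_rc_id INTEGER, ct_log_id INTEGER, "
            "ct_rev_id INTEGER, ct_tag TEXT)")
# Old-style indexes (named after their leading column).
cur.execute("CREATE INDEX ct_rc_id ON change_tag (ct_rc_id, ct_tag)")
cur.execute("CREATE INDEX ct_tag ON change_tag (ct_tag)")

# The migration: drop the old names, create the new-style ones.
cur.execute("DROP INDEX ct_rc_id")
cur.execute("DROP INDEX ct_tag")
cur.execute("CREATE INDEX change_tag_rc_tag "
            "ON change_tag (ct_rc_id, ct_tag)")
cur.execute("CREATE INDEX change_tag_tag_id "
            "ON change_tag (ct_tag, ct_rc_id, ct_rev_id, ct_log_id)")

names = sorted(r[0] for r in cur.execute(
    "SELECT name FROM sqlite_master WHERE type = 'index'"))
print(names)  # ['change_tag_rc_tag', 'change_tag_tag_id']
```

Queries keep working throughout because the column set is unchanged; only the index names (and which queries the optimizer can serve cheaply) differ.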

Reedy added a comment.Oct 8 2012, 8:03 PM

Every wiki bar those listed below needs updating:

acewiki
arbcom_fiwiki
arwikimedia
arwikiversity
bdwikimedia
bewikimedia
bewikisource
bjnwiki
boardgovcomwiki
brwikimedia
brwikisource
checkuserwiki
ckbwiki
cowikimedia
dkwikimedia
donatewiki
elwikinews
eowikinews
eowikisource
etwikimedia
fawikinews
fiwikimedia
fiwikiversity
flaggedrevs_labswikimedia
frrwiki
gagwiki
guwikisource
kbdwiki
koiwiki
kowikinews
krcwiki
lezwiki
liquidthreads_labswikimedia
liwikibooks
ltgwiki
mhrwiki
mkwikimedia
movementroleswiki
mrjwiki
mrwikisource
mwlwiki
mxwikimedia
noboard_chapterswikimedia
nsowiki
nycwikimedia
outreachwiki
pcdwiki
pflwiki
pnbwiki
pnbwiktionary
pntwiki
readerfeedback_labswikimedia
ruewiki
ruwikimedia
ruwikiversity
sahwikisource
sawikisource
slwikiversity
sqwikinews
stewardwiki
strategywiki
svwikiversity
tenwiki
test2wiki
trwikimedia
trwikinews
uawikimedia
usabilitywiki
vecwikisource
vepwiki
vewikimedia
wikimania2010wiki
wikimania2011wiki
wikimania2012wiki
wikimania2013wiki
xmfwiki

afeldman wrote:

Is there any config/deploy coordination needed while running the individual schema migrations?

Reedy added a comment.Nov 3 2012, 3:13 AM

(In reply to comment #3)

Is there any config/deploy coordination needed while running the individual
schema migrations?

Maybe. The index usage is not very common; it's mostly via the API.

It'd probably suffice to change them over as and when (i.e. in InitialiseSettings), though doing that up to 800 times isn't really sensible.

The problems can't have been too widespread, as I noticed and fixed this after seeing the errors in the DB log. We could just do it and ignore the errors for the migration period.

afeldman wrote:

I'm running all of these today. I'll change wgOldChangeTagsIndex['default'] to false when done.

afeldman wrote:

Updated all but the following 30 wikis. They have larger change_tag tables, and the schema change isn't compatible with pt-online-schema-change, including the latest version. I've tried leaving one of the original unique indexes in place, but still no dice so far. I've updated InitialiseSettings to change the default to false, keeping it true for these last 30.
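One plausible complication with a change like this (whether or not it was the actual blocker on these 30 wikis, which the comment doesn't say) is that the new schema introduces UNIQUE indexes, and any pre-existing duplicate (ct_rev_id, ct_tag) rows make building such an index fail outright. A runnable toy illustration, with SQLite standing in for MySQL:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute("CREATE TABLE change_tag (ct_rev_id INTEGER, ct_tag TEXT)")
cur.executemany("INSERT INTO change_tag VALUES (?, ?)",
                [(1, 'mobile edit'), (1, 'mobile edit'), (2, 'mw-undo')])

# Pairs that would violate the new unique (ct_rev_id, ct_tag) index.
dupes = cur.execute(
    "SELECT ct_rev_id, ct_tag, COUNT(*) FROM change_tag "
    "GROUP BY ct_rev_id, ct_tag HAVING COUNT(*) > 1").fetchall()
print(dupes)  # [(1, 'mobile edit', 2)]

# Building the unique index over that data fails outright.
try:
    cur.execute("CREATE UNIQUE INDEX change_tag_rev_tag "
                "ON change_tag (ct_rev_id, ct_tag)")
    failed = False
except sqlite3.IntegrityError:
    failed = True
print(failed)  # True
```

Running a GROUP BY ... HAVING COUNT(*) > 1 check of this shape before the migration is a cheap way to find rows that need deduplicating first.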

+ 'arwiki' => true,
+ 'commonswiki' => true,
+ 'cswiki' => true,
+ 'dewiki' => true,
+ 'elwiki' => true,
+ 'enwiki' => true,
+ 'enwikisource' => true,
+ 'enwiktionary' => true,
+ 'eswiki' => true,
+ 'etwiki' => true,
+ 'fawiki' => true,
+ 'fiwiki' => true,
+ 'frwiki' => true,
+ 'hewiki' => true,
+ 'huwiki' => true,
+ 'idwiki' => true,
+ 'itwiki' => true,
+ 'jawiki' => true,
+ 'ltwiki' => true,
+ 'mrwiki' => true,
+ 'nlwiki' => true,
+ 'plwiki' => true,
+ 'ptwiki' => true,
+ 'rowiki' => true,
+ 'ruwiki' => true,
+ 'simplewiki' => true,
+ 'svwiki' => true,
+ 'trwiki' => true,
+ 'ukwiki' => true,
+ 'zhwiki' => true,

Change 73353 had a related patch set uploaded by Springle:
simplewiki change_tags indexes updated, bug 40867

https://gerrit.wikimedia.org/r/73353

Change 73353 merged by Springle:
simplewiki change_tags indexes updated, bug 40867

https://gerrit.wikimedia.org/r/73353

Change 73435 had a related patch set uploaded by Springle:
default to newly added change_tags/tag_summary indexes on all remaining wikis in bug 40867 comment 6

https://gerrit.wikimedia.org/r/73435

Change 73435 merged by jenkins-bot:
Remove $wgOldChangeTagsIndex on all remaining wikis in bug 40867 comment 6

https://gerrit.wikimedia.org/r/73435

Present situation:

Both old and new indexes exist on change_tag and tag_summary on the remaining 30 wikis listed in comment 6. Those wikis are configured to use the new indexes.

Holding off on dropping the old indexes until I've had a chance to do a full analysis of bug 51254 and find a workaround.

Krenair added a subscriber: Krenair.
Restricted Application added a subscriber: Aklapper.Sep 2 2015, 4:50 PM
Krinkle moved this task from Triage to Backlog on the DBA board.Sep 23 2015, 4:27 AM
Krinkle moved this task from Backlog to Triage on the DBA board.Sep 23 2015, 7:09 AM
MZMcBride removed Springle as the assignee of this task.Nov 18 2015, 6:40 PM
MZMcBride set Security to None.
MZMcBride added a subscriber: MZMcBride.

@jcrespo is this task still relevant today? I know you and @Marostegui have been doing Herculean amounts of cleanup on this type of thing...

@TTO I cannot say for sure; just researching it requires querying 20,000 databases to check the current state, or implementing T104459.

Marostegui closed this task as Resolved.Oct 9 2017, 12:03 PM

This is all done.
I have checked all the shards and they look consistent. Obviously s3 could have had some stuff slip through, but in general it looks good.