
Apply AbuseFilter patch-fix-index
Closed, Resolved · Public

Description

From a quick analysis on T179131, it seems that at least some of the AbuseFilter indexes are incorrect in production (at least on enwiki).

enwiki has:

mysql:wikiadmin@db1066 [enwiki]> show indexes from abuse_filter_log;
+------------------+------------+----------------+--------------+---------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| Table            | Non_unique | Key_name       | Seq_in_index | Column_name   | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
+------------------+------------+----------------+--------------+---------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| abuse_filter_log |          0 | PRIMARY        |            1 | afl_id        | A         |    19022925 |     NULL | NULL   |      | BTREE      |         |               |
| abuse_filter_log |          1 | afl_timestamp  |            1 | afl_timestamp | A         |    19022925 |     NULL | NULL   |      | BTREE      |         |               |
| abuse_filter_log |          1 | afl_namespace  |            1 | afl_namespace | A         |        4264 |     NULL | NULL   |      | BTREE      |         |               |
| abuse_filter_log |          1 | afl_namespace  |            2 | afl_title     | A         |     6340975 |     NULL | NULL   |      | BTREE      |         |               |
| abuse_filter_log |          1 | afl_ip         |            1 | afl_ip        | A         |     2377865 |     NULL | NULL   |      | BTREE      |         |               |
| abuse_filter_log |          1 | afl_rev_id     |            1 | afl_rev_id    | A         |     9511462 |     NULL | NULL   | YES  | BTREE      |         |               |
| abuse_filter_log |          1 | afl_log_id     |            1 | afl_log_id    | A         |           2 |     NULL | NULL   | YES  | BTREE      |         |               |
| abuse_filter_log |          1 | user_timestamp |            1 | afl_user      | A         |     6340975 |     NULL | NULL   |      | BTREE      |         |               |
| abuse_filter_log |          1 | user_timestamp |            2 | afl_user_text | A         |    19022925 |     NULL | NULL   |      | BTREE      |         |               |
| abuse_filter_log |          1 | user_timestamp |            3 | afl_timestamp | A         |    19022925 |     NULL | NULL   |      | BTREE      |         |               |
| abuse_filter_log |          1 | afl_filter     |            1 | afl_filter    | A         |       56615 |     NULL | NULL   |      | BTREE      |         |               |
| abuse_filter_log |          1 | afl_filter     |            2 | afl_timestamp | A         |    19022925 |     NULL | NULL   |      | BTREE      |         |               |
+------------------+------------+----------------+--------------+---------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
12 rows in set (0.00 sec)

It should have had the changes below applied:

https://github.com/wikimedia/mediawiki-extensions-AbuseFilter/blob/master/db_patches/patch-fix-indexes.sql

ALTER TABLE /*_*/abuse_filter_log ADD KEY filter_timestamp (afl_filter,afl_timestamp);
ALTER TABLE /*_*/abuse_filter_log ADD KEY user_timestamp (afl_user,afl_user_text,afl_timestamp);
ALTER TABLE /*_*/abuse_filter_log ADD KEY page_timestamp (afl_namespace, afl_title, afl_timestamp);
ALTER TABLE /*_*/abuse_filter_log ADD KEY ip_timestamp (afl_ip, afl_timestamp);
ALTER TABLE /*_*/abuse_filter_log ADD KEY wiki_timestamp (afl_wiki, afl_timestamp);

ALTER TABLE /*_*/abuse_filter_log DROP KEY afl_filter;
ALTER TABLE /*_*/abuse_filter_log DROP KEY afl_user;
ALTER TABLE /*_*/abuse_filter_log DROP KEY afl_namespace;
ALTER TABLE /*_*/abuse_filter_log DROP KEY afl_ip;

Meaning the resultant table should look like:

CREATE TABLE /*$wgDBprefix*/abuse_filter_log (
	afl_id BIGINT unsigned NOT NULL AUTO_INCREMENT,
	afl_filter varchar(64) binary NOT NULL,
	afl_user BIGINT unsigned NOT NULL,
	afl_user_text varchar(255) binary NOT NULL,
	afl_ip varchar(255) not null,
	afl_action varbinary(255) not null,
	afl_actions varbinary(255) not null,
	afl_var_dump BLOB NOT NULL,
	afl_timestamp binary(14) NOT NULL,
	afl_namespace int NOT NULL,
	afl_title varchar(255) binary NOT NULL,
	afl_wiki varchar(64) binary NULL,
	afl_deleted tinyint(1) NOT NULL DEFAULT 0,
	afl_patrolled_by int unsigned NULL,
	afl_rev_id int unsigned,
	afl_log_id int unsigned,

	PRIMARY KEY (afl_id),
	KEY filter_timestamp (afl_filter,afl_timestamp),
	KEY user_timestamp (afl_user,afl_user_text,afl_timestamp),
	KEY (afl_timestamp),
	KEY page_timestamp (afl_namespace, afl_title, afl_timestamp),
	KEY ip_timestamp (afl_ip, afl_timestamp),
	KEY (afl_rev_id),
	KEY (afl_log_id),
	KEY wiki_timestamp (afl_wiki, afl_timestamp)
) /*$wgDBTableOptions*/;

The ALTER TABLE to run:

ALTER TABLE /*_*/abuse_filter_log
	ADD KEY IF NOT EXISTS filter_timestamp (afl_filter, afl_timestamp),
	ADD KEY IF NOT EXISTS user_timestamp (afl_user, afl_user_text, afl_timestamp),
	ADD KEY IF NOT EXISTS page_timestamp (afl_namespace, afl_title, afl_timestamp),
	ADD KEY IF NOT EXISTS ip_timestamp (afl_ip, afl_timestamp),
	ADD KEY IF NOT EXISTS wiki_timestamp (afl_wiki, afl_timestamp),
	DROP KEY IF EXISTS afl_filter,
	DROP KEY IF EXISTS afl_user,
	DROP KEY IF EXISTS afl_namespace,
	DROP KEY IF EXISTS afl_ip;
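
A quick sanity check after running the statement on a host is to list the remaining key names and compare them against the set in the CREATE TABLE above (a sketch using standard information_schema; note that the IF [NOT] EXISTS clauses above are MariaDB-specific syntax and are not accepted by stock MySQL):

SELECT DISTINCT INDEX_NAME
FROM information_schema.STATISTICS
WHERE TABLE_SCHEMA = DATABASE()
	AND TABLE_NAME = 'abuse_filter_log'
ORDER BY INDEX_NAME;

-- Expected after the ALTER: PRIMARY, afl_log_id, afl_rev_id, afl_timestamp,
-- filter_timestamp, ip_timestamp, page_timestamp, user_timestamp, wiki_timestamp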

Execution progress:

s1

s2

s3

s4

s5

s6

s7

s8
wikidatawiki already has the right schema.

wikitech already has the right schema.


Event Timeline

It gets more fun...

db1066 and db1083 have different indexes!

mysql:wikiadmin@db1083 [enwiki]> show indexes from abuse_filter_log;
+------------------+------------+----------------+--------------+---------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| Table            | Non_unique | Key_name       | Seq_in_index | Column_name   | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
+------------------+------------+----------------+--------------+---------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| abuse_filter_log |          0 | PRIMARY        |            1 | afl_id        | A         |    19386588 |     NULL | NULL   |      | BTREE      |         |               |
| abuse_filter_log |          1 | afl_filter     |            1 | afl_filter    | A         |       36997 |     NULL | NULL   |      | BTREE      |         |               |
| abuse_filter_log |          1 | afl_timestamp  |            1 | afl_timestamp | A         |    19386588 |     NULL | NULL   |      | BTREE      |         |               |
| abuse_filter_log |          1 | afl_namespace  |            1 | afl_namespace | A         |        4156 |     NULL | NULL   |      | BTREE      |         |               |
| abuse_filter_log |          1 | afl_namespace  |            2 | afl_title     | A         |     9693294 |     NULL | NULL   |      | BTREE      |         |               |
| abuse_filter_log |          1 | afl_ip         |            1 | afl_ip        | A         |     1615549 |     NULL | NULL   |      | BTREE      |         |               |
| abuse_filter_log |          1 | afl_rev_id     |            1 | afl_rev_id    | A         |    19386588 |     NULL | NULL   | YES  | BTREE      |         |               |
| abuse_filter_log |          1 | afl_log_id     |            1 | afl_log_id    | A         |           2 |     NULL | NULL   | YES  | BTREE      |         |               |
| abuse_filter_log |          1 | user_timestamp |            1 | afl_user      | A         |     4846647 |     NULL | NULL   |      | BTREE      |         |               |
| abuse_filter_log |          1 | user_timestamp |            2 | afl_user_text | A         |    19386588 |     NULL | NULL   |      | BTREE      |         |               |
| abuse_filter_log |          1 | user_timestamp |            3 | afl_timestamp | A         |    19386588 |     NULL | NULL   |      | BTREE      |         |               |
+------------------+------------+----------------+--------------+---------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
11 rows in set (0.00 sec)

And dewiki has even fewer... The same goes for frwiki and mediawikiwiki (at least)...

mysql:wikiadmin@db1082 [dewiki]> show indexes from abuse_filter_log;
+------------------+------------+---------------+--------------+---------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| Table            | Non_unique | Key_name      | Seq_in_index | Column_name   | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
+------------------+------------+---------------+--------------+---------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| abuse_filter_log |          0 | PRIMARY       |            1 | afl_id        | A         |      526522 |     NULL | NULL   |      | BTREE      |         |               |
| abuse_filter_log |          1 | afl_filter    |            1 | afl_filter    | A         |         472 |     NULL | NULL   |      | BTREE      |         |               |
| abuse_filter_log |          1 | afl_user      |            1 | afl_user      | A         |      105304 |     NULL | NULL   |      | BTREE      |         |               |
| abuse_filter_log |          1 | afl_timestamp |            1 | afl_timestamp | A         |      526522 |     NULL | NULL   |      | BTREE      |         |               |
| abuse_filter_log |          1 | afl_namespace |            1 | afl_namespace | A         |          38 |     NULL | NULL   |      | BTREE      |         |               |
| abuse_filter_log |          1 | afl_namespace |            2 | afl_title     | A         |      526522 |     NULL | NULL   |      | BTREE      |         |               |
| abuse_filter_log |          1 | afl_ip        |            1 | afl_ip        | A         |       19500 |     NULL | NULL   |      | BTREE      |         |               |
| abuse_filter_log |          1 | afl_rev_id    |            1 | afl_rev_id    | A         |      131630 |     NULL | NULL   | YES  | BTREE      |         |               |
| abuse_filter_log |          1 | afl_log_id    |            1 | afl_log_id    | A         |           2 |     NULL | NULL   | YES  | BTREE      |         |               |
+------------------+------------+---------------+--------------+---------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
9 rows in set (0.00 sec)

Much newer wikis look to be correct:

mysql:wikiadmin@db1077 [wikimania2017wiki]> show indexes from abuse_filter_log;
+------------------+------------+------------------+--------------+---------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| Table            | Non_unique | Key_name         | Seq_in_index | Column_name   | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
+------------------+------------+------------------+--------------+---------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| abuse_filter_log |          0 | PRIMARY          |            1 | afl_id        | A         |          39 |     NULL | NULL   |      | BTREE      |         |               |
| abuse_filter_log |          1 | filter_timestamp |            1 | afl_filter    | A         |          13 |     NULL | NULL   |      | BTREE      |         |               |
| abuse_filter_log |          1 | filter_timestamp |            2 | afl_timestamp | A         |          39 |     NULL | NULL   |      | BTREE      |         |               |
| abuse_filter_log |          1 | user_timestamp   |            1 | afl_user      | A         |          39 |     NULL | NULL   |      | BTREE      |         |               |
| abuse_filter_log |          1 | user_timestamp   |            2 | afl_user_text | A         |          39 |     NULL | NULL   |      | BTREE      |         |               |
| abuse_filter_log |          1 | user_timestamp   |            3 | afl_timestamp | A         |          39 |     NULL | NULL   |      | BTREE      |         |               |
| abuse_filter_log |          1 | afl_timestamp    |            1 | afl_timestamp | A         |          39 |     NULL | NULL   |      | BTREE      |         |               |
| abuse_filter_log |          1 | page_timestamp   |            1 | afl_namespace | A         |           9 |     NULL | NULL   |      | BTREE      |         |               |
| abuse_filter_log |          1 | page_timestamp   |            2 | afl_title     | A         |          39 |     NULL | NULL   |      | BTREE      |         |               |
| abuse_filter_log |          1 | page_timestamp   |            3 | afl_timestamp | A         |          39 |     NULL | NULL   |      | BTREE      |         |               |
| abuse_filter_log |          1 | ip_timestamp     |            1 | afl_ip        | A         |           2 |     NULL | NULL   |      | BTREE      |         |               |
| abuse_filter_log |          1 | ip_timestamp     |            2 | afl_timestamp | A         |          39 |     NULL | NULL   |      | BTREE      |         |               |
| abuse_filter_log |          1 | afl_rev_id       |            1 | afl_rev_id    | A         |          13 |     NULL | NULL   | YES  | BTREE      |         |               |
| abuse_filter_log |          1 | afl_log_id       |            1 | afl_log_id    | A         |           2 |     NULL | NULL   | YES  | BTREE      |         |               |
| abuse_filter_log |          1 | wiki_timestamp   |            1 | afl_wiki      | A         |           2 |     NULL | NULL   | YES  | BTREE      |         |               |
| abuse_filter_log |          1 | wiki_timestamp   |            2 | afl_timestamp | A         |          39 |     NULL | NULL   |      | BTREE      |         |               |
+------------------+------------+------------------+--------------+---------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
16 rows in set (0.00 sec)


Can a DBA do some sort of schema diff across the cluster for this table's indexes, to find out what state we're in more globally? Thanks!
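
(Not a substitute for the DBAs' own tooling, but a minimal sketch of a per-host query whose output is easy to diff between hosts; the 'enwiki' schema name is just an example and would be swapped per wiki:)

SELECT INDEX_NAME,
	GROUP_CONCAT(COLUMN_NAME ORDER BY SEQ_IN_INDEX) AS index_columns
FROM information_schema.STATISTICS
WHERE TABLE_SCHEMA = 'enwiki'
	AND TABLE_NAME = 'abuse_filter_log'
GROUP BY INDEX_NAME
ORDER BY INDEX_NAME;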

@Reedy Can you apply that patch on Beta Cluster? Thanks.

> @Reedy Can you apply that patch on Beta Cluster? Thanks.

No, there's no need. Pretty sure all the wikis were created after that was committed. And if they weren't, it's in the DB updaters, so they should've been applied anyway.

Ok. Thanks for the explanation :)

jcrespo triaged this task as Medium priority. Jun 6 2018, 5:56 PM
jcrespo moved this task from Backlog to Pending comment on the DBA board.

As this drift has already created some issues, I will try to work on this as a background task, fixing hosts slowly but steadily.
Now that we can use ADD KEY IF NOT EXISTS and DROP KEY IF EXISTS it will be slightly easier; however, at first glance there are lots of drifts even between hosts in the same section.

@Daimona I have done a quick grep on the mediawiki-extensions-AbuseFilter and mediawiki-core repos to make sure there is no FORCE INDEX on any of the following indexes:

afl_filter
afl_user
afl_namespace
afl_ip

It'd be great if you could double-check this for me, as you probably know better where those are most likely to be found (if there are any).

@Marostegui I checked as well and there seems to be no FORCE INDEX at all. BTW, should you need anything else, you can reach me on IRC or wherever you want.

I have run the ALTER on the codfw DC for the s5 section:

cebwiki
dewiki
enwikivoyage
mgwiktionary
shwiki
srwiki

Next week I am going to start with eqiad. I will go slowly, as we are touching many indexes, to make sure we are not introducing any sort of regression on queries.

s5 eqiad progress

  • labsdb1011
  • labsdb1010
  • labsdb1009
  • dbstore1003
  • dbstore1002
  • db1124
  • db1113
  • db1110
  • db1102
  • db1100
  • db1097
  • db1096
  • db1082
  • db1070

@Daimona I have dropped and created the indexes on a bunch of hosts on s5; can you check from your side that everything looks good (query-time-wise) on the following wikis?

cebwiki
dewiki
enwikivoyage
mgwiktionary
shwiki
srwiki

I just want to make sure that we are not introducing any regressions by removing some of the indexes (and adding some).
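
(For context, the drops should be safe for the usual lookups because each dropped key, afl_filter, afl_user, afl_namespace and afl_ip, is a leftmost prefix of one of the new composite keys. A couple of illustrative examples, with a made-up filter ID and title, of query shapes that stay indexed after the change:)

-- Served by filter_timestamp (afl_filter, afl_timestamp), so the old afl_filter key is redundant
SELECT * FROM abuse_filter_log
WHERE afl_filter = '123'
ORDER BY afl_timestamp DESC LIMIT 50;

-- Served by page_timestamp (afl_namespace, afl_title, afl_timestamp)
SELECT * FROM abuse_filter_log
WHERE afl_namespace = 0 AND afl_title = 'Example'
ORDER BY afl_timestamp DESC LIMIT 50;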

@Marostegui Thanks! I tried several different queries on dewiki while keeping an eye on Tendril, and nothing suspicious showed up, so I think you can safely go on. I'll continue monitoring Tendril.
BTW, IIRC the only wiki where I constantly see slow AbuseFilter-related queries is enwiki, which still has the old indexes. Also, please note that many AF-related slow queries are actually bad queries on the recentchanges table (T214191), so they are not really related to these indexes.

Thanks @Daimona!
Is this an example of a slow query?

SELECT /* IndexPager::buildQueryInfo (AbuseLogPager) xxx */ * FROM `abuse_filter_log` LEFT JOIN `abuse_filter` ON ((af_id=afl_filter)) WHERE afl_filter = '550' AND ((afl_deleted = '0') OR (afl_deleted IS NULL)) ORDER BY afl_timestamp LIMIT 51 /*

s6 eqiad progress

  • labsdb1011
  • labsdb1010
  • labsdb1009
  • dbstore1005
  • dbstore1002
  • dbstore1001
  • db1125
  • db1113
  • db1098
  • db1096
  • db1093
  • db1088
  • db1085
  • db1061

Mentioned in SAL (#wikimedia-operations) [2019-02-25T08:52:03Z] <marostegui> Deploy schema change on s2 on codfw master - lag will happen on s2 codfw - T187295

> Thanks @Daimona!
> Is this an example of a slow query?
>
> SELECT /* IndexPager::buildQueryInfo (AbuseLogPager) xxx */ * FROM `abuse_filter_log` LEFT JOIN `abuse_filter` ON ((af_id=afl_filter)) WHERE afl_filter = '550' AND ((afl_deleted = '0') OR (afl_deleted IS NULL)) ORDER BY afl_timestamp LIMIT 51 /*

Uhm, has it been reported as slow? I couldn't spot it, if so. Yes, this query being slow could be a side-effect of changing indexes.

>> Thanks @Daimona!
>> Is this an example of a slow query?
>>
>> SELECT /* IndexPager::buildQueryInfo (AbuseLogPager) xxx */ * FROM `abuse_filter_log` LEFT JOIN `abuse_filter` ON ((af_id=afl_filter)) WHERE afl_filter = '550' AND ((afl_deleted = '0') OR (afl_deleted IS NULL)) ORDER BY afl_timestamp LIMIT 51 /*
>
> Uhm, has it been reported as slow? I couldn't spot it, if so. Yes, this query being slow could be a side-effect of changing indexes.

No, that query was being reported on enwiki, which hasn't been touched yet.
Can you give one of the slow ones you've seen?

Mentioned in SAL (#wikimedia-operations) [2019-02-25T09:54:49Z] <marostegui> Deploy schema change on db1074, this will generate lag on labsdb:s2 - T187295

s2 eqiad progress

  • labsdb1011
  • labsdb1010
  • labsdb1009
  • dbstore1004
  • dbstore1002
  • db1125
  • db1122
  • db1105
  • db1103
  • db1095
  • db1090
  • db1076
  • db1074
  • db1066

@Daimona the following wikis are now fully altered. I am going to give it some hours before continuing and will monitor Tendril:
s2:
bgwiki
bgwiktionary
cswiki
enwikiquote
enwiktionary
eowiki
fiwiki
idwiki
itwiki
nlwiki
nowiki
plwiki
ptwiki
svwiki
thwiki
trwiki
zhwiki

s5:
cebwiki
dewiki
enwikivoyage
mgwiktionary
shwiki
srwiki

s6:
frwiki
jawiki
ruwiki

>>> Thanks @Daimona!
>>> Is this an example of a slow query?
>>>
>>> SELECT /* IndexPager::buildQueryInfo (AbuseLogPager) xxx */ * FROM `abuse_filter_log` LEFT JOIN `abuse_filter` ON ((af_id=afl_filter)) WHERE afl_filter = '550' AND ((afl_deleted = '0') OR (afl_deleted IS NULL)) ORDER BY afl_timestamp LIMIT 51 /*
>>
>> Uhm, has it been reported as slow? I couldn't spot it, if so. Yes, this query being slow could be a side-effect of changing indexes.
>
> No, that query was being reported on enwiki, which hasn't been touched yet.
> Can you give one of the slow ones you've seen?

Ah yes, IIRC that query is one of the reported slow queries happening due to the lack of the new indexes. I remember asking about it in #wikimedia-databases a few days ago. Thanks for your work, I'll keep an eye on Tendril as well.

Nothing weird seems to have happened during the last 6 hours, and this is good. However, I guess that any possible trouble won't be visible before reaching enwiki...

> Nothing weird seems to have happened during the last 6 hours, and this is good. However, I guess that any possible trouble won't be visible before reaching enwiki...

Yeah, my plan was to go to one enwiki replica next and execute the above queries (and others) manually before and after the index changes.

Great! Let me know if I can help, e.g. by playing around with AF on enwiki.

Yeah, I will ask for that once we have more replicas altered; otherwise you might sometimes hit one that has been altered and other times one that has not :)
Thank you!

Change 492951 had a related patch set uploaded (by Marostegui; owner: Marostegui):
[operations/mediawiki-config@master] db-eqiad.php: Depool db1083

https://gerrit.wikimedia.org/r/492951

Change 492951 merged by jenkins-bot:
[operations/mediawiki-config@master] db-eqiad.php: Depool db1083

https://gerrit.wikimedia.org/r/492951

Mentioned in SAL (#wikimedia-operations) [2019-02-26T06:17:19Z] <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1083 T187295 (duration: 00m 51s)

Mentioned in SAL (#wikimedia-operations) [2019-02-26T06:17:38Z] <marostegui> Change abuse_filter_log indexes on db1083 - T187295

I used the following query on db1083 to measure the impact of the index change (I executed the query twice to make sure it was "warm"):

SELECT /* IndexPager::buildQueryInfo (AbuseLogPager) xx */ * FROM `abuse_filter_log` LEFT JOIN `abuse_filter` ON ((af_id=afl_filter)) WHERE afl_filter = '423' AND ((afl_deleted = '0') OR (afl_deleted IS NULL)) ORDER BY afl_timestamp DESC LIMIT 51;

51 rows in set (36.97 sec)

This is the explain:

root@db1083.eqiad.wmnet[(none)]> show explain for 1313887033;
+------+-------------+------------------+--------+---------------+---------------+---------+------------------------------------+----------+-----------------------------+
| id   | select_type | table            | type   | possible_keys | key           | key_len | ref                                | rows     | Extra                       |
+------+-------------+------------------+--------+---------------+---------------+---------+------------------------------------+----------+-----------------------------+
|    1 | SIMPLE      | abuse_filter_log | index  | afl_filter    | afl_timestamp | 16      | const                              | 22661459 | Using where; Using filesort |
|    1 | SIMPLE      | abuse_filter     | eq_ref | PRIMARY       | PRIMARY       | 8       | enwiki.abuse_filter_log.afl_filter |        1 | Using where                 |
+------+-------------+------------------+--------+---------------+---------------+---------+------------------------------------+----------+-----------------------------+
2 rows in set, 1 warning (0.00 sec)

After the index change the query flies, as the new filter_timestamp index lets it read only the rows matching afl_filter, already in afl_timestamp order, so no filesort over millions of rows is needed:

51 rows in set (0.00 sec)

And the explain is using the new index:

+------+-------------+------------------+--------+------------------+------------------+---------+------------------------------------+--------+-------------+
| id   | select_type | table            | type   | possible_keys    | key              | key_len | ref                                | rows   | Extra       |
+------+-------------+------------------+--------+------------------+------------------+---------+------------------------------------+--------+-------------+
|    1 | SIMPLE      | abuse_filter_log | ref    | filter_timestamp | filter_timestamp | 66      | const                              | 119340 | Using where |
|    1 | SIMPLE      | abuse_filter     | eq_ref | PRIMARY          | PRIMARY          | 8       | enwiki.abuse_filter_log.afl_filter |      1 | Using where |
+------+-------------+------------------+--------+------------------+------------------+---------+------------------------------------+--------+-------------+
2 rows in set (0.01 sec)

s1 eqiad progress

  • labsdb1011
  • labsdb1010
  • labsdb1009
  • dbstore1003
  • dbstore1002
  • dbstore1001
  • db1124
  • db1119
  • db1118
  • db1106
  • db1105
  • db1099
  • db1089
  • db1083
  • db1080
  • db1067

@Marostegui Thanks, that's nice to hear! Given that queries like that one are pretty common, this is surely a huge performance boost.

> @Marostegui Thanks, that's nice to hear! Given that queries like that one are pretty common, this is surely a huge performance boost.

Only two hosts are pending on s1, db1089 and db1119. I am going to give it some time to make sure no other slow queries show up for this table (I am not seeing any so far on Tendril; in fact, they are disappearing as we alter more and more s1 hosts, but just in case).

db1119 has been altered; only db1089 is left on s1. I have not seen any new slow queries popping up for enwiki hosts on Tendril; in fact, I am seeing fewer (yay!).
Going to give it a few more hours before finishing with db1089 (and then the master).

No more slow queries on enwiki apart from db1089, which isn't altered yet, so I am going to go ahead and alter that one and the master to get enwiki over with.

Mentioned in SAL (#wikimedia-operations) [2019-02-27T14:01:07Z] <marostegui> Change indexes on abuse_filter_log on db1089 - T187295

s7 comes next, as it often shows slow queries for arwiki and eswiki too.

Mentioned in SAL (#wikimedia-operations) [2019-02-27T14:25:59Z] <marostegui> Deploy schema change on abuse_filter_log on s7 codfw - lag will be generated on codfw - T187295

s7 eqiad progress

  • labsdb1011
  • labsdb1010
  • labsdb1009
  • dbstore1003
  • dbstore1002
  • db1125
  • db1116
  • db1101
  • db1098
  • db1094
  • db1090
  • db1086
  • db1079
  • db1062

s4 eqiad progress

  • labsdb1011
  • labsdb1010
  • labsdb1009
  • dbstore1004
  • dbstore1002
  • db1125
  • db1121
  • db1103
  • db1102
  • db1097
  • db1091
  • db1084
  • db1081
  • db1068

Mentioned in SAL (#wikimedia-operations) [2019-02-28T08:22:20Z] <marostegui> Change abuse_filter_log indexes on s3 codfw, lag will appear on codfw - T187295

s3 eqiad progress

  • labsdb1011
  • labsdb1010
  • labsdb1009
  • dbstore1004
  • dbstore1002
  • db1124
  • db1123
  • db1095
  • db1077
  • db1075
  • db1078

All done - I will keep an eye on tendril for the next 24h before closing this to make sure nothing pops up.

Everything looks good!