
S8 replication issues leading to rows missing during eqiad -> codfw switch (Was: "A few lexemes disappeared")
Closed, Resolved (Public)

Description

I noticed a strange thing. Lexemes from L20540 to L20543 just disappeared. They were neither merged nor deleted. They definitely existed, because they were created by me and I document every Polish lexeme I create on this page. As far as I remember, they were created on 13 September (after the datacenter switch) and they disappeared on 10/11 October (after the datacenter switch back). Could it be that this is related to https://wikitech.wikimedia.org/wiki/Switch_Datacenter#Schedule_for_2018_switch ?

DBA updates:

Action plan: T206743#4658146
Missing rows timeline: T206743#4658274
Cloning hosts progress: T206743#4658362
The only section affected was s8 - T206743#4665589
All pooled replicas are now fixed - T206743#4666493
All tables on s8 master (db1071) fixed (apart from pagelinks and wb_terms, which are in progress): T206743#4673983
All core hosts fixed: T206743#4677827
Second round of checks: T206743#4685983 T206743#4687946
Third round of big tables done, only pending wb_terms: T206743#4688580
Empty tables that need no checking: T206743#4690844
All tables checked and confirmed consistent: T206743#4691048

Event Timeline


Mentioned in SAL (#wikimedia-operations) [2018-10-12T18:25:26Z] <addshore> running modified attachLatest.php script over ~9000 pages on wikidatawiki (with added wait for slaves) T206743

I'll run a maint script to fix the page_latest of the 9000ish pages that have an incorrect one so that we don't end up with more inconsistencies / broken stuff moving forward.
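
For reference, a plain invocation of the core attachLatest.php maintenance script would look roughly like the sketch below; the run mentioned in the SAL entry above used a locally modified copy with an added wait for slaves, and the exact flags used are not recorded here, so treat this as an assumption.

# Sketch only: dry run first to see which pages would be touched, then apply with --fix
# (assuming --fix is what switches the script from reporting to actually updating page_latest).
mwscript attachLatest.php --wiki=wikidatawiki
mwscript attachLatest.php --wiki=wikidatawiki --fix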

But what will happen with the correct data we have in codfw??

Well after the script run both DCs should be correct.

Essentially what has happened is:

  • Switch from codfw to eqiad, resulting in the missing transactions
  • One of the pages that had missing updates was edited during that time, creating a new revision (with a newer rev_id) and bumping the page_latest
  • When we then added the transactions back, the re-applied updates to the page table changed the page_latest back to an ID prior to the one that is actually the latest.

Once the script has finished, the fallout of this whole thing should be limited to some inconsistencies with the pages / history for the pages listed in T206743#4661906, which were edited during the odd window between the switch back to eqiad and the re-adding of these revisions / running of this script.
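
A rough way to spot such pages (a sketch of the general idea, not the exact query used to build the list in T206743#4661906) is to look for pages whose page_latest no longer matches their newest revision:

# Sketch only: host is an example, and in practice this would need to be batched
# on a replica rather than run in one pass over all of wikidatawiki.
mysql.py -hdb1071.eqiad.wmnet wikidatawiki -e "
SELECT p.page_id, p.page_title, p.page_latest, MAX(r.rev_id) AS newest_rev
FROM page p JOIN revision r ON r.rev_page = p.page_id
GROUP BY p.page_id, p.page_title, p.page_latest
HAVING p.page_latest <> MAX(r.rev_id);"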

Mentioned in SAL (#wikimedia-operations) [2018-10-12T18:37:06Z] <addshore> modified attachLatest.php script finished running over 9395 pages T206743

Using the example that I had in T206743#4661943, the pages that were fixable should now be fixed:

MariaDB [wikidatawiki]> select page_id, page_title, page_latest from page where page_id = 99480;
+---------+------------+-------------+
| page_id | page_title | page_latest |
+---------+------------+-------------+
|   99480 | Q97215     |   745463455 |
+---------+------------+-------------+
1 row in set (0.00 sec)

MariaDB [wikidatawiki]> select rev_id from revision where rev_page = 99480 order by rev_id desc limit 2;
+-----------+
| rev_id    |
+-----------+
| 745463455 |
| 737776142 |
+-----------+
2 rows in set (0.00 sec)

@Marostegui I guess you could confirm that using the powers you demonstrated in T206743#4662022 ? :)

root@neodymium:/home/marostegui# ./section s8 | while read host port; do echo $host; mysql.py -h$host:$port wikidatawiki -BN -e "select page_id, page_title, page_latest from page where page_id = 99480;";done
labsdb1011.eqiad.wmnet
99480	Q97215	745463455
labsdb1010.eqiad.wmnet
99480	Q97215	745463455
labsdb1009.eqiad.wmnet
99480	Q97215	745463455
dbstore2001.codfw.wmnet
99480	Q97215	745463455
dbstore1002.eqiad.wmnet
99480	Q97215	745463455
db2094.codfw.wmnet
99480	Q97215	745463455
db2086.codfw.wmnet
99480	Q97215	745463455
db2085.codfw.wmnet
99480	Q97215	745463455
db2083.codfw.wmnet
99480	Q97215	745463455
db2082.codfw.wmnet
99480	Q97215	745463455
db2081.codfw.wmnet
99480	Q97215	745463455
db2080.codfw.wmnet
99480	Q97215	745463455
db2079.codfw.wmnet
99480	Q97215	745463455
db2045.codfw.wmnet
99480	Q97215	745463455
db1124.eqiad.wmnet
99480	Q97215	745463455
db1116.eqiad.wmnet
99480	Q97215	745463455
db1109.eqiad.wmnet
99480	Q97215	745463455
db1104.eqiad.wmnet
99480	Q97215	745463455
db1101.eqiad.wmnet
99480	Q97215	745463455
db1099.eqiad.wmnet
99480	Q97215	745463455
db1092.eqiad.wmnet
99480	Q97215	745463455
db1087.eqiad.wmnet
99480	Q97215	745463455
db1071.eqiad.wmnet
99480	Q97215	745463455

The item I gave as an example above still shows the wrong image:

https://www.wikidata.org/wiki/Q2058295

It is also missing other data. Note this diff, the most recent edit (as I write):

https://www.wikidata.org/w/index.php?title=Q2058295&diff=762891812&oldid=745457545

as it was made with Mix'n'Match, only one ID should have been added.

Change 467249 had a related patch set uploaded (by Marostegui; owner: Marostegui):
[operations/mediawiki-config@master] db-eqiad.php: Depool db1109

https://gerrit.wikimedia.org/r/467249

Change 467249 merged by jenkins-bot:
[operations/mediawiki-config@master] db-eqiad.php: Depool db1109

https://gerrit.wikimedia.org/r/467249

Mentioned in SAL (#wikimedia-operations) [2018-10-15T05:16:12Z] <marostegui> Stop MySQL on db1109 for recloning - T206743

Change 467252 had a related patch set uploaded (by Marostegui; owner: Marostegui):
[operations/mediawiki-config@master] db-eqiad.php: Slowly repool db1109

https://gerrit.wikimedia.org/r/467252

Change 467252 merged by jenkins-bot:
[operations/mediawiki-config@master] db-eqiad.php: Slowly repool db1109

https://gerrit.wikimedia.org/r/467252

Change 467260 had a related patch set uploaded (by Marostegui; owner: Marostegui):
[operations/mediawiki-config@master] db-eqiad.php: Fully repool db1109

https://gerrit.wikimedia.org/r/467260

Change 467260 merged by jenkins-bot:
[operations/mediawiki-config@master] db-eqiad.php: Fully repool db1109

https://gerrit.wikimedia.org/r/467260

Change 467261 had a related patch set uploaded (by Marostegui; owner: Marostegui):
[operations/mediawiki-config@master] db-eqiad.php: Depool db1104

https://gerrit.wikimedia.org/r/467261

Change 467261 merged by jenkins-bot:
[operations/mediawiki-config@master] db-eqiad.php: Depool db1104

https://gerrit.wikimedia.org/r/467261

Mentioned in SAL (#wikimedia-operations) [2018-10-15T07:12:16Z] <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1104 - T206743 (duration: 00m 49s)

Mentioned in SAL (#wikimedia-operations) [2018-10-15T07:24:08Z] <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1104 - T206743 (duration: 00m 48s)

Change 467285 had a related patch set uploaded (by Marostegui; owner: Marostegui):
[operations/mediawiki-config@master] db-eqiad.php: Slowly repool db1104

https://gerrit.wikimedia.org/r/467285

@Pigsonthewing Please don't worry. As I said at T206743#4658620, only primary data was recovered so far; cache tables may still show wrong data until we fully recover everything this week. Please be patient, but don't touch anything that seems wrong, as it will only complicate the full recovery we are doing at the moment.

Once the recovery is fully done, we will announce it here and you should have everything back. However, if you edit or try to recover the edits manually, that will generate new revisions and those versions will stay as the latest ones (older versions will be stored in the history), and it may look as if you vandalized (removed content from) an article.

I wrote a little crappy but effective query that I can just run on all of the shards to see if anything else was affected.

select max(rev_id), min(rev_id), count(*), max(rev_id) - min(rev_id) as diff from ( select rev_id, rev_timestamp from revision order by rev_id DESC limit 20000000 ) as something where rev_timestamp > 20180913090816 and rev_timestamp < 20180913095825;

It shows us what happened to the rev_id in the ~50 minute period that we are concerned with.

S1 - enwiki

+-------------+-------------+----------+------+
| max(rev_id) | min(rev_id) | count(*) | diff |
+-------------+-------------+----------+------+
|   859332479 |   859327787 |     4604 | 4692 |
+-------------+-------------+----------+------+

S2 - ptwiki

+-------------+-------------+----------+------+
| max(rev_id) | min(rev_id) | count(*) | diff |
+-------------+-------------+----------+------+
|    53121516 |    53121457 |       58 |   59 |
+-------------+-------------+----------+------+

S3 - afwiki

+-------------+-------------+----------+------+
| max(rev_id) | min(rev_id) | count(*) | diff |
+-------------+-------------+----------+------+
|     1814203 |     1814188 |       16 |   15 |
+-------------+-------------+----------+------+

S4 - commonswiki

+-------------+-------------+----------+------+
| max(rev_id) | min(rev_id) | count(*) | diff |
+-------------+-------------+----------+------+
|   319952253 |   319947702 |     4387 | 4551 |
+-------------+-------------+----------+------+

S5 - dewiki

+-------------+-------------+----------+------+
| max(rev_id) | min(rev_id) | count(*) | diff |
+-------------+-------------+----------+------+
|   180869513 |   180868285 |     1188 | 1228 |
+-------------+-------------+----------+------+

S6 - frwiki

+-------------+-------------+----------+------+
| max(rev_id) | min(rev_id) | count(*) | diff |
+-------------+-------------+----------+------+
|   152158909 |   152157584 |     1301 | 1325 |
+-------------+-------------+----------+------+

S7 - arwiki

+-------------+-------------+----------+------+
| max(rev_id) | min(rev_id) | count(*) | diff |
+-------------+-------------+----------+------+
|    30515544 |    30515320 |      220 |  224 |
+-------------+-------------+----------+------+

So it looks like s8 was indeed the only affected section, @Marostegui
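
For any shard that looked suspicious, one way to drill down further would be to list the individual rev_id gaps inside that window; a sketch (not a query that was actually run here, and host/wiki are just examples):

# Sketch only: for each revision in the suspect window, check whether rev_id + 1 exists.
# Some gaps are normal (deleted/aborted revisions), so this only narrows down candidates.
mysql.py -hdb1071.eqiad.wmnet wikidatawiki -e "
SELECT r.rev_id + 1 AS gap_starts_after
FROM revision r
LEFT JOIN revision nxt ON nxt.rev_id = r.rev_id + 1
WHERE r.rev_timestamp BETWEEN '20180913090816' AND '20180913095825'
  AND nxt.rev_id IS NULL;"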

Change 467285 merged by jenkins-bot:
[operations/mediawiki-config@master] db-eqiad.php: Slowly repool db1104

https://gerrit.wikimedia.org/r/467285

Mentioned in SAL (#wikimedia-operations) [2018-10-15T08:46:38Z] <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Slowly repool db1104 - T206743 (duration: 00m 49s)

@Addshore The main task now is to check the pages whose revisions were lost and then restored, and to check the edits done since last Wednesday on those pages: while the data has not been lost, those edits may have been made on top of an older version and are now the latest, hiding some data. We need to generate a list of Qs to review.
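
A minimal sketch of how such a list could be generated (assumed approach; the timestamp bounds come from the window discussed earlier, and the cut-off for "edits since the switch back" is an assumption):

# Sketch only: pages with a revision in the affected window that were also edited
# after the switch back to eqiad, and therefore need a manual review.
mysql.py -hdb1071.eqiad.wmnet wikidatawiki -e "
SELECT DISTINCT p.page_title
FROM revision restored
JOIN page p ON p.page_id = restored.rev_page
JOIN revision later ON later.rev_page = restored.rev_page AND later.rev_id > restored.rev_id
WHERE restored.rev_timestamp BETWEEN '20180913090816' AND '20180913095825'
  AND later.rev_timestamp >= '20181010000000';"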

Mentioned in SAL (#wikimedia-operations) [2018-10-15T09:46:16Z] <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1092 for recloning - T206743 (duration: 00m 49s)

I'll generate a list once all slaves that serve mw traffic are re-imaged :)

This is true right now for all pooled read only replicas, but I need to fix the master.

Mentioned in SAL (#wikimedia-operations) [2018-10-15T13:16:48Z] <jynus> stopping db1092 and db1087 in sync T206743

So as Jaime said at T206743#4666169, all the pooled replicas in eqiad now have the content from codfw, so they are consistent.
What is pending is db1087 (depooled), which is the master for db1124 (the labsdb master) and labsdb, and of course the s8 master, db1071, which doesn't receive reads.
So labsdb still has inconsistencies.

We are now working on fixing those, starting with db1087, which will automatically (via replication) fix db1124 and labs.

Mentioned in SAL (#wikimedia-operations) [2018-10-16T06:05:03Z] <jynus> stopping db1092 and db1087 in sync T206743

Update from yesterday at around 14:00 UTC
@jcrespo has done an amazing job of manually checking and fixing most of the tables on db1087 (which is the labs master, and is not as easy to reclone as the others). He has gone through all the tables to check and fix them.
Right now all the tables apart from pagelinks and wb_terms are supposed to be fixed (although replication is broken for wb_items_per_site on db1124, the sanitarium host, which is kind of expected with such an amount of rows to check - we are looking at it).

pagelinks and wb_terms are huge tables, and they cannot be diffed at once because our mysql client reaches OOM, so these checks need to be split into smaller diffs; that's why it is taking a bit longer.

Thanks Jaime for this huge work.
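
A minimal sketch of the kind of chunked comparison described above (assumed approach; hosts, table, columns and chunk size are just examples, not the exact commands used):

# Compare a huge table between two hosts in primary-key ranges instead of one pass,
# so the mysql client never has to hold the whole table in memory.
for start in $(seq 0 1000000 100000000); do
  end=$((start + 1000000))
  for host in db1087.eqiad.wmnet db1092.eqiad.wmnet; do
    mysql.py -h$host wikidatawiki -BN -e \
      "SELECT COUNT(*), SUM(CRC32(CONCAT_WS('#', pl_from, pl_namespace, pl_title)))
       FROM pagelinks WHERE pl_from >= $start AND pl_from < $end;" \
      > /tmp/pagelinks.$host.$start
  done
  diff -q /tmp/pagelinks.db1087.eqiad.wmnet.$start /tmp/pagelinks.db1092.eqiad.wmnet.$start \
    || echo "range starting at $start differs"
done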

Update from Jaime at 12:10 UTC
All tables on s8 master fixed except pagelinks and wb_terms

On db1124, on the s8 instance, we have a replication error:

Last_Error: Could not execute Delete_rows_v1 event on table wikidatawiki.pagelinks; Can't find record in 'pagelinks', Error_code: 1032; handler error HA_ERR_KEY_NOT_FOUND; the event's master log db1087-bin.003073, end_log_pos 582738698

Update from 17th at 19:04 from Jaime:
all tables except wb_terms, which is half done, should be equal on the s8 master

On db1124, on the s8 instance, we have a replication error:

Last_Error: Could not execute Delete_rows_v1 event on table wikidatawiki.pagelinks; Can't find record in 'pagelinks', Error_code: 1032; handler error HA_ERR_KEY_NOT_FOUND; the event's master log db1087-bin.003073, end_log_pos 582738698

Where was this error message logged? I have been looking for an error log on db1124 without success.

Probably in SHOW SLAVE STATUS, but that error was fixed already.
This host is sanitarium, so it is not a user-facing error (it does create lag on labs, though).

This host is expected to fail while it is getting fixed.
Right now there is another error there, as replication got broken again.
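
For reference, the error can be read straight from the replica status on the host itself; something like this (db1124 is multi-instance, so the port for the s8 instance is an assumption here):

# Sketch only: show the current replication error fields for the s8 instance on db1124.
mysql.py -hdb1124.eqiad.wmnet:3318 wikidatawiki -e "SHOW SLAVE STATUS\G" | grep -E 'Last_(SQL_)?Err'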

Update from Jaime 18th Oct 16:05: s8 core hosts all finished getting fixed (pending labs)

Update: pagelinks is now being re-imported on labs (this table is around 136G) so it will take a while

After many fixes during the weekend, wb_terms is also fixed on labs. All hosts should be consistent now; we are doing some extra checks to verify everything is good.

@Addshore did you send wikidata users the list you compiled to check bot activity?

The following tables have been re-checked on db1092 (recloned from codfw) vs db1087 (manually fixed):

abuse_filter_log
archive
babel
bv2013_edits
bv2015_edits
bv2017_edits
categorylinks
comment
cu_log
ipblocks
langlinks
module_deps
object_cache
querycache
querycachetwo
recentchanges
redirect
revtag
translate_reviews
translate_groupreviews
translate_groupstats
translate_messageindex
translate_sections
valid_tag
wb_changes
wb_changes_dispatch
wb_id_counters
wb_property_info
wbqc_constraints
wbs_propertypairs

On-going:

logging
revision
slots
text

Found differences on:
abuse_filter_log, which has 66 more rows on db1092 than on db1087 (and on sanitarium and labs) and db1071 (master).

@Addshore did you send wikidata users the list you compiled to check bot activity?

Yes

These tables are fine:

logging
revision
slots
text

Starting to check:

abuse_filter
category 
change_tag 
change_tag_def 
cu_changes 
externallinks 
geo_tags
imagelinks
ip_changes 
iwlinks 
linter 
log_search 
ores_classification
page
page_props 
page_restrictions
pagelinks 
spoofuser 
tag_summary 
templatelinks
user
user_newtalk
user_properties
watchlist
wb_changes_subscription 
wb_items_per_site
wbc_entity_usage 
wb_terms

Mentioned in SAL (#wikimedia-operations) [2018-10-23T06:39:09Z] <marostegui> Stop replication on db1092 and db1087 for checking T206743

jcrespo lowered the priority of this task from High to Medium. Oct 23 2018, 12:56 PM

We believe this to be fully fixed on both the wikireplicas and production, but will not close the task until extra checks confirm so.

More tables to check after the ones checked at: T206743#4685983
abuse_filter_log (T206743#4685983) was fixed by Jaime (I have rechecked again and it is indeed now consistent).

The following tables are clean between db1087 and db1092:

abuse_filter
category
change_tag_def
cu_changes
externallinks
geo_tags
imagelinks
iwlinks
linter
log_search
ores_classification
page
page_props
page_restrictions
pagelinks
spoofuser
tag_summary
templatelinks
user
user_newtalk
user_properties
watchlist
wb_changes_subscription
wb_items_per_site
wbc_entity_usage

The following tables reported differences and have already been fixed by Jaime (and confirmed after a second check by me):

change_tag

Ongoing check for the last table (which takes several hours):

wb_terms

wb_terms is finished and it is all clean.

The following tables are empty so no need to check:

actor
config
filearchive
filejournal
globalblocks
hidden
image
image_comment_temp
interwiki
ipblocks_restrictions
job
l10n_cache
oldimage
revision_actor_temp
searchindex
securepoll_cookie_match
securepoll_options
securepoll_questions
securepoll_strike
securepoll_votes
transcache
transcode
updatelog
uploadstash

Final round of tables that have been checked and are clean:

abuse_filter_action
abuse_filter_history
betafeatures_user_counts
content
content_models
global_block_whitelist
ipblocks
math
mathoid
ores_model
protected_titles
querycache_info
revision_comment_temp
securepoll_elections
securepoll_entity
securepoll_lists
securepoll_msgs
securepoll_voters
securepoll_properties
sites
site_identifiers
site_stats
slot_roles
templatelinks
translate_metadata
user_former_groups
user_groups

At this point, all the tables in s8 have been checked and confirmed clean.

Marostegui reassigned this task from Marostegui to jcrespo.

So the scope of this ticket is all done.
All the tables have been fixed and checked to confirm they are consistent.

This has been complicated work, especially for those hosts that were not easy to reclone (the master and the labs infra). I would like to give a big thank you to @jcrespo, who has done tremendous work getting those fixed row by row.
Thanks also to @Addshore for all the help. Really appreciated.

Next steps are:

The issue with https://www.wikidata.org/wiki/Q2058295, which I described on 14 October, persists.

@Addshore I thought you had communicated that to the wikidata users? Apparently not, or @Pigsonthewing didn't see it. Could you link your messages to him?

I'll leave that to @Lea_Lacroix_WMDE. This is still in the "needs announcement" column for us.
The related pages that I created can be found at https://www.wikidata.org/wiki/User:Addshore/2018/10/DC_Switch_Issue

@Pigsonthewing I hope my comment at the Wikidata Village Pump was helpful. If you think that is ok, I would suggest closing this task and opening a different one to track the merges of old history (this one was to track the recovery from backups).

on db1116 we still have the backup dir:

drwxr-xr-x  8 mysql mysql 8.0K Oct 11 13:21 sqldata.s8_BACKUP_T206743

I guess we should remove it now, as this ticket is resolved (and we still remember the issue)?

If there are no space issues on that host, there is no harm in leaving it a bit longer, I would say.

Mentioned in SAL (#wikimedia-operations) [2018-12-12T08:37:00Z] <marostegui> Remove old backup directory from db1116 - T206743