Some wikis have archive tables with duplicate ar_rev_id values. This causes duplicate key errors when running populateContentTables.php.
populateArchiveRevId.php allocated rev_id values by inserting a dummy row into the revision table and then deleting it in the same transaction. It's evident from the MariaDB documentation that this is unsafe in MariaDB 10.1. In fact, even ordinary article deletion is unsafe in MariaDB 10.1: if the latest revision is deleted and the master DB server is then restarted, the autoincrement value is reset to MAX(rev_id)+1, so new revision creations will reuse the ID supposedly reserved by ar_rev_id.
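The hazard can be illustrated with SQLite, whose INTEGER PRIMARY KEY columns (without the AUTOINCREMENT keyword) always assign MAX(rowid)+1 on insert, which mimics the state MariaDB 10.1 reaches after a restart. The table below is a simplified stand-in, not the real revision schema:

```python
import sqlite3

# In-memory stand-in for the revision table. SQLite's INTEGER PRIMARY KEY
# (without AUTOINCREMENT) always assigns MAX(rev_id)+1 on insert, mimicking
# MariaDB 10.1 after a server restart resets the autoincrement counter.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE revision (rev_id INTEGER PRIMARY KEY, rev_comment TEXT)")

# Three revisions exist; revision 3 is then deleted, with its ID supposedly
# reserved as an ar_rev_id value in the archive table.
for c in ("a", "b", "c"):
    db.execute("INSERT INTO revision (rev_comment) VALUES (?)", (c,))
db.execute("DELETE FROM revision WHERE rev_id = 3")

# A new edit after the "restart" reuses the deleted ID.
cur = db.execute("INSERT INTO revision (rev_comment) VALUES ('d')")
print(cur.lastrowid)  # 3 -- collides with the reserved ar_rev_id
```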
According to the MySQL documentation, the autoincrement value will be reset to MAX(...)+1 if a row is inserted or updated with a non-null value specified for the autoincrement column. It is unclear whether this is also true for MariaDB.
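SQLite again reproduces the MySQL-documented behaviour (whether MariaDB matches it is exactly the open question); the names here are illustrative:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE revision (rev_id INTEGER PRIMARY KEY, rev_comment TEXT)")
db.execute("INSERT INTO revision (rev_comment) VALUES ('a')")  # gets rev_id 1

# Inserting with an explicit rev_id pushes the next assigned ID to MAX(...)+1,
# analogous to the reset rule described in the MySQL documentation.
db.execute("INSERT INTO revision (rev_id, rev_comment) VALUES (100, 'b')")
cur = db.execute("INSERT INTO revision (rev_comment) VALUES ('c')")
print(cur.lastrowid)  # 101
```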
The affected wikis are: aawikibooks, cawiki, gotwikibooks, kswikiquote, lvwikibooks, nostalgiawiki, wawikibooks and wikimania2005wiki.
Taking aawikibooks as an example: the deletion log shows that 1749 pages in namespace 8 were deleted in 2007, another 28 pages were also deleted in 2007, one was deleted in 2008, and in 2015 5 pages (9 revisions) were deleted. This is a closed, read-only wiki which has not been edited since 2015. ar_rev_id was apparently introduced in 2005, so none of these archive rows should have had ar_rev_id=NULL, and so none should have been affected by populateContentTables.php. But I can't see how the duplicates could have been caused by the original deletion in October 2007 either, since there are conflicting ar_rev_id values for revisions that would have been in the revision table at the same time.
A list of affected archive rows in aawikibooks:
Note that ar_id was only introduced in 2012; it's not exactly monotonic with deletion time. Some deletion times:
firstname.lastname@example.org(aawikibooks)> select ar_id,log_timestamp from logging,archive
    where log_namespace=ar_namespace and log_title=ar_title
    and ar_id in (29,985,987,1018,1849,92,3204,6569) order by log_timestamp;
+-------+----------------+
| ar_id | log_timestamp  |
+-------+----------------+
|    29 | 20070108212726 |
|    29 | 20070108212726 |
|    92 | 20070108212729 |
|    92 | 20070108212729 |
|   985 | 20070108212804 |
|   985 | 20070108212804 |
|   987 | 20070108212804 |
|   987 | 20070108212804 |
|  1018 | 20070108212805 |
|  1018 | 20070108212805 |
|  1849 | 20070108212826 |
|  1849 | 20070108212826 |
|  6569 | 20070918192042 |
|  6569 | 20070918192042 |
|  3204 | 20070918192049 |
|  3204 | 20070918192049 |
|  6569 | 20080328185334 |
+-------+----------------+
In summary, I don't know what happened, but it's obviously broken.
The MCR write-both mode, with its unique index on slot_revision_id, should at least prevent conflicting rows from being inserted, although this potentially comes at the cost of throwing an exception on every edit until someone manually advances the autoincrement value for rev_id beyond MAX(ar_rev_id).
That is to say, if the most recent revision is deleted, and then a master switch is done, all edits would fail until manual action is taken.
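A sketch of that failure mode, again using SQLite and a simplified slots table (the real MCR primary key is (slot_revision_id, slot_role_id)):

```python
import sqlite3

db = sqlite3.connect(":memory:")
# Simplified stand-in for the MCR slots table.
db.execute("""CREATE TABLE slots (
    slot_revision_id INTEGER,
    slot_role_id INTEGER,
    slot_content_id INTEGER,
    PRIMARY KEY (slot_revision_id, slot_role_id))""")

# Slot row belonging to a deleted revision whose ar_rev_id is 3.
db.execute("INSERT INTO slots VALUES (3, 1, 10)")

# After a master switch, a new edit is also assigned rev_id 3: the unique
# index rejects the conflicting row, so the edit fails with an exception
# rather than silently corrupting the table.
try:
    db.execute("INSERT INTO slots VALUES (3, 1, 11)")
except sqlite3.IntegrityError as e:
    print("edit fails:", e)
```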
In the long term, we will have to stop using ar_rev_id, since evidently we made incorrect assumptions about how DBMSs work. The whole concept of ar_rev_id appears to be invalid.
A quick hack would be to permanently insert a row into the revision table of the affected wikis, with a rev_id specified so as to make room for the extra ar_rev_id values. This wouldn't fix the master switch issue.
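The hack would look something like this (SQLite stand-in again; real column sets and the exact placeholder row contents are glossed over):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE revision (rev_id INTEGER PRIMARY KEY, rev_comment TEXT)")
db.execute("CREATE TABLE archive (ar_id INTEGER PRIMARY KEY, ar_rev_id INTEGER)")
db.execute("INSERT INTO revision (rev_comment) VALUES ('current')")  # rev_id 1
db.executemany("INSERT INTO archive (ar_rev_id) VALUES (?)", [(5,), (6,), (7,)])

# The hack: permanently insert a placeholder row at MAX(ar_rev_id), so every
# future autoincrement value lands above all reserved ar_rev_id values.
(max_ar,) = db.execute("SELECT MAX(ar_rev_id) FROM archive").fetchone()
db.execute("INSERT INTO revision (rev_id, rev_comment) VALUES (?, 'placeholder')",
           (max_ar,))

cur = db.execute("INSERT INTO revision (rev_comment) VALUES ('new edit')")
print(cur.lastrowid)  # 8 -- clear of the reserved range
```

As noted, this only papers over the existing collisions: a later deletion of the newest revision followed by a master switch can still reset the counter.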
When inserting a row into revision, we could do SELECT MAX(ar_rev_id) FROM archive. If the rev_id assigned by the insert is less than MAX(ar_rev_id), we could then update the rev_id in the newly-inserted row to some larger value.
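A minimal sketch of that check-and-bump idea, under the same SQLite stand-in assumptions (the helper name insert_revision is hypothetical, and the comparison uses <= since an equal ID also collides):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE revision (rev_id INTEGER PRIMARY KEY, rev_comment TEXT)")
db.execute("CREATE TABLE archive (ar_id INTEGER PRIMARY KEY, ar_rev_id INTEGER)")
db.executemany("INSERT INTO archive (ar_rev_id) VALUES (?)", [(40,), (41,)])

def insert_revision(db, comment):
    """Insert a revision; if the assigned rev_id falls inside the range
    reserved by ar_rev_id, move the new row above MAX(ar_rev_id)."""
    cur = db.execute("INSERT INTO revision (rev_comment) VALUES (?)", (comment,))
    rev_id = cur.lastrowid
    (max_ar,) = db.execute("SELECT MAX(ar_rev_id) FROM archive").fetchone()
    if max_ar is not None and rev_id <= max_ar:
        rev_id = max_ar + 1
        db.execute("UPDATE revision SET rev_id = ? WHERE rev_id = ?",
                   (rev_id, cur.lastrowid))
    return rev_id

print(insert_revision(db, "first edit"))  # 42, bumped past MAX(ar_rev_id)=41
```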