
Convert LQT pages on enwiktionary to Flow
Closed, Resolved, Public

Description

enwiktionary LQT pages have all been moved to /LQT Archive subpages, and all orphaned threads have been re-attached to talk pages (excluding some spam).

We can now run the conversion script which will turn all these pages into Flow boards. For history preservation, the header will be moved to /LQT Archive/LQT Archive 1


Event Timeline


I tried to investigate that error. The conclusion I reached was that somehow the topic isn't being properly saved to the database (and thus it can't be found when importing other parts of the topic), but the way the Flow code handles the database is so abstract that I couldn't make any headway in figuring out why that would happen (and I couldn't reproduce the issue locally).

Looks like this is a database replication issue, where the topic ID is being read from a replica database that it hasn't been inserted into yet.

What's happening

The error is thrown from PostRevisionTopicHistoryIndex::findTopicId:

try {
	$root = $post->getCollection()->getRoot();
} catch ( DataModelException ) {
	// in some cases, we may fail to find root post from the current
	// object (e.g. data has already been removed)
	// try to find it via parent, in that case
	$parentId = $post->getReplyToId();
	if ( $parentId === null ) {
		throw new DataModelException( 'Unable to locate root for post ' .
			$post->getCollectionId() );
	}
	...
}

A DataModelException is thrown in the try block. In the catch block, $parentId is null, which happens when $post is the root. In other words, the catch block is not designed to handle the situation we are encountering; rather, the try should not be throwing at all.

Why is the try throwing? I can mimic this locally by returning an empty array from TreeRepository::findRootPaths, which is what happens when the looked-up ID is not found in the database or in the cache. This could happen if the new topic is not yet in the replica database, which the lookup is hard-coded to use (code here).
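To make the race concrete, here is a toy model (Python, purely illustrative; the names mirror the Flow methods above but are not the real implementation): a write lands on the primary, but the immediately-following lookup is pinned to a replica that hasn't caught up yet.

```python
# Toy model of the replication race: the insert goes to the primary,
# but the findRootPaths-style lookup is hard-coded to read a replica
# that hasn't replicated the row yet, so it returns nothing.
primary: dict = {}
replica: dict = {}

def insert(topic_id: str, path: list) -> None:
    primary[topic_id] = path  # write goes to the primary...
    # ...replication to the replica happens later, asynchronously.

def find_root_paths(topic_id: str) -> list:
    return replica.get(topic_id, [])  # hard-coded replica read

insert("t1", ["root", "t1"])
print(find_root_paths("t1"))  # → [] (row not replicated yet)
replica.update(primary)       # replication catches up
print(find_root_paths("t1"))  # → ['root', 't1']
```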

PostRevisionTopicHistoryIndex gets to TreeRepository via this code path:

#0 /var/www/html/mediawiki/extensions/Flow/includes/Repository/TreeRepository.php(288): Flow\Repository\TreeRepository->findRootPaths(Array)
#1 /var/www/html/mediawiki/extensions/Flow/includes/Repository/TreeRepository.php(322): Flow\Repository\TreeRepository->findRootPath(Object(Flow\Model\UUID))
#2 /var/www/html/mediawiki/extensions/Flow/includes/Collection/PostCollection.php(29): Flow\Repository\TreeRepository->findRoot(Object(Flow\Model\UUID))
#3 /var/www/html/mediawiki/extensions/Flow/includes/Collection/PostCollection.php(60): Flow\Collection\PostCollection->getWorkflowId()
#4 /var/www/html/mediawiki/extensions/Flow/includes/Data/Index/PostRevisionTopicHistoryIndex.php(82): Flow\Collection\PostCollection->getRoot()
#5 /var/www/html/mediawiki/extensions/Flow/includes/Data/Index/PostRevisionTopicHistoryIndex.php(48): Flow\Data\Index\PostRevisionTopicHistoryIndex->findTopicId(Object(Flow\Model\PostRevision))
...

Was the topic actually inserted?

No. @Esanders confirmed there is no new topic on enwiktionary corresponding to yesterday's maintenance script runs.

It looks like this is because the insert is rolled back. When I hard-code TreeRepository::findRootPaths to fail locally, I get an error from the transaction profiler: DB transaction writes or callbacks still pending (Flow\\Data\\Storage\\BasicDbStorage::insert (flow_workflow), Flow\\Data\\Storage\\BasicDbStorage::insert (flow_topic_list), Flow\\Data\\Storage\\RevisionStorage::insert, Flow\\Data\\Storage\\PostRevisionStorage::insertRelated, Flow\\Repository\\TreeRepository::insert).

What should we do?

We should try doing the lookup on the primary database, but only when running the maintenance script, to avoid performance issues due to web requests on the live sites.

Having discussed this with @kostajh, it seems likely that we'll encounter this problem in other parts of Flow too, so the safest thing would be to configure always accessing the primary database when running the maintenance script.

A $forcePrimary flag can be set on Flow\DbFactory. It is used by BoardMover and SubmissionHandler, but I don't think it would be good enough to use it for Converter, because other services would still be initialized without it. We could set forcePrimary on initialisation when running a maintenance script. This seems heavy-handed, but should solve the problem.
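The approach described above can be sketched as follows (a toy in Python for illustration only; the real code is Flow\DbFactory in PHP, and the flag/parameter names here are hypothetical):

```python
# Sketch of the forcePrimary idea: while a maintenance script runs,
# silently upgrade every replica read to the primary, so rows
# inserted moments earlier are always visible (no replication lag).
DB_REPLICA, DB_PRIMARY = "replica", "primary"

class DbFactory:
    def __init__(self, in_maintenance_script: bool = False):
        # Hypothetical flag, set once at initialisation time.
        self.force_primary = in_maintenance_script

    def get_db(self, role: str) -> str:
        # Replica reads are upgraded while the flag is set.
        return DB_PRIMARY if self.force_primary else role

# Web request: replica reads stay on the replica (no extra primary load).
assert DbFactory().get_db(DB_REPLICA) == DB_REPLICA
# Maintenance script: every lookup sees the primary, avoiding lag.
assert DbFactory(in_maintenance_script=True).get_db(DB_REPLICA) == DB_PRIMARY
print("ok")
```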

Yep, that matches the conclusion I had come to, that it probably had to do with database replication.

Change #1190724 had a related patch set uploaded (by Tchanders; author: Tchanders):

[mediawiki/extensions/Flow@master] DbFactory: Use primary DB when running maintenance scripts

https://gerrit.wikimedia.org/r/1190724

Change #1190994 had a related patch set uploaded (by Esanders; author: Tchanders):

[mediawiki/extensions/Flow@wmf/1.45.0-wmf.20] DbFactory: Use primary DB when running maintenance scripts

https://gerrit.wikimedia.org/r/1190994

Change #1190995 had a related patch set uploaded (by Esanders; author: Tchanders):

[mediawiki/extensions/Flow@wmf/1.45.0-wmf.19] DbFactory: Use primary DB when running maintenance scripts

https://gerrit.wikimedia.org/r/1190995

Change #1190724 merged by jenkins-bot:

[mediawiki/extensions/Flow@master] DbFactory: Use primary DB when running maintenance scripts

https://gerrit.wikimedia.org/r/1190724

Change #1190994 merged by jenkins-bot:

[mediawiki/extensions/Flow@wmf/1.45.0-wmf.20] DbFactory: Use primary DB when running maintenance scripts

https://gerrit.wikimedia.org/r/1190994

Change #1190995 merged by jenkins-bot:

[mediawiki/extensions/Flow@wmf/1.45.0-wmf.19] DbFactory: Use primary DB when running maintenance scripts

https://gerrit.wikimedia.org/r/1190995

Mentioned in SAL (#wikimedia-operations) [2025-09-24T20:04:40Z] <esanders@deploy1003> Started scap sync-world: Backport for [[gerrit:1190994|DbFactory: Use primary DB when running maintenance scripts (T405080)]], [[gerrit:1190995|DbFactory: Use primary DB when running maintenance scripts (T405080)]]

Mentioned in SAL (#wikimedia-operations) [2025-09-24T20:10:44Z] <esanders@deploy1003> esanders: Backport for [[gerrit:1190994|DbFactory: Use primary DB when running maintenance scripts (T405080)]], [[gerrit:1190995|DbFactory: Use primary DB when running maintenance scripts (T405080)]] synced to the testservers (see https://wikitech.wikimedia.org/wiki/Mwdebug). Changes can now be verified there.

Mentioned in SAL (#wikimedia-operations) [2025-09-24T20:16:26Z] <esanders@deploy1003> Finished scap sync-world: Backport for [[gerrit:1190994|DbFactory: Use primary DB when running maintenance scripts (T405080)]], [[gerrit:1190995|DbFactory: Use primary DB when running maintenance scripts (T405080)]] (duration: 11m 45s)

The script ran successfully on one talk page after patching.

I proceeded to run the script on the remaining ~38 talk pages, but it threw a different error:

[2025-09-24 20:30:06] Considering for conversion: pages with the 'use-liquid-threads' property
[2025-09-24 20:30:06] Starting full wiki LQT conversion of all LiquidThreads pages
[2025-09-24 20:30:06] Archiving page from User talk:TheDaveRoss/LQT Archive to User talk:TheDaveRoss/LQT Archive/LQT Archive 1
[2025-09-24 20:30:08] Importing to User talk:TheDaveRoss/LQT Archive
[2025-09-24 20:30:08] Importing header
[2025-09-24 20:30:08] Imported 2 revisions for header
[2025-09-24 20:30:08] Exception while importing: User talk:TheDaveRoss/LQT Archive
[2025-09-24 20:30:08] Wikimedia\Rdbms\DBUnexpectedError: No atomic section is open (got Flow\Import\PageImportState) in /srv/mediawiki/php-1.45.0-wmf.20/includes/libs/rdbms/database/TransactionManager.php:360
Stack trace:
#0 /srv/mediawiki/php-1.45.0-wmf.20/includes/libs/rdbms/database/Database.php(2259): Wikimedia\Rdbms\TransactionManager->onCancelAtomicBeforeCriticalSection(Object(Wikimedia\Rdbms\DatabaseMySQL), 'Flow\\Import\\Pag...')
#1 /srv/mediawiki/php-1.45.0-wmf.20/includes/libs/rdbms/database/DBConnRef.php(127): Wikimedia\Rdbms\Database->cancelAtomic('Flow\\Import\\Pag...')
#2 /srv/mediawiki/php-1.45.0-wmf.20/includes/libs/rdbms/database/DBConnRef.php(746): Wikimedia\Rdbms\DBConnRef->__call('cancelAtomic', Array)
#3 /srv/mediawiki/php-1.45.0-wmf.20/extensions/Flow/includes/Import/PageImportState.php(270): Wikimedia\Rdbms\DBConnRef->cancelAtomic('Flow\\Import\\Pag...')
#4 /srv/mediawiki/php-1.45.0-wmf.20/extensions/Flow/includes/Import/TalkpageImportOperation.php(116): Flow\Import\PageImportState->rollback()
#5 /srv/mediawiki/php-1.45.0-wmf.20/extensions/Flow/includes/Import/Importer.php(114): Flow\Import\TalkpageImportOperation->import(Object(Flow\Import\PageImportState))
#6 /srv/mediawiki/php-1.45.0-wmf.20/extensions/Flow/includes/Import/Converter.php(215): Flow\Import\Importer->import(Object(Flow\Import\LiquidThreadsApi\ImportSource), Object(MediaWiki\Title\Title), Object(MediaWiki\User\User), Object(Flow\Import\SourceStore\FileImportSourceStore))
#7 /srv/mediawiki/php-1.45.0-wmf.20/extensions/Flow/includes/Import/Converter.php(157): Flow\Import\Converter->doConversion(Object(MediaWiki\Title\Title), NULL)
#8 /srv/mediawiki/php-1.45.0-wmf.20/extensions/Flow/includes/Import/Converter.php(113): Flow\Import\Converter->convert(Object(MediaWiki\Title\Title), false, false)
#9 /srv/mediawiki/php-1.45.0-wmf.20/extensions/Flow/maintenance/convertAllLqtPages.php(111): Flow\Import\Converter->convertAll(Object(AppendIterator), false, false)
#10 /srv/mediawiki/php-1.45.0-wmf.20/maintenance/includes/MaintenanceRunner.php(696): Flow\Maintenance\ConvertAllLqtPages->execute()
#11 /srv/mediawiki/php-1.45.0-wmf.20/maintenance/run.php(53): MediaWiki\Maintenance\MaintenanceRunner->run()
#12 /srv/mediawiki/multiversion/MWScript.php(221): require_once('/srv/mediawiki/...')
#13 {main}
The code that threw (from TalkpageImportOperation::import) is:

	try {
		$state->begin();
		$this->importHeader( $state, $header );
		$state->commit();
		$state->postprocessor->afterHeaderImported( $state, $header );
		$imported++;
	} catch ( ImportSourceStoreException $e ) {
		// errors from the source store are more serious and should
		// not just be logged and swallowed.  This may indicate that
		// we are not properly recording progress.
		$state->rollback();
		throw $e;
	}

Well, that didn't help anything: it looks like the original error was swallowed by the failed rollback. Digging into $state->commit():

	public function commit() {
		$this->dbw->endAtomic( __CLASS__ );
		$this->sourceStore->save();
		$this->flushDeferredQueue();
	}

If $this->sourceStore->save() failed and threw an ImportSourceStoreException, that would produce that stack trace. Digging one step down then:

	public function save() {
		$bytesWritten = file_put_contents( $this->filename, json_encode( $this->data ) );
		if ( $bytesWritten === false ) {
			throw new Exception( 'Could not write out source store to ' . $this->filename );
		}
	}

(This Exception is the same class referred to as ImportSourceStoreException two frames above.)

So file_put_contents must have returned false. Make sure the file you specified in --logfile is valid and writable. Or just try again - this could well be a transient error.
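A quick pre-flight check along those lines (Python for illustration; the path below is a made-up example, use whatever you pass to --logfile):

```python
# Check that a --logfile path can actually be written before
# re-running the conversion script (the example path is illustrative).
import os
import tempfile

def writable(path: str) -> bool:
    # An existing file must itself be writable; a new file needs a
    # writable parent directory.
    if os.path.exists(path):
        return os.access(path, os.W_OK)
    parent = os.path.dirname(os.path.abspath(path)) or "."
    return os.path.isdir(parent) and os.access(parent, os.W_OK)

print(writable(os.path.join(tempfile.gettempdir(), "lqt-src.json")))  # tmp dir: True
print(writable("/nonexistent-dir/lqt-src.json"))                      # missing parent: False
```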

Thanks @Pppery - we did use a different --logfile param on the single script vs all-pages script, so this looks likely. We'll try again with the same log file as the single script...

We started fixing the pages one-by-one using convertLqtPageOnLocalWiki, thinking we'd do that quickly then try convertAllLqtPages on another wiki (probably huwiki, once they're ready). However, the script has been running slowly.

It would be helpful to know:

  • Is Flow doing some slow query somewhere that we can speed up,
  • Or is the bottleneck in the parsing/editing (in which case it's not trivial to fix)?

Local profiling shows 50% time spent in WikiPage::doUserEditContent, but of course my local tables are so small that an inefficient query wouldn't cause a bottleneck relative to making an edit.

Tchanders updated Other Assignee, added: Tchanders.


Looks like we won't need to work this out, since there aren't a huge number of threads anyway: T350164#11219358

The standard thing to cross-check after doing an LQT convert is https://en.wiktionary.org/wiki/Special:PrefixIndex?prefix=Thread%3A&namespace=0&hideredirects=1; if LQT has been fully converted then all pages in the Thread namespace should be redirects. That's not the case right now.

TLDR: In addition to things you already know, https://en.wiktionary.org/wiki/User_talk:Afc0703 should get the Commander Keane treatment, and the rest is fine noise. I would be interested in knowing why the two failed threads failed, though, and in having them retried if the error is transient.

Before we ran the import we didn't run FlowCreateTemplates.php, which means there are a bunch of red-linked templates in the converted content. I have since run FlowCreateTemplates.php on enwikisource.

As Flow stores text as HTML, not wikitext, there is no simple "purge" to fix this; instead there is FlowReserializeRevisionContent.php, which converts the HTML to wikitext and then back again. I ran this with --dry-run and the output looked good. I then ran it for real against enwiktionary. Some time into the script run I noticed posts not appearing, so I aborted.

Example: https://en.wiktionary.org/w/index.php?title=Topic:S37iw3ek29l193jv&topic_showPostId=s37ore3wkwcg1jvq#flow-post-s37ore3wkwcg1jvq

We have been investigating this problem this morning.

Initially I thought that the content had been blanked by the conversion script, but after working out how to read the content from External Storage, we can see:
Fetch the hex ID from the alphanumeric ID:

> echo strtoupper((\Flow\Model\UUID::newFromJsonArray(['alnum'=>'s37ore3wkwcg1jvq']))->getHex());
0522FA25ED54109145DAE6
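(For reference, this step can be reproduced outside PHP: Flow's alphanumeric UUIDs are the base-36 rendering of the same value, and the hex form is that value in base 16, zero-padded to 22 hex digits. A Python one-liner, shown here only as an illustrative cross-check:)

```python
# Flow alphanumeric UUIDs are base-36; the hex form is the same
# 88-bit value in base 16, zero-padded to 22 hex digits.
def alnum_to_hex(alnum: str) -> str:
    return format(int(alnum, 36), "022X")

print(alnum_to_hex("s37ore3wkwcg1jvq"))  # → 0522FA25ED54109145DAE6
```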

Fetch the rev_content pointer:

select rev_content from flow_revision where rev_user_wiki='enwiktionary' and rev_id=unhex('0522FA25ED54109145DAE6');
+------------------------+
| rev_content            |
+------------------------+
| DB://cluster30/4043503 |
+------------------------+

Dump content from ES pointer:

echo "es:DB://cluster30/4043503?flags=utf-8,gzip" | mwscript fetchText.php --wiki=enwiktionary
es:DB://cluster30/4043503?flags=utf-8,gzip
783
<body lang="en" class="mw-content-ltr sitedir-ltr ltr mw-body-content parsoid-body mediawiki mw-parser-output" dir="ltr" data-mw-parsoid-version="0.22.0.0-alpha24" data-mw-html-version="2.8.0" id="mwAA"><section data-mw-section-id="0" id="mwAQ"><p id="mwAg">"Data" in a computer context is not the same thing as "data" in some other contexts. Just as it's quite proper to say <a rel="mw:WikiLink" href="./indexes" title="indexes" id="mwAw">indexes</a> when you're referring to data structures, you shouldn't refer to computer data as plural. After all, what is a computer datum? is it a bit? A byte? The contents of one field in one record of a database? Has anyone since the early days of computing even <i id="mwBA">referred to</i> a computer datum in English?</p></section></body>

As we can see, the original comment (https://en.wiktionary.org/w/index.php?title=Thread:User_talk:PalkiaX50/Buffer/reply&oldid=29086118) is still there just fine; Flow is simply not rendering it.

My guess is that some cache needed to be purged but wasn't.

Possibly. What's also interesting is that in some topics, certain posts appear to have been rebuilt, as evidenced by the fact that their templates are now rendering correctly, while others are blanked:

https://en.wiktionary.org/w/index.php?title=Topic:Pg584g1ur9fogzqh&topic_showPostId=pg5wg4fyjx7kboq3#flow-post-pg5wg4fyjx7kboq3

Note that the template https://en.wiktionary.org/wiki/Template:LQT_post_imported_with_different_signature_user appears in this post. I only imported that template this morning (Monday), which means this post was successfully reserialized when I ran the script. Other posts in the topic are blank.

I've painstakingly stepped through Flow's code to try to understand how it works, which was much harder than I expected (the DB fetches are done through several layers of indirection, which probably seems magical until you try to understand it). And I can confirm there is indeed a cache around Flow's lookups of revision content, which seems to have an expiration of 3 days.

And FlowReserializeRevisionContent uses reflection to circumvent restrictions in lower-level code (yes, really; this sort of thing isn't uncommon in Flow code, and importing does it too!). It was written years after the rest of the Flow codebase, may never have been run in production before this (it was written for T209120, which was never done), and its C+2-er gave the ominous warning "I have no way to test this since I don't have an old Flow setup lying around but the code LGTM. I expect you to dry-run it in production :)". So it wouldn't surprise me if some cache purge was missed somewhere.

I would suggest waiting for that cache to expire in 3 days and seeing if the content comes back.

@Pppery Thanks for this investigation. Given the content is still recoverable in some form, let's do that and see if the caches expire...

Esanders updated the task description. (Show Details)

Okay, it's been 3 days since the script ran, still no dice.

So it's probably not the cache. I fiddled around a bit with Flow on a local vagrant instance. I managed to get xdebug working locally, which I hadn't had before (and it will surely be useful in the future!), and also looked through the code to see if it gave me any other ideas about what might be going wrong, but I couldn't reproduce the problem and didn't learn anything useful.

Change #1194180 had a related patch set uploaded (by Esanders; author: Esanders):

[operations/mediawiki-config@master] Invalidate Flow cache on enwiktionary

https://gerrit.wikimedia.org/r/1194180

Change #1194180 merged by jenkins-bot:

[operations/mediawiki-config@master] Invalidate Flow cache on enwiktionary

https://gerrit.wikimedia.org/r/1194180

Mentioned in SAL (#wikimedia-operations) [2025-10-07T13:03:49Z] <esanders@deploy2002> Started scap sync-world: Backport for [[gerrit:1194180|Invalidate Flow cache on enwiktionary (T405080)]]

Mentioned in SAL (#wikimedia-operations) [2025-10-07T13:08:16Z] <esanders@deploy2002> esanders: Backport for [[gerrit:1194180|Invalidate Flow cache on enwiktionary (T405080)]] synced to the testservers (see https://wikitech.wikimedia.org/wiki/Mwdebug). Changes can now be verified there.

Mentioned in SAL (#wikimedia-operations) [2025-10-07T13:13:56Z] <esanders@deploy2002> Finished scap sync-world: Backport for [[gerrit:1194180|Invalidate Flow cache on enwiktionary (T405080)]] (duration: 10m 07s)

I think I have it solved: the rev_flags value on the updated rows is missing the external flag, which is required for the External Storage lookup to happen. Here's a sample of random rows:

+------------------------+------------------------------------------+
| rev_content            | rev_flags                                |
+------------------------+------------------------------------------+
| DB://cluster31/4045076 | utf-8,gzip,html                          |
| DB://cluster31/4045072 | utf-8,gzip,html                          |
| DB://cluster31/4062748 | utf-8,gzip,html,external                 |
| DB://cluster30/4044111 | utf-8,gzip,html                          |
| DB://cluster30/4044110 | utf-8,gzip,html                          |
| DB://cluster31/4062749 | utf-8,gzip,html,external                 |
+------------------------+------------------------------------------+

I've verified that all the rev_content in enwiktionary is stored externally (it starts with DB://), so we should be good to just run a query appending ,external to anything that doesn't have it:

UPDATE flow_revision
SET rev_flags = CONCAT(rev_flags, ",external")
WHERE rev_user_wiki="enwiktionary" AND rev_flags NOT LIKE "%external%"
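For illustration, a simplified model (Python, not Flow's actual loader) of why the missing flag blanks posts: the flags decide whether rev_content is dereferenced as an External Storage pointer or treated as the content itself.

```python
# Simplified model of the blanking: the DB:// pointer is only
# dereferenced when rev_flags contains "external"; otherwise the
# pointer string itself is treated as the stored content, and the
# post renders as nothing useful.
def load_content(rev_content: str, rev_flags: str, external_store: dict) -> str:
    if "external" in rev_flags.split(","):
        return external_store[rev_content]  # follow the DB:// pointer
    return rev_content  # (mis)treated as literal content

store = {"DB://cluster30/4043503": "<body>...original comment...</body>"}
# With the flag, the reader gets the original comment back:
assert load_content("DB://cluster30/4043503", "utf-8,gzip,html,external", store) \
    == "<body>...original comment...</body>"
# Without it, the reader gets the pointer string, not the comment:
assert load_content("DB://cluster30/4043503", "utf-8,gzip,html", store) \
    == "DB://cluster30/4043503"
print("ok")
```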

Odd. I still can't see how that could happen from looking at the code. But thanks for looking at that (and why wasn't it consistent?)

This bug is probably not worth fixing; just don't run flowReserializeRevisionContent again, and make sure to run FlowCreateTemplates first on all remaining wikis so you don't have to.

Ran this:

cumin2024@db2215.codfw.wmnet[flowdb]> UPDATE flow_revision SET rev_flags = CONCAT(rev_flags, ",external") WHERE rev_user_wiki="enwiktionary" AND rev_flags NOT LIKE "%external%" limit 5;
Query OK, 5 rows affected (0.035 sec)
Rows matched: 5  Changed: 5  Warnings: 0

cumin2024@db2215.codfw.wmnet[flowdb]> UPDATE flow_revision SET rev_flags = CONCAT(rev_flags, ",external") WHERE rev_user_wiki="enwiktionary" AND rev_flags NOT LIKE "%external%";
Query OK, 5855 rows affected (1.347 sec)
Rows matched: 5855  Changed: 5855  Warnings: 0

I kept a file just in case we need to revert: P83719

@Esanders any chance of a final push to tidy up the few remaining loose ends here, before it fades from your mind?

Change #1200743 had a related patch set uploaded (by Pppery; author: Pppery):

[operations/mediawiki-config@master] Remove extended autoconfirmed time for Tor on enwiki

https://gerrit.wikimedia.org/r/1200743

Sorry I pasted the wrong ticket number there

Change #1202706 had a related patch set uploaded (by Tchanders; author: Esanders):

[mediawiki/extensions/Flow@master] LQT Import: Fix quadratic time explosion in finding next offset

https://gerrit.wikimedia.org/r/1202706

Change #1202709 had a related patch set uploaded (by Tchanders; author: Esanders):

[mediawiki/extensions/Flow@wmf/1.46.0-wmf.1] LQT Import: Fix quadratic time explosion in finding next offset

https://gerrit.wikimedia.org/r/1202709

Change #1202709 merged by jenkins-bot:

[mediawiki/extensions/Flow@wmf/1.46.0-wmf.1] LQT Import: Fix quadratic time explosion in finding next offset

https://gerrit.wikimedia.org/r/1202709

Mentioned in SAL (#wikimedia-operations) [2025-11-06T14:15:59Z] <lucaswerkmeister-wmde@deploy2002> Started scap sync-world: Backport for [[gerrit:1202717|Update types for WatchArticleHook/UnwatchArticleHook]], [[gerrit:1202709|LQT Import: Fix quadratic time explosion in finding next offset (T405080)]]

Mentioned in SAL (#wikimedia-operations) [2025-11-06T14:18:45Z] <lucaswerkmeister-wmde@deploy2002> lucaswerkmeister-wmde, tchanders: Backport for [[gerrit:1202717|Update types for WatchArticleHook/UnwatchArticleHook]], [[gerrit:1202709|LQT Import: Fix quadratic time explosion in finding next offset (T405080)]] synced to the testservers (see https://wikitech.wikimedia.org/wiki/Mwdebug). Changes can now be verified there.

Mentioned in SAL (#wikimedia-operations) [2025-11-06T14:23:25Z] <lucaswerkmeister-wmde@deploy2002> Finished scap sync-world: Backport for [[gerrit:1202717|Update types for WatchArticleHook/UnwatchArticleHook]], [[gerrit:1202709|LQT Import: Fix quadratic time explosion in finding next offset (T405080)]] (duration: 07m 26s)

Change #1202706 merged by jenkins-bot:

[mediawiki/extensions/Flow@master] LQT Import: Fix quadratic time explosion in finding next offset

https://gerrit.wikimedia.org/r/1202706

Change #1202985 had a related patch set uploaded (by Esanders; author: Esanders):

[operations/mediawiki-config@master] Freeze LiquidThreads on enwiktionary

https://gerrit.wikimedia.org/r/1202985

Change #1202985 merged by jenkins-bot:

[operations/mediawiki-config@master] Freeze LiquidThreads on enwiktionary

https://gerrit.wikimedia.org/r/1202985

Mentioned in SAL (#wikimedia-operations) [2025-11-10T14:04:01Z] <esanders@deploy2002> Started scap sync-world: Backport for [[gerrit:1202985|Freeze LiquidThreads on enwiktionary (T405080)]]

Mentioned in SAL (#wikimedia-operations) [2025-11-10T14:08:08Z] <esanders@deploy2002> esanders: Backport for [[gerrit:1202985|Freeze LiquidThreads on enwiktionary (T405080)]] synced to the testservers (see https://wikitech.wikimedia.org/wiki/Mwdebug). Changes can now be verified there.

Mentioned in SAL (#wikimedia-operations) [2025-11-10T14:17:50Z] <esanders@deploy2002> Finished scap sync-world: Backport for [[gerrit:1202985|Freeze LiquidThreads on enwiktionary (T405080)]] (duration: 13m 48s)

Change #1203831 had a related patch set uploaded (by Reedy; author: Esanders):

[mediawiki/extensions/Flow@REL1_45] LQT Import: Fix quadratic time explosion in finding next offset

https://gerrit.wikimedia.org/r/1203831

Change #1203832 had a related patch set uploaded (by Reedy; author: Esanders):

[mediawiki/extensions/Flow@REL1_44] LQT Import: Fix quadratic time explosion in finding next offset

https://gerrit.wikimedia.org/r/1203832

Change #1203833 had a related patch set uploaded (by Reedy; author: Esanders):

[mediawiki/extensions/Flow@REL1_43] LQT Import: Fix quadratic time explosion in finding next offset

https://gerrit.wikimedia.org/r/1203833

Change #1203834 had a related patch set uploaded (by Reedy; author: Esanders):

[mediawiki/extensions/Flow@REL1_39] LQT Import: Fix quadratic time explosion in finding next offset

https://gerrit.wikimedia.org/r/1203834

Change #1203834 merged by jenkins-bot:

[mediawiki/extensions/Flow@REL1_39] LQT Import: Fix quadratic time explosion in finding next offset

https://gerrit.wikimedia.org/r/1203834

Change #1203831 merged by jenkins-bot:

[mediawiki/extensions/Flow@REL1_45] LQT Import: Fix quadratic time explosion in finding next offset

https://gerrit.wikimedia.org/r/1203831

Change #1203833 merged by jenkins-bot:

[mediawiki/extensions/Flow@REL1_43] LQT Import: Fix quadratic time explosion in finding next offset

https://gerrit.wikimedia.org/r/1203833

Change #1203832 merged by jenkins-bot:

[mediawiki/extensions/Flow@REL1_44] LQT Import: Fix quadratic time explosion in finding next offset

https://gerrit.wikimedia.org/r/1203832

This task is, as best as I can tell, now complete thanks to some additional work done by @Esanders since I last commented here.