Large updates for some things are currently handled suboptimally or not at all:
- slow: cache purges on creation/deletion
- slow: cache purges on template change
- not at all: link table updates on template change
For large numbers of affected pages, trying to update page_touched records and send Squid
purges during a save operation can be prohibitively slow, leading to transaction failures
or replication lag due to the serialized nature of MySQL replication.
As the wikis continue to grow, and templates are commonly used for things like
standardizing categorization, we get requests to manually run refreshLinks to update
category membership and such. That kinda sucks, so we should implement a purge queue,
as discussed in past years but never yet gotten to.
The names of pages to purge or re-link can be sucked out of the database and stuffed in a
queue for processing after the save: generally these operations don't need to be
_immediate_, they just need to happen soonish.
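The split described above (record the affected titles cheaply at save time, do the slow purge/re-link work later in batches) can be sketched roughly as below. This is an illustrative Python mock-up, not MediaWiki code: the `job` table layout, the `enqueue`/`run_jobs` names, and the "htmlCacheUpdate" job type are all hypothetical, and SQLite stands in for the real database.

```python
import sqlite3

# Hypothetical queue table; in a real wiki this would live in the
# main database so enqueueing shares the save's transaction.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE job (job_id INTEGER PRIMARY KEY,"
    " job_type TEXT, job_title TEXT)"
)

def enqueue(job_type, titles):
    """At save time: just record the affected pages. Cheap, no
    purging or link refreshing happens here."""
    conn.executemany(
        "INSERT INTO job (job_type, job_title) VALUES (?, ?)",
        [(job_type, t) for t in titles],
    )
    conn.commit()

def run_jobs(batch_size=100):
    """Later ("soonish"): a daemon or background hook drains the
    queue a batch at a time, so no single request pays the cost."""
    done = []
    rows = conn.execute(
        "SELECT job_id, job_type, job_title FROM job"
        " ORDER BY job_id LIMIT ?", (batch_size,)
    ).fetchall()
    for job_id, job_type, title in rows:
        # This is where the real work would go: touching
        # page_touched, sending the Squid purge, or rebuilding
        # the link tables for the page.
        done.append((job_type, title))
        conn.execute("DELETE FROM job WHERE job_id = ?", (job_id,))
    conn.commit()
    return done

# Usage: a template edit queues purges for every page using it.
enqueue("htmlCacheUpdate", ["Foo", "Bar", "Baz"])
processed = run_jobs()
```

The key design point is that `enqueue` is O(rows inserted) and safe to run inside the save, while `run_jobs` can be throttled, batched, or retried independently of any user-facing request.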
For Wikimedia and other dedicated sites, we can have a daemon or other regularly running
process churn through this queue. For third-party sites on a default install it could be
done "in the background" during other page hits or some such, or just left as an