Content purges are unreliable
Open, Stalled · High · Public

Description

This meta-task serves as a pointer/blocker so that we don't keep re-explaining the same basic problems across many tickets. These are the elements of the basic underlying problem:

  1. When content is purged via various MediaWiki mechanisms, MediaWiki generates HTCP (multicast UDP) traffic to our cache nodes to effect the purge.
  2. UDP is unreliable, especially at high rates, multicast across broad networks, and contending with other beefy traffic for network queues and buffers on the cache boxes. Therefore, purges are unreliable (a minimal sketch of the fire-and-forget transport follows this list). Historically, this hasn't been a huge issue: we were at a point of stability for quite a long time, and user-noticeable missed purges were rare.
  3. Separately from this, we have cache-layer race conditions. In brief: requests often pass through multiple layers of Varnish cache in our infrastructure, and the multicast HTCP purges have no awareness of this at all. It's therefore easy for a race to occur where an upper-layer cache gets purged of the item, immediately gets a new request for it, and re-fetches the same outdated content from a deeper-layer cache just before that deeper cache processes the same purge request. The item is now purged from the lower-layer cache, but the old content is still sitting in the upper-layer cache post-purge. Again, historically this wasn't a huge problem in practice; the race was apparently rare in the cases people paid much attention to.
  4. Separately from all of the above, we have another purging-reliability problem with content variants. Often a single piece of unique content is cached under several distinct URLs (think content translation, image resizing, mobile vs. desktop rendering, the history page of an updated article, etc.). Historically, we've had various issues with completely failing to purge "less-important" content variants.
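To make the fire-and-forget nature of points (1) and (2) concrete, here's a minimal Python sketch of a multicast UDP purge sender. The multicast group and the plain-text payload are placeholders for illustration; the real purger encodes each purge as an HTCP CLR packet (RFC 2756) and sends it to our actual production multicast group.

```python
import socket

PURGE_MCAST_GROUP = "239.0.0.1"  # placeholder group address, not our production one
PURGE_MCAST_PORT = 4827          # standard HTCP port

def send_purge(url: str) -> None:
    """Fire-and-forget purge notification over multicast UDP.

    The real purger wraps the URL in an HTCP CLR packet rather than sending
    plain text; only the transport semantics matter for this sketch.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 8)
    try:
        # No acknowledgement and no retransmission: if this datagram is dropped
        # anywhere along the path, the cached object simply never gets purged.
        sock.sendto(url.encode("utf-8"), (PURGE_MCAST_GROUP, PURGE_MCAST_PORT))
    finally:
        sock.close()

send_purge("https://upload.wikimedia.org/wikipedia/commons/x/xx/Example.jpg")
```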

Recently (let's say going back to late 2015), all of these problems have gotten worse and more noticeable:

  • T124418 outlines how our rate of purge requests has multiplied by more than an order of magnitude in this recent timeframe. There are several distinct days on which the level rose (permanently), and we can only guess at the various causes:
    • We know some of the causes were code changes that attempted to fix the variants problem (4) above by issuing purges for many more distinct URLs per unique content source than we did historically.
    • We know some of the causes were code changes trying to fix problems (2) and (3) by sending delayed repeats of every purge request shortly after the original, to paper over races and loss, which further multiplies the total rate.
    • We suspect that when most purging was centralized through the JobQueue somewhere in this timeframe, this also multiplied the purge rate, due to JobQueue bugs needlessly repeating purges that had already completed.
    • Some wikis have even added JavaScript in various places to execute automatic purge-on-view as a recourse, further exacerbating the problem in an incredibly frustrating way.
  • Because of the massive increase in raw purge rate at the caches, we're almost certainly in worse shape than we were before. The various attempts to 'fix' the problems have overwhelmed us with far more purge traffic than we've ever had, which results in more loss to network queues and buffers at various layers. We now get far more frequent reports of failed purging than we did historically. The graph attached to this task gives a decent view of the purge traffic increase, and a back-of-the-envelope sense of the multiplication is sketched just below this list.
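As a rough illustration of how these well-intentioned fixes compound (all numbers below are made up for illustration, not measured values):

```python
# Illustrative, made-up numbers -- not measured production values.
base_changes_per_sec = 50        # hypothetical rate of content changes needing purges
urls_per_change = 10             # variant URLs purged per change (desktop, mobile, history, thumbs, ...)
rebound_repeats = 2              # original purge + delayed repeat(s)
jobqueue_duplication = 1.5       # hypothetical factor from needlessly re-run purge jobs

purges_per_sec = (base_changes_per_sec
                  * urls_per_change
                  * rebound_repeats
                  * jobqueue_duplication)

print(f"{purges_per_sec:.0f} purges/sec, "
      f"a {purges_per_sec / base_changes_per_sec:.0f}x amplification over the base rate")
# -> 1500 purges/sec, a 30x amplification over the base rate
```

Even modest per-cause multipliers stack into the order-of-magnitude-plus increase that T124418 describes.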

We've basically given up on trying to backtrack through whatever went wrong over the past several months, since the T124418 investigation basically went nowhere. However, we already have longer-term solutions in the works that address various aspects of the underlying issues, and will hopefully obviate this whole mess:

  • A key component is T122881, where (after the upgrade to Varnish 4, which is still ongoing) we'll get the XKey vmod going to provide a realistic, scalable solution to the content-variants problem (4).
  • Another key component is the EventBus work in T102476, where we hope to centralize purge requests and fan them out to the caches more reliably without using multicast. We'll probably also solve the layer races within EventBus by having different subscription topics for different layers and staggering through them (a rough sketch follows this list), but that's a relatively minor detail for this ticket.
  • We're also looking in T124954 at reducing our maximum cache TTLs pretty dramatically, which would make any minor purge loss far less consequential than it is with today's long TTLs, but that has stalled a bit while we work through the Varnish 4 -> XKey backlog from the first point.
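To make the staggered, per-layer idea from the EventBus bullet a bit more concrete, here's a rough Python sketch of purging from the deepest cache layer outward. The hostnames, the plain-HTTP PURGE convention, and the one-second stagger are assumptions for illustration; the real design would have each layer consume its own EventBus topic rather than a script looping over host lists.

```python
import time
import urllib.parse
import urllib.request

# Hypothetical cache hosts, listed deepest layer first. None of these names,
# nor the use of an HTTP "PURGE" method, reflect our real topology/config.
CACHE_LAYERS = [
    ["cache-be1.example.internal", "cache-be2.example.internal"],  # backend layer
    ["cache-fe1.example.internal", "cache-fe2.example.internal"],  # frontend layer
]
STAGGER_SECONDS = 1.0  # illustrative settle time between layers

def purge_everywhere(url: str) -> None:
    """Purge one URL from every cache daemon, deepest layer first.

    Staggering outward closes the refill race: once a frontend is purged,
    anything it re-fetches from a deeper layer has already been purged there.
    """
    parts = urllib.parse.urlsplit(url)
    for layer in CACHE_LAYERS:
        for cache_host in layer:
            req = urllib.request.Request(
                f"http://{cache_host}{parts.path}",
                method="PURGE",
                headers={"Host": parts.netloc},
            )
            try:
                urllib.request.urlopen(req, timeout=2)
            except OSError:
                pass  # a real implementation would retry from the durable event log
        time.sleep(STAGGER_SECONDS)

purge_everywhere("https://upload.wikimedia.org/wikipedia/commons/x/xx/Example.jpg")
```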

The amount of remaining work to get from where we are today to a better solution is non-trivial. It will probably be months before we've significantly reduced or eliminated purging problems, not weeks or days. In the meantime, we don't have a whole lot of awesome ways to cope with this.

If easy administrative tools that simply re-issue purges (e.g. ?action=purge) do not paper over the problem, our only other recourse is having operators execute manual Varnish cache bans. These do not scale on a human level (and in fact detract from ongoing work, including all of the above), nor do they scale well enough technically that we'd want to automate them or make them any easier or faster to execute.

Currently, the majority of the real, pragmatic problems this causes are on upload.wikimedia.org links for Commons deletions, as seen in e.g. T119038, T109331, T133819, and probably several other duplicates of the same basic thing. A lot of the urgency from requestors on these is driven by a rise in abuse from mobile networks uploading copyright-violating material to Commons (especially through Labs-based proxy tools), which Commons admins are having to deal with at an alarming rate. Given the rate at which they're deleting copyvio content, and the degree to which they care that this content is no longer visible from our servers, they fall into a bucket where the general purging issues affect them to a much greater and more noticeable degree than most.

While the missed purges are entirely our fault, it should be possible to fix individual cases with ?action=purge sorts of solutions. If it's not, then we also have a content-variants problem or some other code problem in the midst of all of this.
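For reference, here's a minimal sketch of re-issuing such a purge programmatically through the MediaWiki API's purge module, the scriptable analogue of loading the page with ?action=purge. The file title is a made-up example, and this only helps when purging is merely lossy; it can't fix a variant that our code never purges at all.

```python
import urllib.parse
import urllib.request

API = "https://commons.wikimedia.org/w/api.php"

def api_purge(title: str) -> bytes:
    """POST action=purge for one title (same effect as ?action=purge)."""
    data = urllib.parse.urlencode({
        "action": "purge",
        "titles": title,
        "format": "json",
    }).encode("utf-8")
    req = urllib.request.Request(API, data=data, headers={
        # Identify the script politely; the default urllib UA is often blocked.
        "User-Agent": "purge-example/0.1 (demo)",
    })
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read()

print(api_purge("File:Example.jpg"))  # "File:Example.jpg" is a placeholder title
```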

A lot of confusion happens in every ticket about this. Browser caching confuses reporters into thinking an item is still cached by us when it's not. Sometimes they're confused by our multiple geographic endpoints (esams, ulsfo, eqiad, codfw). Even within each datacenter, there are multiple frontend caches to which different users map, so different users get inconsistent results when there's an issue. I don't have any good answers for this at the moment.

Regardless, caching isn't the only problem in these cases. The underlying problem of massive copyvio uploads on Commons should be addressed on its own, in some realistic and relatively future-proof way that's less burdensome to administrators and operators everywhere, IMHO.

BBlack created this task. Apr 28 2016, 12:08 AM

I perhaps should've noted this in the description, but we also attempted one partial general improvement and then reverted it. The improvement was to move from the single multicast address we use today to two distinct addresses for the upload and text clusters. Since most of the massive increase is on the text cluster, this would reduce the upload cluster to a much more manageable purge rate.

However, we reverted this because it seemed to make the race issues worse at the time. My guess at that point was that having upload's purges mixed into the queue with the high rate of text purges was somehow making the race condition on upload better rather than worse. We still don't know whether that was really the case, and we could easily try the experiment again.

ori added a comment. Jun 3 2016, 12:00 AM
In T133821, @BBlack wrote:

> Therefore, it's easy for a race condition to occur where an upper-layer cache gets purged of the item, then immediately gets a new request for the item, and then re-fetches the same outdated content from a deeper-layer cache just before that deeper cache processed the same purge request.

Since rMW01c2b0a4255f ("Add $wgCdnReboundPurgeDelay for more consistent CDN purges"), MediaWiki attempts to work around this by issuing a second purge after a short delay. This was documented in a comment in rMWd6ecdc1b36ef ("Add more $wgCdnReboundPurgeDelay comments").

@ori - thanks for the links! It's good to know it's only one extra purge; I wasn't even sure of that.

Technically, even if the delay is long enough, two purges can be insufficient to stop all race conditions. There are up to 4 cache daemons in the path in the worst case, so it would take 4 such delayed purges (currently) to absolutely ensure the race condition is squashed. I don't think we want to do that at this stage, though. The race-losing cases were rare to begin with, and the existing second purge should be "good enough" statistically until we figure out a better long-term plan for the race problem.
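Purely for illustration, here's a sketch of what "one purge per cache layer, delayed" would look like as a schedule. This is not how $wgCdnReboundPurgeDelay is implemented (MediaWiki dispatches its single rebound purge through the job queue); the layer count, the delay value, and the timer-based scheduling below are assumptions.

```python
import threading

CACHE_LAYERS_IN_PATH = 4     # worst-case daemons between a user and the applayer
REBOUND_DELAY_SECONDS = 5.0  # illustrative; must exceed per-layer purge propagation time

def send_purge(url: str) -> None:
    print(f"purging {url}")  # stand-in for the real HTCP / purge-job dispatch

def purge_with_rebounds(url: str) -> None:
    """One immediate purge plus a delayed repeat per additional cache layer,
    so each layer's potential stale refill is squashed by a later purge."""
    send_purge(url)
    for i in range(1, CACHE_LAYERS_IN_PATH):
        threading.Timer(i * REBOUND_DELAY_SECONDS, send_purge, args=(url,)).start()

purge_with_rebounds("https://en.wikipedia.org/wiki/Example")
```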

ori added a comment. Jun 3 2016, 5:44 AM

> However, we reverted this because it seemed to make the race issues worse at the time.

How did you know? Do we have a way of tracking how often we hit the race condition? And do you have any theory as to why cache coherence would improve under load?

BBlack added a comment.

> However, we reverted this because it seemed to make the race issues worse at the time.

> How did you know?

Because the volume of user complaints about dysfunctional purging of images increased, and then decreased again when we reverted.

> Do we have a way of tracking how often we hit the race condition?

No.

> And do you have any theory as to why cache coherence would improve under load?

A better way to put it would be this: the current purge stream is massive in rate, and most of it is text-cluster rather than upload-cluster purges (upload is a small fraction of the total rate). When upload purges are randomly mixed into the text purge stream, that mixing creates some artificial buffering/delay that wouldn't otherwise be present in a stream of just upload purges. This probably has some effect on the race conditions that is difficult to reason about with much certainty, but the user-complaint data seems to indicate that isolating the smaller upload purge stream into its own queues made the races worse.


BBlack added a comment.

I've posted an update over in T124954#3421257 on the TTL-reduction work that has happened in recent months. The TL;DR: even in the case where no purging works at all and the application-specified cache lifetimes are long, the maximum time an object can persist in our Varnish caching stack should now be 4 days at the outside, rather than the multiple weeks that were possible when this ticket was originally opened. This has been the case since early May. We'll eventually get this down to a hard 1-day limit with further work on this front. If you're seeing files that were deleted more than 4 days ago, the issue is almost certainly at a deeper layer than the Varnish caches (e.g. Swift, or MediaWiki's parser cache).
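For anyone following along, the arithmetic behind that 4-day worst case looks roughly like the sketch below. The per-layer 24-hour cap is an assumption for illustration; the 4-layer worst-case path count comes from the earlier comment in this task.

```python
PER_LAYER_TTL_CAP_HOURS = 24  # assumed cap enforced at each cache daemon
LAYERS_IN_PATH = 4            # worst-case cache daemons between a user and the applayer

# Worst case: each layer refreshes from the next one in just before that
# deeper copy expires, so the per-layer caps add up rather than overlap.
worst_case_hours = PER_LAYER_TTL_CAP_HOURS * LAYERS_IN_PATH
print(f"worst case with zero successful purges: {worst_case_hours} hours "
      f"(= {worst_case_hours // 24} days)")
# -> 96 hours (= 4 days)
```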