
Make CDN purges reliable
Closed, Resolved · Public

Description

This meta-task is to serve as a pointer/blocker so we don't keep having to re-explain the same basic problems in many tickets.

Background

The CDN infrastructure is composed of two layers. Traffic flows from frontend-cache -> backend-cache -> MediaWiki application.

  1. "Frontend" cache. Their main purpose is to handle high load. Traffic is generally distributed equally among them. Implementation-wise this is currently backed by Varnish, stored in RAM, has the logical capacity equal to what one such server can hold in memory (given each frontend server is effectively the same).
  2. "Backend" cache. Their main purpose is to handle wide range of pages. Traffic is distributed by hashing the URL to one of the backends (any relevant request headers factor in as well).

When an article is edited, or when we cascade updates from templates and Wikidata items, we need to purge the relevant URLs from the CDN caches. We use HTCP (multicast UDP) to send the purges from MediaWiki to the cache nodes.

See https://wikitech.wikimedia.org/wiki/MediaWiki_at_WMF#Infrastructure for a more complete overview, including links to in-depth docs.

Root problems
  1. Network congestion. The use of HTCP (multicast UDP) generates a lot of internal traffic to our cache nodes.
  2. Packet loss. UDP is unreliable, especially at high rates multicast across broad networks and contending with other heavy traffic to the cache boxes on network queues and such. Historically, this wasn't a huge issue when internal traffic was much more stable; for quite a long time, user-notable missed purges were rare.
  3. Bad renewal of purged content, due to a cache-layer race condition. The multicast HTCP purges have no awareness of our distinct frontend and backend layers.

It is easy for a purge to reach a frontend first (instead of the backends). Upon the next visit to that article, the frontend falls back to the backend cache, which may serve it its old copy, effectively "whitewashing" the old version back into the frontend. Sometime later the backend receives the purge, but the frontends have already moved on, and this does not currently self-correct. Again, historically this wasn't a huge problem in practice; the race condition was rarely noticed for the articles people paid most attention to.

This problem is non-trivial to solve because there can be a local backlog of purges. Even if we "simply" send the purge to all the backends first, and only then purge the frontends, this does not help per se because the action isn't instantaneous. Each server has its own inbox of purges it has received for processing. What matters is not the order in which purges are sent to the cache layers, but the order in which they are processed.
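
To make that concrete, here is a toy calculation (backlog sizes and drain rates are invented) showing that a purge sent to the backend first can still be applied after the frontend's purge, because each node drains its own queue independently:

package main

import (
    "fmt"
    "time"
)

// applyTime estimates when a purge sent at sentAt is actually applied,
// given how many purges are already queued on that node and how fast
// the node drains its queue (purges per second).
func applyTime(sentAt time.Time, queued int, perSecond int) time.Time {
    return sentAt.Add(time.Duration(queued) * time.Second / time.Duration(perSecond))
}

func main() {
    now := time.Now()
    // Invented numbers: the backend has a large local backlog,
    // the frontend has almost none.
    backendApplied := applyTime(now, 50000, 1000)                   // sent first
    frontendApplied := applyTime(now.Add(1*time.Second), 100, 1000) // sent a second later
    fmt.Println("backend applies at: ", backendApplied)
    fmt.Println("frontend applies at:", frontendApplied)
    fmt.Println("frontend purged before backend:", frontendApplied.Before(backendApplied))
}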

  4. Many URLs to purge (content variants and derivatives). Often a single piece of unique content is reflected under several distinct URLs (think language conversion, image resizing, mobile vs desktop rendering, the History page of an article, etc). Historically, this was solved by either never caching or never purging the "less-important" views of article metadata.

Impact

Since late 2015, the above problems have gotten worse and more noticeable:

  • T124418 outlines how our rate of purge requests has multiplied by more than an order of magnitude in this recent timeframe. There are several distinct days on which the rate stepped up permanently, and we can only guess at the various causes:
    • We know some of the causes were code changes that attempted to fix the variants problem (4) above by issuing purges for many more distinct URLs per unique content source than we have historically.
    • We know some of the causes were code changes trying to fix problems (2) and (3) by sending multiple delayed repeats of every purge request a short time later to try to paper over races and loss, which further multiplies the total rate.
    • We suspect that when most of the purging was centralized through the JobQueue somewhere in this timeframe, this probably also multiplied the purge rate, due to JobQueue bugs repeating past purges that were already completed for no good reason.
    • Some wikis have actually added JavaScript in various places on the wikis themselves to execute automatic purge-on-view as a recourse, further exacerbating the problem in an incredibly frustrating way.
  • Because of the massive increase in raw purge rate at the caches, we're almost certainly in worse shape than we were before. Various parties' attempts to 'fix' the problems have overwhelmed us with far more purge traffic than we've ever had before, which results in more loss to network queues and buffers at various layers. We now get far more frequent reports of failed purging than we did historically. This image gives a decent view of the purge traffic increase:

[Image: Screen Shot 2016-04-07 at 7.47.28 PM.png — graph of the purge traffic increase]

What we're doing

We've basically given up on trying to backtrack through whatever has gone wrong in the past several months, since the T124418 investigation went nowhere. However, we already have longer-term solutions in the works to fix various aspects of the underlying issues anyway, which will hopefully obviate this whole mess:

  • Enable XKey support in Varnish (Aug 2016). A key component is T122881 where (after upgrading to varnish4, which is still ongoing) we'll get the XKey vmod going to provide a realistic, scalable solution for problem (4) with content variants.
  • Deploy EventBus/Kafka support to MediaWiki (2015-2016). Another key component is the EventBus work (T116786), where we hope to centralize purge requests and fan them out to the caches more reliably without using multicast. We'll probably solve the layer races within EventBus as well, by having different subscription topics for different layers and staggering through them, but that's a relatively minor detail for this ticket.
  • Shorten CDN expiry to reduce need for purging (2016). We're also looking in T124954 at reducing our maximum cache TTLs pretty dramatically, which greatly reduces the fallout from any minor purge loss compared with today's long TTLs, but that's stalled out a bit while working on the varnish4 -> xkey backlog for the first point.
  • Introduce MediaWiki rebound purging (2015). To reduce the chances of problem (3) happening with race conditions, we added a stop-gap that effectively rolls the dice twice: the purge is repeated once, 20 seconds later, via the job queue. Configured in MediaWiki via $wgCdnReboundPurgeDelay (see the sketch after this list).
  • Introduce chained purging in vhtcpd (2017). Within a single server (which hosts both a frontend instance and a backend instance), chain the purge processing so that the backend purge is applied before the frontend one. This reduces the chances of problem (3) happening, but does not rule it out, because there is no coordination between backends. See also https://gerrit.wikimedia.org/r/382868/ (91cda076) and https://wikitech.wikimedia.org/wiki/Multicast_HTCP_purging.
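
To illustrate the rebound-purging stop-gap from the list above, a minimal sketch; the purge function is a stand-in, and in MediaWiki the delayed repeat actually goes through the job queue rather than an in-process timer:

package main

import (
    "fmt"
    "time"
)

// purge stands in for sending the actual CDN purge for url.
func purge(url string) {
    fmt.Println(time.Now().Format(time.RFC3339), "PURGE", url)
}

func main() {
    url := "/wiki/Foo"

    // Initial purge, sent immediately.
    purge(url)

    // Rebound purge: repeat the same purge after a delay, hoping the
    // second attempt lands after any racing stale refill. The 20s value
    // mirrors the $wgCdnReboundPurgeDelay setting mentioned above.
    time.AfterFunc(20*time.Second, func() { purge(url) })

    // Keep the program alive long enough for the rebound to fire.
    time.Sleep(25 * time.Second)
}
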
Future thoughts

The amount of remaining work to get from where we are today to a better solution is non-trivial. It will probably be months before we've significantly reduced or eliminated purging problems, not weeks or days. In the meantime, we don't have a whole lot of awesome ways to cope with this.

If easy administrative tools to simply re-issue purges (e.g. ?action=purge) do not paper over the problem, our only other recourse is having operators execute manual Varnish cache bans. These do not scale on a human level (and in fact detract from ongoing work, including all of the above), nor do they scale well enough at a technical level for us to want to automate them or make them any easier to execute faster.

Currently, the majority of the real pragmatic problems this is causing are on upload.wikimedia.org links for Commons deletions, as seen in e.g. T119038, T109331, T133819, and probably several other duplicates of the same basic thing. A lot of the urgency from requestors on these is driven by a rise in Commons abuse from mobile networks to upload copyvio material (especially through labs-based proxy tools), which Commons admins are having to deal with at an alarming rate. Given the rate at which they're deleting copyvio content, and the degree to which they care that this content is no longer visible from our servers, they are affected by the general purging issues to a much greater and more noticeable degree than most.

While that's totally our fault (the missed purges), it should be possible to fix individual cases with ?action=purge sorts of solutions. If it's not, then we have a content-variants problem or some other code problem in the midst of all of this as well.

A lot of confusion arises in every ticket about this. Browser caching confuses reporters into thinking the item is still cached by us when it's not. Sometimes they're confused by our multiple geographic endpoints (esams, ulsfo, eqiad, codfw). Even within each datacenter, there are multiple frontend caches to which different users will map, so they get inconsistent results when there's an issue. I don't have any good answers for this at the moment.

Regardless, caching isn't the only problem in these cases. The underlying problem of massive copyvio uploads on commons should be addressed on its own in some realistic and relatively-future-proof way that's less burdensome to administrators and operators everywhere, IMHO.

Event Timeline

There are a very large number of changes, so older changes are hidden.

Change 385415 had a related patch set uploaded (by BBlack; owner: BBlack):
[operations/puppet@production] htcppurger: per-dc/cluster delay data

https://gerrit.wikimedia.org/r/385415

Change 385415 merged by BBlack:
[operations/puppet@production] htcppurger: per-dc/cluster delay data

https://gerrit.wikimedia.org/r/385415

Mentioned in SAL (#wikimedia-operations) [2017-10-20T18:52:25Z] <bblack> vhtcpd upgrade + queue delay puppetization deploy ( https://gerrit.wikimedia.org/r/385415 ) done on cp* fleet - T133821

This continues to be a pain point for WP0 abuse, and probably a major accident waiting to happen in general (imagine failing to honor DMCA takedown time limits, or some kind of attack material remaining available for days). Are there further steps planned to investigate/resolve the issue?

What I really need to dig on this further is an easy way to see a list of recent WP0-abuse-related deletions on various wikis. Am I missing some way to use the deletion log search interfaces?

The user-side of deletion logs does not inherently have a search function, unless the specific actions are marked with a tag.

Err, we should really move the sub-conversation back to T171881. This ticket is more about general reliability problems and/or race conditions, not about the WP0 abuse specifically.

mobrovac closed subtask Restricted Task as Resolved. Feb 20 2019, 11:58 PM
Bawolff reopened subtask Restricted Task as Open. Feb 21 2019, 4:30 PM
mobrovac closed subtask Restricted Task as Resolved. Mar 12 2019, 2:37 PM
Krinkle renamed this task from "Content purges are unreliable" to "Make CDN purges reliable". Apr 6 2020, 5:34 PM
Krinkle updated the task description.

Change 586390 had a related patch set uploaded (by Krinkle; owner: CDanis):
[operations/mediawiki-config@master] reverse-proxy: Disable rebound purges

https://gerrit.wikimedia.org/r/586390

Change 586390 abandoned by CDanis:
reverse-proxy: Disable rebound purges

https://gerrit.wikimedia.org/r/586390

Change 592615 had a related patch set uploaded (by Ema; owner: Ema):
[operations/puppet@production] ATS: stop logging PURGE traffic

https://gerrit.wikimedia.org/r/592615

Change 592615 merged by Ema:
[operations/puppet@production] ATS: stop logging PURGE traffic

https://gerrit.wikimedia.org/r/592615

Since purged is now in production, and we have some ongoing work that will reduce the amount of purges we send (T250261), I think it's time to revisit the idea of moving purges to Kafka. This would also help with the transition of change-prop to Kubernetes.

In order to make purges completely reliable we would need a quite complex setup, but I think we can reap the benefits progressively.

One first step would be to just replace UDP multicast with Kafka as a transport for purges. This would give us a series of advantages:

  • Reliability of transmission
  • A backlog, and the ability to restart from a specific offset even upon a purged service restart
  • The ability to define prioritized queues for e.g. direct edits

In this model, we would need to do the following things:

  • Define a schema for a "url purge message".
  • Add to purged the ability to read such messages from multiple Kafka topics. In this model, every purged server will be its own consumer group and will read all messages from the topics. Purged should either listen to multicast or consume from Kafka, not both (see the consumer sketch after this list).
  • Add a new method to HtmlCacheUpdate to submit messages to eventgate using the schema mentioned above. We should have the ability to pick a topic, depending on the priority of the purge.
  • Start sending purges on both channels from the jobrunners, and progressively switch purged to consume from kafka.
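
For illustration only, a minimal sketch of what the consumer side could look like, written against the github.com/segmentio/kafka-go client. The broker address, consumer-group naming, topic name and local cache port are all assumptions for the sketch, and this is not the actual purged implementation:

package main

import (
    "context"
    "encoding/json"
    "log"
    "net/http"
    "net/url"

    kafka "github.com/segmentio/kafka-go"
)

// purgeEvent carries the only field this sketch needs from a
// resource_change-style message: the URI to purge.
type purgeEvent struct {
    Meta struct {
        URI string `json:"uri"`
    } `json:"meta"`
}

func main() {
    r := kafka.NewReader(kafka.ReaderConfig{
        Brokers: []string{"kafka-main1001:9092"}, // hypothetical broker
        GroupID: "purged-cp1001",                 // one consumer group per purged instance
        Topic:   "eqiad.resource-purge",          // hypothetical topic name
    })
    defer r.Close()

    for {
        m, err := r.ReadMessage(context.Background())
        if err != nil {
            log.Fatal(err)
        }
        var ev purgeEvent
        if err := json.Unmarshal(m.Value, &ev); err != nil {
            log.Printf("skipping bad message: %v", err)
            continue
        }
        u, err := url.Parse(ev.Meta.URI)
        if err != nil {
            continue
        }
        // Turn the public URL into a PURGE against the local cache
        // instance (the port is hypothetical).
        req, _ := http.NewRequest("PURGE", "http://127.0.0.1:3128"+u.RequestURI(), nil)
        req.Host = u.Host
        if resp, err := http.DefaultClient.Do(req); err != nil {
            log.Printf("purge failed: %v", err)
        } else {
            resp.Body.Close()
        }
    }
}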

Here is a diagram representing the purge request flow for a job:

[Image: purges-with-kafka.png — diagram of the purge request flow for a job]

At a later time, we could think of changing the logic to make purges avoid race conditions, removing the need for the rebound purges.
One way to implement this would be the following:

  • No more changes are needed at the application layer
  • All purged servers join a single consumer group per datacenter. This will ensure each purge message is consumed by only one purged instance.
  • This instance will take care of sending the purges to all the cache backends in the DC first, and to all the frontends afterwards

This would ensure there are no fe/be race conditions.
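
A minimal sketch of that ordering step, under the assumption that the consuming instance issues plain HTTP PURGE requests; host names and ports are hypothetical, and real code would need retries and error handling:

package main

import (
    "fmt"
    "net/http"
    "sync"
)

// purgeAll issues an HTTP PURGE for path to every host in hosts and
// waits until all of them have been attempted.
func purgeAll(hosts []string, path string) {
    var wg sync.WaitGroup
    for _, h := range hosts {
        wg.Add(1)
        go func(h string) {
            defer wg.Done()
            req, _ := http.NewRequest("PURGE", "http://"+h+path, nil)
            if resp, err := http.DefaultClient.Do(req); err == nil {
                resp.Body.Close()
            }
        }(h)
    }
    wg.Wait()
}

func main() {
    backends := []string{"cp-backend-1:3128", "cp-backend-2:3128"} // hypothetical
    frontends := []string{"cp-frontend-1:80", "cp-frontend-2:80"}  // hypothetical
    path := "/wiki/Foo"

    // Purge every backend first, and only once that has completed
    // purge the frontends, so a frontend can never re-fetch a stale
    // copy from a not-yet-purged backend.
    purgeAll(backends, path)
    purgeAll(frontends, path)
    fmt.Println("purged", path)
}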

  • Define a schema for a "url purge message".

If I can throw in another $0.02 here - I would scope this bigger than a URL, and think of it as a schema for purging broader things as well. "Purge a URL" is one kind of purge we have today, and will probably always be needed as a baseline capability, but we've always wanted the ability to purge on a more semantic level, as with the earlier (never really completed, and now everything has changed) X-Key work. The idea is the ability to purge on alternate K:V sets that can be used to tag small sets of related content (not large swaths; it only scales well to small-ish sets). For example, a purge might have a key of type article and a value like enwiki:Foo, which would purge all of the potentially-many outputs related to enwiki's Foo article (history, various content snippet outputs from APIs, etc). We'd control this by having all the related content outputs contain a special header like X-Key: article=enwiki:foo, and having the caches build alternate lookup indices on these keys to efficiently purge content on them.
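
As a toy illustration of the key-to-content index idea (an in-memory sketch; this is not the Varnish xkey vmod, and the URLs and key are made up):

package main

import "fmt"

// tagIndex maps a purge key such as "article=enwiki:foo" to the set of
// cached URLs that were tagged with it via an X-Key-style header.
type tagIndex map[string]map[string]bool

// tag records that url was served with the given X-Key values.
func (t tagIndex) tag(url string, keys ...string) {
    for _, k := range keys {
        if t[k] == nil {
            t[k] = map[string]bool{}
        }
        t[k][url] = true
    }
}

// purgeByKey returns every cached URL tagged with key, i.e. the set of
// objects a single semantic purge would invalidate, and drops the index entry.
func (t tagIndex) purgeByKey(key string) []string {
    var urls []string
    for u := range t[key] {
        urls = append(urls, u)
    }
    delete(t, key)
    return urls
}

func main() {
    idx := tagIndex{}
    idx.tag("/wiki/Foo", "article=enwiki:foo")
    idx.tag("/w/index.php?title=Foo&action=history", "article=enwiki:foo")
    idx.tag("/api/rest_v1/page/summary/Foo", "article=enwiki:foo")

    fmt.Println(idx.purgeByKey("article=enwiki:foo")) // all three URLs
}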

I volunteer the RESTBase stack to be a guinea pig for this. Currently we already process all the purges via Kafka. We post a resource_change event into several Kafka topics (one for purges derived from direct page editing and one for purges derived from transclusions) for every URI we need to purge. We have a change-prop rule at the very end that translates the Kafka messages into HTCP messages and sends them over UDP.

So, for the RESTBase stack, moving to Kafka purging will be trivial. Once we have seen the solution working, we can move on to MediaWiki.

  • Define a schema for a "url purge message".

If I can throw in another $0.02 here - I would scope this bigger than a URL, and think of it as a schema for purging broader things as well. "Purge a URL" is one kind of purge we have today, and will probably always be needed as a baseline capability, but we've always wanted the ability to purge on a more semantic level, as with the earlier (never really completed, and now everything has changed) X-Key work. The idea is the ability to purge on alternate K:V sets that can be used to tag small sets of related content (not large swaths; it only scales well to small-ish sets). For example, a purge might have a key of type article and a value like enwiki:Foo, which would purge all of the potentially-many outputs related to enwiki's Foo article (history, various content snippet outputs from APIs, etc). We'd control this by having all the related content outputs contain a special header like X-Key: article=enwiki:foo, and having the caches build alternate lookup indices on these keys to efficiently purge content on them.

Ok, this means we need to create our own schema for this which we will be able to extend as much as we want in the future. I'll work on that first.

Looking at our existing event schemas, resource_change has all the information we need, but also much more. We would like to get a much smaller object to transmit, and specifically we only want to define:

  1. uri: the url to purge
  2. root_event_ts: timestamp of the root event causing the purge
  3. tags: [optional] a set of tags to attach to the event.

In fact, the minimal valid message using resource_change as a schema would look like this:

{
    "$schema": "/resource_change/1.0.0",
    "meta": {
        "id": "aaaaaaaa-bbbb-bbbb-bbbb-123456789012",
        "dt": "2020-04-30T11:37:53+02:00",
        "stream": "purge",
        "uri": "https://it.wikipedia.org/wiki/Francesco_Totti"
    },
    "root_event": {
        "dt": "2020-04-24T09:00:00+02:00",
        "signature": ""
    }
}

while we could imagine creating something like:

{
    "$schema": "/resource_purge/1.0.0",
    "uri": "https://it.wikipedia.org/wiki/Francesco_Totti",
    "root_event_dt": "2020-04-24T09:00:00+02:00"
}

which is a 56% reduction in size. I consider that pretty significant, but maybe in Kafka's context that doesn't really matter and we prefer standardizing on fewer schemas. I'll generate a new schema anyway, and we can discuss the merits in a review.

Change 593487 had a related patch set uploaded (by Giuseppe Lavagetto; owner: Giuseppe Lavagetto):
[mediawiki/event-schemas@master] Add schema for purge events.

https://gerrit.wikimedia.org/r/593487

I like the root event timestamp info. We could potentially put in future rules to help by ignoring ancient purges, in some cases (e.g. if we can guarantee the cache's contents are no older than 24h, we can also ignore root events older than 24h, which might speed up replaying a backlog of data...).
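
A minimal sketch of such a rule, assuming the root event timestamp is RFC 3339 and the maximum cache TTL is 24h (both illustrative):

package main

import (
    "fmt"
    "time"
)

// shouldPurge reports whether a purge is still worth applying: if the
// root event is older than the longest possible cache TTL, the object
// has already expired on its own and the purge can be skipped.
func shouldPurge(rootEventDT string, maxTTL time.Duration, now time.Time) bool {
    t, err := time.Parse(time.RFC3339, rootEventDT)
    if err != nil {
        return true // when in doubt, purge
    }
    return now.Sub(t) < maxTTL
}

func main() {
    now := time.Now()
    fmt.Println(shouldPurge(now.Add(-2*time.Hour).Format(time.RFC3339), 24*time.Hour, now))  // true
    fmt.Println(shouldPurge(now.Add(-48*time.Hour).Format(time.RFC3339), 24*time.Hour, now)) // false
}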

Change 593487 abandoned by Giuseppe Lavagetto:
Add schema for purge events.

https://gerrit.wikimedia.org/r/593487

After a discussion on the patch, it was clearer to me that some information can't be removed from the message, and that makes resource_change the perfect fit for our use-case.

Change 594147 had a related patch set uploaded (by Giuseppe Lavagetto; owner: Giuseppe Lavagetto):
[operations/software/purged@master] Add the ability to consume from kafka

https://gerrit.wikimedia.org/r/594147

Change 594148 had a related patch set uploaded (by Giuseppe Lavagetto; owner: Giuseppe Lavagetto):
[operations/software/purged@master] Add integration tests using docker-compose

https://gerrit.wikimedia.org/r/594148

At a later time, we could think of changing the logic to make purges avoid race conditions, removing the need for the rebound purges.
One way to implement this would be the following:

  • No more changes are needed at the application layer
  • All purged servers join a single consumer group per datacenter. This will ensure each purge message is consumed by only one purged instance.
  • This instance will take care of sending the purges to all the cache backends in the DC first, and to all the frontends afterwards

This would ensure there are no fe/be race conditions.

I guess we have two categories of rebound purges: the MediaWiki ones (for DB replication lag) and the infrastructure ones for cache-tier race mitigation. The proposed scheme would eliminate the latter. The former case is largely negligible (it only applies to URLs of pages directly edited, not those changed via templates/files).

Change 595502 had a related patch set uploaded (by Giuseppe Lavagetto; owner: Giuseppe Lavagetto):
[operations/puppet@production] purged: add support for kafka

https://gerrit.wikimedia.org/r/595502

At a later time, we could think of changing the logic to make purges avoid race conditions, removing the need for the rebound purges.
One way to implement this would be the following:

  • No more changes are needed at the application layer
  • All purged servers join a single consumer group per datacenter. This will ensure each purge message is consumed by only one purged instance.
  • This instance will take care of sending the purges to all the cache backends in the DC first, and to all the frontends afterwards

This would ensure there are no fe/be race conditions.

I guess we have two categories of rebound purges: the MediaWiki ones (for DB replication lag) and the infrastructure ones for cache-tier race mitigation. The proposed scheme would eliminate the latter. The former case is largely negligible (it only applies to URLs of pages directly edited, not those changed via templates/files).

I think we can basically stop sending rebound purges at this point for anything but direct edits, given how much more reliable purged is at keeping the queue of outstanding purges down.

Change 595905 had a related patch set uploaded (by Giuseppe Lavagetto; owner: Giuseppe Lavagetto):
[operations/puppet@production] cache::text: enable reading purges from kafka on cp3050

https://gerrit.wikimedia.org/r/595905

Change 594147 merged by Giuseppe Lavagetto:
[operations/software/purged@master] Add the ability to consume from kafka

https://gerrit.wikimedia.org/r/594147

Change 594148 merged by Giuseppe Lavagetto:
[operations/software/purged@master] Add integration tests using docker-compose

https://gerrit.wikimedia.org/r/594148

Mentioned in SAL (#wikimedia-operations) [2020-05-13T09:21:37Z] <_joe_> installing purged 0.11 on cp2028 T133821

Mentioned in SAL (#wikimedia-operations) [2020-05-13T09:32:51Z] <_joe_> installing purged 0.11 on cp2027 T133821

Mentioned in SAL (#wikimedia-operations) [2020-05-13T14:54:58Z] <_joe_> upgrading + restarting purged across ulsfo and codfw T133821

Change 595502 merged by Giuseppe Lavagetto:
[operations/puppet@production] purged: add support for kafka

https://gerrit.wikimedia.org/r/595502

Change 595905 merged by Giuseppe Lavagetto:
[operations/puppet@production] cache::text: enable reading purges from kafka on cp2027

https://gerrit.wikimedia.org/r/595905

Change 596651 had a related patch set uploaded (by Giuseppe Lavagetto; owner: Giuseppe Lavagetto):
[operations/puppet@production] cache::text: enable consuming from kafka everywhere

https://gerrit.wikimedia.org/r/596651

Change 597051 had a related patch set uploaded (by Giuseppe Lavagetto; owner: Giuseppe Lavagetto):
[operations/puppet@production] purged: enable consuming from kafka on cp2029 too

https://gerrit.wikimedia.org/r/597051

Change 597051 merged by Giuseppe Lavagetto:
[operations/puppet@production] purged: enable consuming from kafka on cp2029 too

https://gerrit.wikimedia.org/r/597051

Change 596651 merged by Giuseppe Lavagetto:
[operations/puppet@production] cache::text: enable consuming from kafka everywhere

https://gerrit.wikimedia.org/r/596651

Mentioned in SAL (#wikimedia-operations) [2020-05-18T14:19:27Z] <_joe_> start consuming $dc.resource-purge kafka topic from purged in all of codfw T133821

Mentioned in SAL (#wikimedia-operations) [2020-05-18T14:23:25Z] <_joe_> start consuming $dc.resource-purge kafka topic from purged in all of eqsin, ulsfo T133821

Mentioned in SAL (#wikimedia-operations) [2020-05-18T14:29:03Z] <_joe_> start consuming $dc.resource-purge kafka topic from purged in all of eqiad T133821

Mentioned in SAL (#wikimedia-operations) [2020-05-18T14:33:03Z] <_joe_> start consuming $dc.resource-purge kafka topic from purged in all of esams T133821

Status update: purged is now consuming purges from restbase directly via kafka and not via multicast anymore. This should unblock the complete migration of changeprop to kubernetes, amongst other things.

Change 604430 had a related patch set uploaded (by Ema; owner: Ema):
[operations/puppet@production] cache: make upload consume purges from kafka

https://gerrit.wikimedia.org/r/604430

Change 604430 merged by Ema:
[operations/puppet@production] cache: make upload consume purges from kafka

https://gerrit.wikimedia.org/r/604430

Change 604743 had a related patch set uploaded (by Ema; owner: Ema):
[operations/puppet@production] purged: make Kafka cluster name configurable

https://gerrit.wikimedia.org/r/604743

Change 604743 merged by Ema:
[operations/puppet@production] purged: make Kafka cluster name configurable

https://gerrit.wikimedia.org/r/604743

BBlack assigned this task to ema.

This should've been closed back when T250781 closed - all purge traffic now goes via kafka queues and multicast purging is no more. We might have more to do on rate reduction separately in T250205, but I don't think that needs to hold this ancient, epic, somewhat ambiguous task open.