At a high level, change propagation involves three main ingredients:
1) distributing change events, and
2) processing and propagating those change events, guided by
3) a dependency graph.
## Publish / subscribe event bus
Many of our internal services are interested in following a common set of events. The most popular events are perhaps [those related to edits](https://meta.wikimedia.org/wiki/Research:MediaWiki_events:_a_generalized_public_event_datasource), which are followed to keep derived content and caches up to date. Our current mechanisms for event propagation aren't very reliable, lead to a lot of duplicated effort, and aren't usable for services other than MediaWiki core.
We intend to set up a more reliable and scalable event bus along with a standard set of update events. This event bus will support a reliable publish/subscribe consumer pattern, which means that several clients can reliably follow the same events, thereby decoupling producers from consumers. This should be architected so that one is able to use it on any MediaWiki installation on the public Internet (think instant Wikidata or instant Commons). Some clients will only be interested in a small subset of events, while others will want all of them. See T84923 for more background on the event bus.
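The key property of such a bus is that each consumer tracks its own read position, so producers stay decoupled from however many consumers follow a topic. A minimal in-memory sketch of this pattern (the class and method names here are illustrative, not the actual EventBus API):

```python
from collections import defaultdict

class EventBus:
    """Toy publish/subscribe bus with per-consumer offsets,
    loosely modeled on Kafka-style topics."""

    def __init__(self):
        self._topics = defaultdict(list)   # topic -> append-only event log
        self._offsets = defaultdict(int)   # (topic, consumer) -> next unread offset

    def publish(self, topic, event):
        self._topics[topic].append(event)

    def poll(self, topic, consumer):
        """Return this consumer's unread events and advance its offset."""
        log = self._topics[topic]
        start = self._offsets[(topic, consumer)]
        self._offsets[(topic, consumer)] = len(log)
        return log[start:]

bus = EventBus()
bus.publish('mediawiki.revision-create', {'title': 'Foo', 'rev_id': 1})
# Two independent consumers each see the same event:
a = bus.poll('mediawiki.revision-create', 'cache-invalidator')
b = bus.poll('mediawiki.revision-create', 'search-indexer')
```

Because offsets are per consumer rather than per topic, adding a new follower never affects existing ones.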
## Dependency tracking
In our applications, dependencies normally form a [DAG](https://en.wikipedia.org/wiki/Directed_acyclic_graph). We currently have specific link tables tracking some kinds of dependencies (image links, template links etc), but nothing that can track tree-shaped relationships in an extensible manner. The current mechanisms are further limited to a single project, which means that they can't be directly used by Wikidata and other shared projects.
A big challenge with such a dependency graph is its maintenance. Dependencies are added and removed all the time, and this needs to be reliably reflected in the dependency graph. Ideally the maintenance of dependency information should be automated to avoid the need to write custom update logic in each service.
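One way to automate this maintenance is to have each re-render *replace* a node's recorded dependency set, so removed links disappear from the graph without any custom update logic. A sketch of that idea, with a hypothetical API:

```python
from collections import defaultdict

class DependencyGraph:
    """Toy dependency tracker. Recording a node's dependencies replaces
    the previous set, keeping the graph current automatically."""

    def __init__(self):
        self._deps = defaultdict(set)    # node -> what it depends on
        self._rdeps = defaultdict(set)   # node -> what depends on it

    def record(self, node, dependencies):
        dependencies = set(dependencies)
        # Drop reverse edges for dependencies that went away.
        for old in self._deps[node] - dependencies:
            self._rdeps[old].discard(node)
        for new in dependencies:
            self._rdeps[new].add(node)
        self._deps[node] = dependencies

    def dependents(self, node):
        """Who needs to be updated when `node` changes?"""
        return set(self._rdeps[node])

g = DependencyGraph()
g.record('Page:A', {'Template:Infobox', 'File:Map.png'})
g.record('Page:A', {'Template:Infobox'})   # file link removed on re-render
```

After the second `record`, `File:Map.png` no longer lists `Page:A` as a dependent, with no explicit removal step by the caller.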
### Addressing of components
To allow addressing, each node in the dependency graph needs a unique identifier. By using deterministic identifiers based on the description of the item, we can avoid duplicate work. It would also be desirable if those identifiers could be used directly to dereference the dependency, ideally in a way that supports loose coupling of systems across projects. URLs, or more generally HTTP requests, can satisfy these requirements. There are length limitations for GET requests (2083 characters in Internet Explorer), but those can likely be worked around with request storage (a GET with a hash of the request) and a POST fall-back for dynamic requests from clients like VisualEditor.
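Deriving the identifier deterministically from the request itself means any two systems describing the same dependency arrive at the same node, with no coordination. A minimal sketch of one possible scheme (the exact hashing and field layout here are assumptions):

```python
import hashlib

def request_id(method, url, body=b''):
    """Deterministic identifier for an HTTP request, so the same
    dependency described twice hashes to the same graph node."""
    h = hashlib.sha256()
    h.update(method.encode('utf-8'))
    h.update(b'\n')
    h.update(url.encode('utf-8'))
    h.update(b'\n')
    h.update(body)
    return h.hexdigest()

# The same logical request always yields the same identifier,
# which can also serve as the key for stored over-long requests.
rid = request_id('GET', 'https://en.wikipedia.org/api/rest_v1/page/html/Foo')
```

The same hash could double as the storage key for the "GET with a hash of the request" fall-back mentioned above.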
For fine-grained template updates or subscriptions, it would also be useful if we could identify fragments of a resource in a standard manner. In a URL, this could potentially be encoded as a query string or fragment identifier. It is important that we make this mechanism uniform and deterministic.
## Change propagation
Change propagation can be broadly implemented using two techniques: push and pull. A push-based change propagation service listens to specific event streams, and then figures out which resources should be updated by consulting the dependency graph and event properties. The update of those resources triggers additional change events, which can then recursively trigger additional updates. Push is generally preferred if there are many reads of each dependent bit of content, or where the lowest possible read latency is required. This is the strategy we currently pursue for template updates.
In pull-based change propagation, dependencies are simply checked on each access. This can be implemented by rendering everything from scratch on each access, or with a slightly more efficient freshness check. Pull is preferred if low propagation latencies need to be supported with high fan-out, and if there are few reads per change.
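The "slightly more efficient freshness check" amounts to comparing the render's timestamp against the last-modified times of its dependencies, rather than re-rendering unconditionally. A sketch, with illustrative data shapes:

```python
def is_fresh(render, dependency_timestamps):
    """Pull-style freshness check: a cached render is fresh only if no
    dependency changed after the render was produced.
    `render` is a dict with a 'rendered_at' timestamp;
    `dependency_timestamps` maps dependency -> last-modified time."""
    return all(ts <= render['rendered_at']
               for ts in dependency_timestamps.values())

cached = {'rendered_at': 100}
assert is_fresh(cached, {'Template:Infobox': 90})        # unchanged since render
assert not is_fresh(cached, {'Template:Infobox': 120})   # edited after render
```

The cost per read is one timestamp lookup per dependency, which is cheap compared to a full re-render but still scales with the number of dependencies.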
Studies [have shown](http://research.yahoo.com/files/sigmod278-silberstein.pdf) that an adaptive combination of push & pull is optimal if the distribution of the number of dependencies and updates is skewed. Template updates are skewed in the number of uses (with some templates used in more than seven million pages), but currently less so in the number of edits. Our current approach of re-rendering all seven million articles can easily result in large backlogs of template updates. It might be useful to consider pull-based or hybrid solutions (where only a timestamp is propagated and then polled) as an alternative to pure push.
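The hybrid variant mentioned above can be sketched as follows: an edit pushes only a cheap last-changed timestamp to the dependency node, and readers pull-compare against their cached render. Names and structures here are illustrative only:

```python
# Hybrid push/pull sketch: push a timestamp, pull-check on read.
last_changed = {}   # dependency -> last-changed timestamp

def on_template_edit(template, now):
    """Push side: O(1) per edit, instead of re-rendering millions of pages."""
    last_changed[template] = now

def needs_rerender(cached_render, templates_used):
    """Pull side: re-render only if some used template changed since."""
    return any(last_changed.get(t, 0) > cached_render['rendered_at']
               for t in templates_used)

on_template_edit('Template:Cite', 200)
assert needs_rerender({'rendered_at': 150}, ['Template:Cite'])
assert not needs_rerender({'rendered_at': 250}, ['Template:Cite'])
```

This trades a small per-read check for the elimination of huge push backlogs on heavily used templates.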
### Implementation sketch
After an event, let's say the save of a new revision of a page, an event message is enqueued into a topic of the distributed queue. The message contains the identifier of the event source, the kind of event and various event-specific metadata. On the other end of the distributed queue, several clients are reading messages off the queue. Each independent client (group in Kafka's case) maintains its own offset (or offsets), which lets multiple consumers react to the same event. When receiving a message, each of these consumers performs a client-specific action.
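A message of this shape might look as follows; the field names are assumptions modeled on the description above, not the exact EventBus schema:

```python
import json

# Illustrative event message for a revision save: event source
# identifier, event kind (via the topic), and event-specific metadata.
event = {
    'meta': {
        'topic': 'mediawiki.revision-create',
        'uri': 'https://en.wikipedia.org/wiki/Foo',   # event source
        'dt': '2016-03-01T12:00:00Z',
    },
    'page_title': 'Foo',
    'rev_id': 123456,
}
message = json.dumps(event)   # serialized form enqueued into the topic
```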
When a change propagation worker receives a message, it will look up a chunk of dependencies in the dependency graph storage. For each dependency, it will call the provided URL (or request template), passing along information from the original event. For each of these dependent updates, this will trigger another update event, recursively propagating the change through the system. When the number of dependencies is large, it will enqueue a follow-up event to trigger the processing of the next page of dependencies later. Once the chunk is fully processed, the worker commits its offset and requests the next message. Should any dependency update fail, it will enqueue a retry event in a separate topic to make sure that the update is retried a few times. Persistently failing jobs will be retired to a 'dead letter' queue for later inspection.
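The chunked processing, retry, and dead-letter logic of such a worker can be sketched in one step function; `deps`, `update`, and `enqueue` are injected stand-ins for the real dependency storage, HTTP calls, and queue:

```python
def process_message(event, deps, update, enqueue, chunk_size=2, max_retries=3):
    """Process one chunk of dependents for an event (illustrative sketch).

    - updates each dependency URL in the current chunk,
    - routes failures to a retry topic (or dead-letter after max_retries),
    - enqueues a continuation event if more dependencies remain.
    """
    offset = event.get('dep_offset', 0)
    chunk = deps[offset:offset + chunk_size]
    for url in chunk:
        try:
            update(url, event)
        except Exception:
            retries = event.get('retries', 0)
            topic = 'retry' if retries < max_retries else 'dead-letter'
            enqueue(topic, dict(event, retries=retries + 1, failed=url))
    if offset + chunk_size < len(deps):
        # Follow-up event triggers processing of the next page later.
        enqueue('continuation', dict(event, dep_offset=offset + chunk_size))

queued = []
def fake_update(url, event):
    if url == 'b':
        raise RuntimeError('update failed')
process_message({}, ['a', 'b', 'c'], fake_update,
                lambda topic, ev: queued.append((topic, ev)))
```

After this call, `queued` holds a retry event for the failed dependency `b` and a continuation event pointing at offset 2.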
## Current status
- {T84923} is deployed, and provides event streams for edits & resource changes. Under the hood, all topics are prefixed by source datacenter & replicated (see T127718), which lets us cleanly move event processing between datacenters.
- {T117933} is gradually being rolled out at the moment. Driven by a [declarative config file](https://github.com/wikimedia/change-propagation/blob/master/config.example.wikimedia.yaml), this service subscribes to EventBus topics, and processes events by making HTTP requests to other services, or by sending purges to Varnishes. Events are consumed from specific topics, and can be further filtered by arbitrary properties, including URL patterns.
- An [example module](https://github.com/wikimedia/change-propagation/blob/master/sys/backlinks.js) for iterative backlink processing was already created. This module nicely separates the expansion of dependencies from their processing, and can serve as a model for further iterative dependency expansion.
- {T126687} introduced a single topic recording URL-based resource changes. This topic is intended to be used for CDN purges, and is already used to trigger secondary updates in the ChangeProp service. @smalyshev and @aaron are looking into sending all MediaWiki CDN purge requests to this topic.
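The topic- and property-based filtering that ChangeProp's declarative rules perform can be sketched as matching an event against a rule pattern; the rule structure and key names here are illustrative, not the actual config format:

```python
import re

# Illustrative rule: subscribe to a topic and filter events whose
# URI matches a pattern, analogous to ChangeProp's declarative config.
rule = {
    'topic': 'resource_change',
    'match': {'meta': {'uri': re.compile(r'^https://[^/]+/api/rest_v1/')}},
}

def matches(rule, topic, event):
    """Does this event on this topic satisfy the rule's filters?"""
    if topic != rule['topic']:
        return False

    def check(pattern, value):
        if isinstance(pattern, dict):
            return all(k in value and check(v, value[k])
                       for k, v in pattern.items())
        if hasattr(pattern, 'search'):            # compiled regex
            return bool(pattern.search(value))
        return pattern == value                   # literal comparison

    return check(rule['match'], event)

hit = matches(rule, 'resource_change',
              {'meta': {'uri': 'https://en.wikipedia.org/api/rest_v1/page/html/Foo'}})
```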
### Next steps and open questions
- **ChangeProp service expansion**: ChangeProp will gradually expand to cover more use cases. Initially the services team will focus on RESTBase's use cases (including red link, template & media re-renders), and will also move CDN purging from RESTBase to ChangeProp.
- **Reliable CDN purging**: There have been various discussions about making CDN purging more reliable. Current ideas include running Kafka clients on each Varnish node, which would effectively replace the best-effort multicast setup.
- **Reliable RCStream**: @Ottomata has been looking into leveraging Kafka events in RCStream. This can potentially let clients catch up after being disconnected.
- **Cross-project dependency tracking & change propagation**: We currently don't have any general way to track dependencies across projects. Special-case mechanisms were developed for commons and to some degree Wikidata, but other applications (like {T91162}) will need dependency tracking abilities as well. {T105766} discusses some options for storing such dependencies in a general manner, but it's early days & we should probably make our requirements more precise before diving too deeply into the concrete design.
## See also
- {T84923}: Reliable event distribution with publish / subscribe queues
- {T105766}
- {T117933}
- {T126687}
- {T105845} is partly about clearly documenting dependencies for each piece of rendered content
- {T88459}
- [Feeding Frenzy: Selectively Materializing Users’ Event Feeds (Silberstein et al, SIGMOD 10)](http://research.yahoo.com/files/sigmod278-silberstein.pdf): Explores trade-off between push & pull based propagation
- [Twitter architecture summary, 2013](http://highscalability.com/blog/2013/7/8/the-architecture-twitter-uses-to-deal-with-150m-active-users.html)
- [Google Percolator (Peng & Dabek, USENIX 10)](https://www.usenix.org/legacy/event/osdi10/tech/full_papers/Peng.pdf): Incremental processing / change propagation system with transactional updates, built on BigTable
- {T48525}