This is a parent task for the work to be done for the Modern Event Platform Program.
EventLogging is home-grown and was not designed for purposes other than low-volume analytics in MySQL databases. However, the ideas it was based on are solid, and they have since converged with what is now an industry standard, often called a Stream Data Platform. Over the last two years we have been developing the EventBus sub-system, with the aim of standardizing events so they can be used both internally, for propagating changes to update dependent artifacts, and externally, by exposing them to clients. While this has been a success, integrating these events with different systems still requires a lot of custom, cumbersome glue code. Open source technologies exist for integrating and processing streams of events.
Engineering teams should be able to quickly develop features that are easy to instrument and measure, and those features should be able to react to events from other systems.
As a way to begin the process of understanding existing challenges with EventLogging, we have created the following document: https://docs.google.com/spreadsheets/d/1M1A4YEdlF0T79KgQO7g4_jpzNSe-XCn3lO0_TzhO6yQ/edit?ts=5ae7bc8a#gid=0. This document is meant to list out all the steps to instrumenting and analyzing with EventLogging, indicate which ones are the most time-consuming and error-prone, identify which teams participate, and be specific about the challenges in each step.
This program also overlaps with the Better Use of Data program. See also https://docs.google.com/spreadsheets/d/16cALJVeql2euSad3GgXJjDCOVYsBRC64ietw8oRzsbI/edit#gid=0.
For some historical context see the slides at Event Infrastructure at WMF (2018).
Each of the components described below is a unit of technical output of this program. They are either deployed services/tools, or documentation and policy guidelines.
Let's first define a couple of terms before the individual technical components are detailed below.
- Event - A strongly typed and schemaed piece of data, usually representing something happening at a definite time. E.g. revision-create, user-button-click, page-load, etc.
- Stream - A contiguous (often unending) collection of events (loosely) ordered by time.
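To make the two terms concrete, here is a minimal sketch of a schemaed event in Python. The envelope fields (`$schema`, `meta`, `dt`) and the schema URI are assumptions for illustration, loosely following the conventions used in our event schemas:

```python
import json
from datetime import datetime, timezone

# Hypothetical strongly typed, schemaed event. The "$schema" URI points
# at the schema this event must validate against; "meta.stream" names
# the stream the event belongs to.
event = {
    "$schema": "/mediawiki/revision/create/1.0.0",
    "meta": {
        "stream": "mediawiki.revision-create",
        "dt": datetime.now(timezone.utc).isoformat(),
    },
    "database": "enwiki",
    "rev_id": 123456,
}

# A stream is then just a (loosely) time-ordered sequence of such events.
print(json.dumps(event, indent=2))
```

A stream in this sense is unbounded: consumers read events as they arrive rather than querying a finished dataset.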
A scalable service for accepting and validating events from internal and external clients (browsers & apps). EventLogging + EventBus do some of this already, but are limited in scope and scale. This is EventGate.
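As a sketch of what intake looks like from a producer's point of view, the snippet below builds an HTTP POST of a batch of events using only the standard library. The endpoint URL and the test event's schema/stream names are hypothetical; the "JSON array of events" request shape is an assumption for illustration:

```python
import json
import urllib.request

# Hypothetical intake endpoint; the real URL depends on deployment.
EVENTGATE_URL = "https://intake.example.org/v1/events"

# A batch of events to produce; schema and stream names are made up.
events = [{
    "$schema": "/test/event/1.0.0",
    "meta": {"stream": "test.event"},
    "test": "hello",
}]

body = json.dumps(events).encode("utf-8")
req = urllib.request.Request(
    EVENTGATE_URL,
    data=body,
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would actually send the batch; the intake
# service validates each event against its schema before producing it.
print(req.get_method(), req.full_url, len(body), "bytes")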
This consists of several git repositories, all pulled together and easily accessible over a simple HTTP service / file browser. It may eventually also get a nice GUI.
Some schemas already exist for analytics purposes, and some exist in mediawiki/event-schemas. We should unify these.
Stream Connectors for ingestion to and from various state stores
(MySQL, Redis, Druid, Cassandra, HDFS, etc.) This will likely be Kafka Connect. We will need to adapt Kafka Connect to work with JSONSchemas and our Event Schema Repository.
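As a sketch of what a connector looks like, Kafka Connect sinks are configured declaratively. The fragment below shows the general shape of a sink connector config; the class and option names are those of Confluent's HDFS connector, and the topic and HDFS URL are hypothetical. Whatever connector we adapt or build for JSONSchemas would use the same overall structure:

```json
{
  "name": "hdfs-sink-example",
  "config": {
    "connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
    "topics": "eventgate.test.event",
    "hdfs.url": "hdfs://namenode.example.org:8020",
    "flush.size": "1000",
    "tasks.max": "1"
  }
}
```

Adapting Kafka Connect to our Event Schema Repository would mostly mean teaching converters to resolve and validate against our JSONSchemas rather than Avro schemas.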
Product needs more dynamic control over how client-side producers of events are configured. This includes things like sampling rate, time-based event producing windows, etc. (This component was originally conceived as part of the Event Schema Repository component, but it is complex and architecturally different enough to warrant its own component here.)
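To illustrate one use of such a service, here is a minimal sketch of client-side sampling driven by a fetched stream configuration. The config shape, field names, and hashing scheme are all assumptions for illustration; the point is that the sampling decision is deterministic per session, so it stays stable across page loads:

```python
import hashlib

# Hypothetical config as it might be served by the configuration service.
stream_config = {
    "test.event": {"sampling_rate": 0.1},  # produce for ~10% of sessions
}

def in_sample(stream: str, session_id: str, config: dict) -> bool:
    """Deterministically decide whether this session produces events
    for `stream`. Streams without config default to always-on."""
    rate = config.get(stream, {}).get("sampling_rate", 1.0)
    digest = hashlib.sha256(session_id.encode("utf-8")).digest()
    # Map the first 8 bytes of the hash onto [0, 1).
    bucket = int.from_bytes(digest[:8], "big") / 2**64
    return bucket < rate

print(in_sample("test.event", "session-abc", stream_config))
```

Because the rate lives in server-side configuration, Product can change sampling without shipping new client code.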
Conceptual design of a Stream Processing system with dependency tracking
Engineers should have a standardized way to build, deploy, and maintain stream processing jobs, for both analytics and production purposes. A very common use of stream processing at WMF is change propagation, which, to do well, requires a dependency tracking mechanism, a very long-term goal. We want to choose stream processing technologies that work toward this goal.
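A change-propagation style job can be sketched as a function over a stream: consume change events, look up dependent artifacts, and emit derived update tasks. The event and output shapes below are assumptions for illustration, not our actual schemas:

```python
# Minimal sketch of a change-propagation stream processor. A real
# dependency tracker would replace the hard-coded fan-out below with a
# lookup of which artifacts (caches, search indexes, etc.) depend on
# the changed page.
def propagate(changes):
    for event in changes:
        if event.get("topic") == "revision-create":
            yield {"action": "rerender", "page_id": event["page_id"]}
            yield {"action": "reindex", "page_id": event["page_id"]}

changes = [
    {"topic": "revision-create", "page_id": 42},
    {"topic": "page-move", "page_id": 7},
]
print(list(propagate(changes)))
```

The dependency tracking question is exactly what the hard-coded fan-out hides: knowing, at scale, which downstream artifacts a given change should touch.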
This component is the lowest priority of the Modern Event Platform, and as such will have more thought and planning towards the end of the program.
- T105766: RFC: Dependency graph storage; sketch: adjacency list in DB
- Q4: Interview product and technology stakeholders to collect desires, use cases, and requirements.
- Q1: Survey and choose technologies and solutions with input from Services and Operations.
- Q2: Begin implementation and deployment of some chosen techs.
- Q3: Deployment of eventgate-analytics stream intake service - T206785
- Q4: Deployment of eventgate-main stream intake service - T218346
- Q4: Decommission Avro streams in favor of eventgate-analytics JSON-based ones - T188136
- Q4: (new) CI support for event schemas repo - T206814
Stream Intake Service - T201068
Migrate Mediawiki EventBus events to eventgate-main & deprecate eventlogging-service-eventbus
- Q1: Continue migrating events to eventgate-main - T211248
- Q2: Decommission eventlogging-service-eventbus (Done in Q1)
Event Schema Repositories - T201063
- Q1: Schema repository hooks to generate dereferenced canonical version - T206812
- Q2: Support $ref in JSONSchemas - T206824
- Q2/Q3: Set up a public HTTP endpoint for schemas - T233630
- Q2/Q3: Create new 'primary' and 'secondary' schema repositories.
- Q3: Deprecate 'mediawiki' schema repository. (Moved to Q1 2020-2021)
Stream Configuration Service - T205319
- Q1: start planning with Audiences - Design Document
- Q2: implementation prototype - T233634
- Q3: Deployment and use by EventLogging and eventgate-analytics-external - T242122
Replace EventLogging Analytics
This is a long-term project, to be worked on in collaboration with Audiences engineers, which includes work on the Event Schema Repositories and Event Stream Configuration Service components.
- Q1: Begin planning this work with Audiences - Design Document
- Q2: Coding work on all of these pieces (e.g. client side library to use Stream Config and POST to eventgate) - T228175
- Q2-Q4: deployment of Stream Config Service and some usages of eventgate-analytics-external
- Q4: Begin migrating existent EventLogging streams to EventGate - T238230 and T238138
See also: T225237: Better Use of Data
NOTE: 2019-09: This work is stalled due to licensing issues with Confluent's HDFS Connector
- Q1: Kafka Connect development work (Kubernetes? YARN? Standalone?) - T223626
- Q2: Kafka Connect deployment
- Q2-Q4: Replace usages of Camus HDFS with Kafka Connect HDFS - T223628
Stream Processing System & Dependency Tracking
NOTE: 2019-11: This work is stalled due to lack of owner for dependency tracking
Work for next year:
- collect basic requirements
- figure out whether a streaming platform + graph db support basic requirements at scale
(As of 2020-06 these are timeline guesses, not goals.)
- Q1-Q3: Migrate all legacy EventLogging streams to Eventgate (see also)
- Q1: Deprecate 'mediawiki' schema repository
- Q1: Centralize all event stream configuration in mediawiki-config
- Q1: Automate Analytics Event Ingestion jobs using EventStreamConfig - T251609
- Q1: Improve monitoring of Analytics Event Ingestion using canary events - T251609
- JADE for ORES
- Fundraising banner impressions pipeline
- WDQS state updates - T244590: [Epic] Rework the WDQS updater as an event driven application
- Job Queue (implementation ongoing)
- Frontend Cache (varnish) invalidation
- Scalable EventLogging (with automatic visualization in tools (Pivot, etc.))
- Realtime SQL queries and state store updates. Can be used to verify in real time that events are valid and contain what they should
- Trending pageviews & edits
- Mobile App Events
- ElasticSearch index updates incorporating new revisions & ORES scores
- Automatic Prometheus metric transformation and collection
- Dependency tracking transport and stream processing
- Stream of reference/citation events: https://etherpad.wikimedia.org/p/RefEvents
- Client side error logging rate limiting and de-duping via Stream Processing - T217142
- Stream processing: filtering the edit text stream for specific keywords
- Stream processing: diff stream
- Stream processing: revision token stream, for ORES and for search.
- Stream processing: realtime historical data endpoint T240387: MW REST API Historical Data Endpoint Needs
- Stream processing: DDoS and other traffic anomaly detection:
- Outlier detection
  - Adaptive rate limiting
- Emitting structured metadata about page edits at save+parse time (links added, images added, wikidata items added, templates used, etc.)
- Monitoring and alerting on spikes of referrers (Isaac)
(...add more as collected!)