EventLogging is home-grown, and was not designed for purposes other than low-volume analytics in MySQL databases. However, the ideas it was based on are solid and have convergently become an industry standard, often called a Stream Data Platform. Over the last two years, we have been developing the EventBus sub-system with the aim of standardizing events so that they can be used both internally, for propagating changes that update dependent artifacts, and externally, by exposing them to clients. While this has been a success, integrating these events with different systems still requires a lot of custom, cumbersome glue code. Open source technologies exist for integrating and processing streams of events.
Engineering teams should be able to quickly develop features that are easy to instrument and measure, and those features should be able to react to events from other systems.
As a way to begin the process of understanding existing challenges with EventLogging, we have created the following document: https://docs.google.com/spreadsheets/d/1M1A4YEdlF0T79KgQO7g4_jpzNSe-XCn3lO0_TzhO6yQ/edit?ts=5ae7bc8a#gid=0. This document is meant to list all the steps involved in instrumenting and analyzing with EventLogging, indicate which ones are the most time-consuming and error-prone, identify which teams participate, and be specific about the challenges in each step.
This program also overlaps with the Better Use of Data program. See also https://docs.google.com/spreadsheets/d/16cALJVeql2euSad3GgXJjDCOVYsBRC64ietw8oRzsbI/edit#gid=0
Each of the components described below is a unit of technical output of this program: either a deployed service/tool, or documentation and policy guidelines.
Let's first define a couple of terms before the individual technical components are detailed below.
- Event - A strongly typed and schemaed piece of data, usually representing something happening at a definite time. E.g. revision-create, user-button-click, page-load, etc.
- Stream - A contiguous (often unending) collection of events (loosely) ordered by time.
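To make the definitions above concrete, a revision-create event might look like the following. The field names and values here are purely illustrative, not a finalized schema:

```python
import json

# An illustrative revision-create event. Field names and values are
# hypothetical, not a settled WMF schema.
revision_create = {
    "meta": {
        "stream": "mediawiki.revision-create",  # the stream this event belongs to
        "dt": "2018-06-01T12:00:00Z",           # event timestamp (ISO 8601)
        "id": "b0caf18d-6c7f-4403-947d-2712bbe28610",
    },
    "database": "enwiki",
    "page_title": "Example_Page",
    "rev_id": 123456,
}

print(json.dumps(revision_create, indent=2))
```

A stream, then, is simply an ongoing, time-ordered sequence of events like this one, all conforming to the same schema.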
Stream Intake Service
Ingests streams of events from internal and external clients (browsers & apps). EventLogging and EventBus do some of this already, but are limited in scope and scale. We will either refactor EventLogging, or use something new.
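As a sketch of what producing to such an intake service could look like from a client, the following builds an HTTP POST carrying a batch of events. The endpoint URL and API shape are assumptions; the actual interface has not yet been chosen:

```python
import json
import urllib.request

INTAKE_URL = "https://intake.example.org/v1/events"  # hypothetical endpoint

def build_intake_request(events):
    """Build (but do not send) an HTTP POST request carrying a batch of events."""
    body = json.dumps(events).encode("utf-8")
    return urllib.request.Request(
        INTAKE_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_intake_request([{"meta": {"stream": "test.event"}, "action": "click"}])
print(req.get_method(), req.full_url)
```

The intake service would validate each event against its schema (see the Event Schema Repository below) before producing it into the stream.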
Event Schema Repository
This should combine the existing EventLogging schemas with mediawiki/event-schemas. It might be something new, or it might be improvements to the existing MediaWiki-based schema repository.
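For illustration, a schema in such a repository might be a JSONSchema like the following (expressed here as a Python dict; the field names are hypothetical), along with the kind of validation the repository enables:

```python
# A hypothetical JSONSchema for a user-button-click event, as it might
# live in a unified schema repository.
button_click_schema = {
    "title": "user-button-click",
    "type": "object",
    "required": ["meta", "button_id"],
    "properties": {
        "meta": {"type": "object"},
        "button_id": {"type": "string"},
        "session_id": {"type": "string"},  # optional field
    },
}

def check_required(event, schema):
    """A toy stand-in for full JSONSchema validation: list missing required fields."""
    return [f for f in schema["required"] if f not in event]

missing = check_required({"meta": {}}, button_click_schema)
print(missing)  # the required fields the event is missing
```

In practice a real JSONSchema validator would be used; the point is that every event in a stream can be checked against a single canonical schema.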
Event Schema Guidelines
Some guidelines exist already for analytics purposes, and some exist for mediawiki/event-schemas. We should unify these.
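One rule a unified guidelines document would likely include is backward-compatible schema evolution: a new schema version may add optional fields, but must not remove existing fields or add new required ones. The exact policy is still to be decided; the check below is only a sketch of that assumed rule:

```python
def is_backward_compatible(old_schema, new_schema):
    """Check that a new schema version only adds optional fields:
    every old property still exists, and no new required fields appear.
    (The rule itself is an assumption, not settled policy.)"""
    old_props = set(old_schema.get("properties", {}))
    new_props = set(new_schema.get("properties", {}))
    old_required = set(old_schema.get("required", []))
    new_required = set(new_schema.get("required", []))
    return old_props <= new_props and new_required <= old_required

v1 = {"properties": {"meta": {}, "button_id": {}}, "required": ["meta"]}
# v2 adds an optional session_id field: a backward-compatible change.
v2 = {"properties": {"meta": {}, "button_id": {}, "session_id": {}},
      "required": ["meta"]}
print(is_backward_compatible(v1, v2))
```

A check like this could run in CI on the schema repository, rejecting incompatible schema changes before they reach producers.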
Stream Connectors
For ingestion to and from various state stores (MySQL, Redis, Druid, Cassandra, HDFS, etc.). This will likely be Kafka Connect, which we will need to adapt to work with JSONSchemas and our Event Schema Repository.
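For example, a connector that sinks one event stream into a datastore is created by POSTing a JSON config to Kafka Connect's REST API. The connector class and option names below are illustrative placeholders; the real configs will depend on the stores chosen and on the JSONSchema adaptation work:

```python
import json

# Illustrative Kafka Connect connector config: sink one event stream
# into a state store. Connector class and option names are hypothetical.
connector = {
    "name": "revision-create-sink",
    "config": {
        "connector.class": "ExampleJsonSchemaSinkConnector",  # placeholder
        "topics": "mediawiki.revision-create",
        "tasks.max": "2",
        # Adaptation point: resolve each event's schema from our
        # Event Schema Repository rather than from a binary schema registry.
        "schema.repository.uri": "https://schema.example.org",  # hypothetical
    },
}

# This JSON body would be POSTed to the Connect REST API,
# e.g. POST http://connect-host:8083/connectors
print(json.dumps(connector, indent=2))
```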
Stream Configuration Service
Product needs more dynamic control over how client-side producers of events are configured. This includes things like sampling rates, time-based event-producing windows, etc. (This component was originally conceived as part of the Event Schema Repository component, but is complex and architecturally different enough to warrant its own component here.)
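As a sketch of how a client might use such dynamic configuration, consider deterministic sampling driven by a config fetched from the service. The config shape and field names here are assumptions:

```python
import hashlib

# Hypothetical config a client might fetch from the Stream Config Service.
stream_config = {
    "stream": "user-button-click",
    "sampling_rate": 0.1,  # produce events for 10% of sessions
}

def in_sample(session_id, rate):
    """Deterministically decide whether this session is sampled,
    by hashing the session id into a bucket in [0, 1)."""
    digest = hashlib.sha256(session_id.encode("utf-8")).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64
    return bucket < rate

# Every client makes the same decision for the same session, so a
# session's events are either all produced or all dropped.
print(in_sample("session-abc-123", stream_config["sampling_rate"]))
```

Because the rate lives in the config service rather than in shipped code, product could raise or lower sampling without a new client deployment.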
Stream Processing system and dependency tracking conceptual design
Engineers should have a standardized way to build, deploy, and maintain stream processing jobs, for both analytics and production purposes. A very common use of stream processing at WMF is change-propagation; doing this well requires a dependency tracking mechanism, which is a very long-term goal. We want to choose stream processing technologies that work toward this goal.
This component is the lowest priority of the Modern Event Platform, and as such will receive more thought and planning toward the end of the program.
- T105766: RFC: Dependency graph storage; sketch: adjacency list in DB
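The adjacency-list idea sketched in T105766 can be illustrated in a few lines: store, for each artifact, the artifacts that depend on it, then propagate a change by walking the graph. This is only a conceptual toy, not a proposed implementation, and the artifact names are made up:

```python
from collections import deque

# Adjacency list: artifact -> artifacts that depend on it (hypothetical data).
dependents = {
    "Template:Infobox": ["Page:Alan_Turing", "Page:Ada_Lovelace"],
    "Page:Alan_Turing": ["Cache:Alan_Turing"],
    "Page:Ada_Lovelace": ["Cache:Ada_Lovelace"],
}

def affected_by(change, graph):
    """Return every artifact transitively affected by a change (BFS)."""
    seen, queue = set(), deque([change])
    while queue:
        node = queue.popleft()
        for dep in graph.get(node, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return sorted(seen)

print(affected_by("Template:Infobox", dependents))
```

In a real system the graph would live in a database and the traversal would emit change-propagation events into a stream, rather than returning a list.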
NOTE: This timeline is very much a work in progress, and is only a guess at when progress will be made on the different components.
- Q4: Interview product and technology stakeholders to collect desires, use cases, and requirements.
- Q1: Survey and choose technologies and solutions with input from Services and Operations.
- Q2: Begin implementation and deployment of some chosen techs.
- Q3: Deployment of Stream Intake service for production use.
- Q4: Deployment of Stream Ingestion service for Hadoop intake (Kafka Connect with JSONSchema support).
- Q4: Deployment of Stream Intake Service for analytics use.
- Development of Stream Config Service and GUI.
- Stream processing development and deployment?
Use case collection
- JADE for ORES
- Fundraising banner impressions pipeline
- WDQS state updates
- Job Queue (implementation ongoing)
- Frontend Cache (varnish) invalidation
- Scalable EventLogging (with automatic visualization in tools (Pivot, etc.))
- Realtime SQL queries and state store updates. These can be used to verify in real time that events are valid and contain what they should.
- Trending pageviews & edits
- Mobile App Events
- ElasticSearch index updates incorporating new revisions & ORES scores
- Automatic Prometheus metric transformation and collection
- Dependency tracking transport and stream processing
- Stream of reference/citation events: https://etherpad.wikimedia.org/p/RefEvents
(...add more as collected!)