This task will track the migration of EventLogging schemas and streams to Event Platform schemas and streams.
Tracking and planning of what schemas to migrate is being done in the EventLogging Schema Migration Audit spreadsheet.
Explanation of what this means for legacy EventLogging schema owners:
https://wikitech.wikimedia.org/wiki/Event_Platform/EventLogging_legacy
We will do our best to migrate schemas in groups associated with teams.
Schemas produced by EventLogging extension
- Community Tech
- Anti-harassment
- T268517: Migrate Anti-Harassment EventLogging schemas to Event Platform (SpecialInvestigate, SpecialMuteSubmit)
- Growth
- T267333: Migrate Growth EventLogging schemas to Event Platform (HelpPanel, HomepageModule, HomepageVisit, NewcomerTask)
- Product Infrastructure
- Structured Data
- Web
- Editing
- Language
- ContentTranslationAbuseFilter
- T267352: UniversalLanguageSelector Event Platform Migration
- Performance
- WMDE Wikidata Camp
- WMDE Technical Wishes
- T275005: CodeMirrorUsage Event Platform Migration
- T275007: ReferencePreviewsBaseline Event Platform Migration
- T275008: ReferencePreviewsCite Event Platform Migration
- T275009: ReferencePreviewsPopups Event Platform Migration
- T275011: TemplateDataApi Event Platform Migration
- T275012: TemplateDataEditor Event Platform Migration
- T275013: TwoColConflictConflict Event Platform Migration
- T275014: TwoColConflictExit Event Platform Migration
- T275015: VisualEditorTemplateDialogUse Event Platform Migration
- WMDEBanner*
- FR Tech
Schemas produced by other software
- Inuka
- T273219: KaiOS / Inuka Event Platform client - must be done before schema migrations can happen
- T267344: InukaPageView Event Platform Migration
- T267345: KaiOSAppFeedback Event Platform Migration
- T267346: KaiOSAppFirstRun Event Platform Migration
- Research
- Reader's Web
- Whatever is left in T282131: Determine which remaining legacy EventLogging schemas need to be migrated or decommissioned
Migration plan for a schema:
1. Pick a schema to migrate
Schemas to migrate are listed in the EventLogging Schema Migration Audit spreadsheet.
2. Create a new task to track this schema's migration
# This should work on MacOS to open a new Phab Task form in a browser
# with some fields already filled out.
function new_el_migration_phab_task() {
    schema_name="$1"
    open "https://phabricator.wikimedia.org/maniphest/task/edit/form/1/?title=$schema_name Event Platform Migration&description=See: https://wikitech.wikimedia.org/wiki/Event_Platform/EventLogging_legacy Unless otherwise notified, client IP and consequently geocoded data will no longer be collected for this event data after this migration. Please let us know if this should continue to be captured. See also T262626.&parent=259163&tags=Event-Platform&subscribers=Ottomata,Mforns"
}

new_el_migration_phab_task SearchSatisfaction
Paste this checklist into the new task. This checklist is a summary of the instructions described here.
- 1. Pick a schema to migrate
- 2. Create a new task to track this schema's migration
- 3. Create /analytics/legacy/ schema
- 4. Edit-protect the metawiki Schema page at https://meta.wikimedia.org/wiki/Schema:<SchemaName>
- 5. Manually evolve the Hive table to use new schema
- 6. Add entry to wgEventStreams, wgEventLoggingStreamNames and wgEventLoggingSchemas in operations/mediawiki-config
- 7. Once the legacy stream's data is fully produced through EventGate, switch to using the Refine job that uses the schema repo instead of meta.wm.org
- 8. Edit the producer extension.json and set EventLoggingSchemas to the new schema URI
- 9. Once the producer extension.json is fully deployed, edit wgEventLoggingSchemas in operations/mediawiki-config InitialiseSettings.php and remove the schema's entry.
- 10. Mark the schema as migrated in the EventLogging Schema Migration Audit spreadsheet
Link this task in the EventLogging Schema Migration Audit spreadsheet.
On the task, contact the owner of the schema and ask if they need client IP and/or geocoded data in the Hive table.
3. Create /analytics/legacy/<schemaname>/current.yaml schema
Using the eventlogging_legacy_schema_convert script in the schemas/event/secondary repository:
old_schema_name=SearchSatisfaction
new_schema_name=$(echo $old_schema_name | tr '[:upper:]' '[:lower:]')
mkdir ./jsonschema/analytics/legacy/$new_schema_name
node ./scripts/eventlogging_legacy_schema_convert.js $old_schema_name > ./jsonschema/analytics/legacy/$new_schema_name/current.yaml
You'll need to edit at least the JSONSchema examples in current.yaml. The easiest thing to do is to get an event out of Kafka and use that as a starting point.
# Get the last event out of Kafka
kafkacat -C -b kafka-jumbo1001.eqiad.wmnet -o -1 -c 1 -t eventlogging_SearchSatisfaction
If the schema owner indicated that they need client IP and/or geocoded data in Hive, you'll need to add a $ref to the fragment/http/client_ip schema. Example here.
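If you want to see exactly which fields that $ref pulls in, you can fetch the fragment schema directly. A minimal sketch, assuming the fragment is materialized in the secondary repo with a latest version (the URL path is an assumption, adjust as needed):

# Assumed URL layout; point this at wherever fragment/http/client_ip is actually materialized.
curl -s https://schema.wikimedia.org/repositories/secondary/jsonschema/fragment/http/client_ip/latest.yaml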
(!) Please make sure that npm test is not throwing any errors other than snake_case inconsistencies (those are allowed for legacy schemas).
To do that, first comment out L21 in test/jsonschema/repository.test.js, then run npm test so that your new schema is checked.
When done, remember to revert test/jsonschema/repository.test.js to its original state before committing!
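For example (a sketch; the only non-obvious part is remembering the revert):

# With the line noted above commented out, only snake_case inconsistency
# warnings should remain for legacy schemas:
npm test

# When done, restore the test file to its original state before committing:
git checkout -- test/jsonschema/repository.test.js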
4. Edit-protect the metawiki Schema page at https://meta.wikimedia.org/wiki/Schema:$old_schema_name
Use this as the edit-protect log message:
This schema has been moved to https://schema.wikimedia.org/#!//secondary/jsonschema/analytics/legacy. See also https://wikitech.wikimedia.org/wiki/Event_Platform/EventLogging_legacy
5. Manually evolve the Hive table to use new schema
Once the above schema is merged (you might have to wait 10 minutes after merge for Spark to be able to retrieve it):
old_schema_name=SearchSatisfaction
new_schema_name=$(echo $old_schema_name | tr '[:upper:]' '[:lower:]')
table="event.${new_schema_name}"
schema_uri="/analytics/legacy/${new_schema_name}/latest"

# First run in dry-run mode (the default) to see what EvolveHiveTable will do.
spark2-submit \
    --conf spark.driver.extraClassPath=/usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-common.jar:/srv/deployment/analytics/refinery/artifacts/hive-jdbc-1.1.0-cdh5.10.0.jar:/srv/deployment/analytics/refinery/artifacts/hive-service-1.1.0-cdh5.10.0.jar \
    --class org.wikimedia.analytics.refinery.job.refine.tool.EvolveHiveTable \
    /srv/deployment/analytics/refinery/artifacts/refinery-job-shaded.jar \
    --table="${table}" --schema_uri="${schema_uri}"

# If that looks good, evolve the table:
spark2-submit \
    --conf spark.driver.extraClassPath=/usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-common.jar:/srv/deployment/analytics/refinery/artifacts/hive-jdbc-1.1.0-cdh5.10.0.jar:/srv/deployment/analytics/refinery/artifacts/hive-service-1.1.0-cdh5.10.0.jar \
    --class org.wikimedia.analytics.refinery.job.refine.tool.EvolveHiveTable \
    /srv/deployment/analytics/refinery/artifacts/refinery-job-shaded.jar \
    --table="${table}" --schema_uri="${schema_uri}" --dry_run=false
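Afterwards you can sanity-check that the table picked up the new fields. A minimal sketch, assuming Hive CLI access on an analytics client (reuses ${new_schema_name} from above):

# Describe the evolved table and eyeball the new columns:
hive -e "DESCRIBE event.${new_schema_name};"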
6. Add entry to wgEventStreams, wgEventLoggingStreamNames and wgEventLoggingSchemas in operations/mediawiki-config
Do a rolling deploy of the changes to make the EventLogging extension produce data to EventGate. Example: https://gerrit.wikimedia.org/r/c/operations/mediawiki-config/+/607333/2/wmf-config/InitialiseSettings.php
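Once the config change is deployed, one way to confirm the new stream config is live is via the EventStreamConfig API. A sketch (the response shape is an assumption, adjust the jq filter as needed):

# Fetch the deployed stream config for the migrated stream:
curl -s 'https://meta.wikimedia.org/w/api.php?action=streamconfigs&format=json' \
    | jq '.streams["eventlogging_SearchSatisfaction"]'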
To test that events make it through the pipeline:
Check that events for your stream are still flowing in this Grafana dashboard, or by consuming the eventlogging_$old_schema_name topic from Kafka, for example:
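# Tail the legacy topic (same broker as above) and watch for fresh events:
kafkacat -C -b kafka-jumbo1001.eqiad.wmnet -o end -t eventlogging_SearchSatisfaction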
You can submit an event via a browser developer console:
var schemaName = 'Test'; // i.e. $old_schema_name
var event = { OtherMessage: 'Hello from JS' }; // example event here
mw.eventLog.logEvent( schemaName, event );
Or, for a server-side PHP EventLogging event, you can emit a test event with mwscript shell.php on deployment.eqiad.wmnet:
cd /srv/mediawiki-staging
mwscript shell.php --wiki testwiki
>>> $old_schema_name = 'Test';
>>> $event = [ "OtherMessage" => "Hello from PHP" ];
>>> EventLogging::logEvent( $old_schema_name, -1, $event );
7. Once the legacy stream's data is fully produced through EventGate, switch to using the Refine job that uses the schema repo instead of meta.wm.org
Also add the SchemaName to the eventlogging-processor disabled schemas list in puppet, in modules/eventlogging/files/plugins.py
Example: https://gerrit.wikimedia.org/r/c/operations/puppet/+/644259
This will prevent eventlogging-processor from producing what are now invalid legacy events from clients that are running old code.
Restart eventlogging-processor on eventlog1003:
sudo puppet agent -t  # make sure the change has been applied
sudo service eventlogging-processor@client-side-* restart
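To confirm the processors came back up after the restart (unit name pattern taken from the restart command above):

# systemctl accepts glob patterns for status queries:
sudo systemctl status 'eventlogging-processor@client-side-*'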
8. Edit the producer extension.json and set EventLoggingSchemas to the new schema URI
Example: https://gerrit.wikimedia.org/r/c/mediawiki/extensions/ContentTranslation/+/639578
9. Once the producer extension.json is fully deployed, edit wgEventLoggingSchemas in operations/mediawiki-config InitialiseSettings.php and remove the schema's entry.
Example: https://gerrit.wikimedia.org/r/c/operations/mediawiki-config/+/639579
10. Mark the schema as migrated in the EventLogging Schema Migration Audit spreadsheet