Update processing cannot be handled by a single Flink operator slot. To support (and hint) parallel processing inside Flink, the topic should be partitioned.
AC:
- SRE/ops: kafka-test is configured with 5 partitions for the topic (eqiad|codfw).cirrussearch.update_pipeline.update.rc0
- stream configuration for cirrussearch.update_pipeline.update.rc0 should define message_key_fields (see 983719)
- the producer writes explicitly keyed records to the Kafka sink
- the consumer reads keyed records from the partitioned topic and reinterprets them as a keyed stream
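The producer/consumer criteria above only work if both sides agree on how a key maps to a partition. A minimal sketch of that mapping, assuming the Kafka Java client's default partitioner (murmur2 hash of the serialized key bytes, sign bit masked, modulo partition count); the key format and the `partition_for` helper name are illustrative, not part of the pipeline:

```python
def murmur2(data: bytes) -> int:
    """32-bit murmur2 as used by the Kafka Java client's default partitioner."""
    m = 0x5BD1E995
    h = (0x9747B28C ^ len(data)) & 0xFFFFFFFF
    # Process the key four bytes at a time (little-endian words).
    n = len(data) & ~3
    for i in range(0, n, 4):
        k = int.from_bytes(data[i:i + 4], "little")
        k = (k * m) & 0xFFFFFFFF
        k ^= k >> 24
        k = (k * m) & 0xFFFFFFFF
        h = (h * m) & 0xFFFFFFFF
        h ^= k
    # Fold in the trailing 1-3 bytes (fall-through, as in the Java code).
    extra = len(data) & 3
    if extra == 3:
        h ^= data[n + 2] << 16
    if extra >= 2:
        h ^= data[n + 1] << 8
    if extra >= 1:
        h ^= data[n]
        h = (h * m) & 0xFFFFFFFF
    h ^= h >> 13
    h = (h * m) & 0xFFFFFFFF
    h ^= h >> 15
    return h

def partition_for(key: bytes, num_partitions: int) -> int:
    # Mask off the sign bit so the result is non-negative,
    # then take the hash modulo the partition count.
    return (murmur2(key) & 0x7FFFFFFF) % num_partitions

# Records carrying the same key always land on the same partition,
# so per-key ordering survives the producer -> topic -> consumer hop.
p = partition_for(b"enwiki:Q42", 5)
assert p == partition_for(b"enwiki:Q42", 5)
assert 0 <= p < 5
```

Because the assignment is a pure function of the key bytes, the consumer can safely reinterpret the partitioned topic as a keyed stream without a full shuffle.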