As described in T355536, we want to automate the update of the MediaWiki history snapshot in Druid.
For that, we need Airflow and AQS to communicate.
The medium of choice is a Cassandra table, since we already have tools in place to load to Cassandra from Airflow, and to read Cassandra from AQS.
Druid was also considered as an option; however, the key-value nature of Cassandra fits this purpose better than Druid's cube-aggregation model.
This task is about defining a good schema for the Cassandra table together with the Data Persistence team,
and creating an Airflow DAG to load the latest MediaWiki reduced snapshot into that table.
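As a starting point for that discussion, a minimal key-value sketch might look like the following. All keyspace, table, and column names here are placeholders (assumptions for illustration), not an agreed-upon schema; the actual design is exactly what needs to be settled with the Data Persistence team:

```sql
-- Hypothetical CQL sketch; every identifier below is a placeholder.
-- One partition per dataset; snapshots cluster newest-first so AQS
-- can read the latest snapshot with a simple LIMIT 1 query.
CREATE TABLE IF NOT EXISTS aqs.available_snapshots (
    dataset   text,      -- e.g. 'mediawiki_history_reduced'
    snapshot  text,      -- snapshot identifier, e.g. '2024-01'
    loaded_at timestamp, -- when the Airflow DAG wrote this row
    PRIMARY KEY (dataset, snapshot)
) WITH CLUSTERING ORDER BY (snapshot DESC);
```

With a shape like this, the Airflow DAG would insert one row per successfully loaded snapshot, and AQS would query the `dataset` partition for the first clustered row to discover the most recent one.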