Redis-based job queues support delayed job execution. Implement change dispatching on top of this mechanism instead of relying on a cron job.
* Every edit schedules a delayed DispatchTriggerJob. The job is completely generic and carries no parameters, so all DispatchTriggerJobs are identical. This means a new job gets ignored (deduplicated) if an older job of the same kind is already waiting for execution.
* (option a) DispatchTriggerJob would poll the changes table, as we do now, and dispatch any pending changes to the most lagged wiki(s). This means that passes for long-tail wikis will often end up doing nothing. If "doing nothing" is quick enough, we could simply move on to the next wiki, until some minimum number of changes has been processed or some maximum time has been exceeded.
* (option b) DispatchTriggerJob would take the next batch of changes and send notifications for all of them to the interested wikis. That means each pass has to (potentially) push to all wikis, which may take quite a long time.
* If there are still pending changes or wikis to service, DispatchTriggerJob schedules another (delayed?) DispatchTriggerJob before it exits. How many new triggers should be scheduled? We need to avoid starvation, but also prevent explosive growth of the number of trigger jobs.
**Whiteboard**: u=dev c=backend p=0