We should at least have a volume threshold on the amount of data being sanitized, so we don't miss hours that were never sanitized. Ideally this would be tracked in Graphite.
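As a rough illustration of the kind of check I mean (a minimal sketch only; the datapoint shape mirrors what a Graphite render API call returns, but the metric and the threshold value here are made up):

```python
# Hypothetical sketch: flag hours whose sanitized-data volume is missing or
# below a threshold, given per-hour datapoints shaped like a Graphite render
# API response ([value, timestamp] pairs). Threshold is an assumption.
from datetime import datetime, timezone

def low_volume_hours(datapoints, min_events_per_hour=1000):
    """Return UTC timestamps of hours whose count is None or below threshold."""
    flagged = []
    for value, ts in datapoints:
        if value is None or value < min_events_per_hour:
            flagged.append(datetime.fromtimestamp(ts, tz=timezone.utc))
    return flagged

# Example: two healthy hours, one empty hour, one suspiciously low hour.
points = [[5000, 1500000000], [None, 1500003600], [4200, 1500007200], [12, 1500010800]]
print(low_volume_hours(points))
```

Something like this could run periodically and alarm on any flagged hour.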
I think the RefineMonitor alarms cover Hadoop sanitization: if raw data is present in the eventlogging raw directories, RefineMonitor will alarm when it has not been refined. But do we equally alarm if Camus has not been able to pull data from Kafka for a given time range? cc @Ottomata to confirm.