Several times now, I've encountered huge messages in CirrusSearch-related job queue topics. These messages cause MirrorMaker instances to fail because they exceed our `max.request.size` of 5.5MB. Our broker-side maximum message size (`message.max.bytes`) is supposedly configured at 4MB. An individual produce request might be somewhat larger than 4MB due to metadata, etc., but not 1.5MB larger. I've bumped `max.request.size` several times since first encountering this, but now I think it's time to figure out how these messages are making it into Kafka in the first place.
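To rule out a topic-level override of the broker default, the effective per-topic config can be checked with the stock Kafka tooling. A sketch (the ZooKeeper host is a placeholder; any `max.message.bytes` override would show up in the output):

```
kafka-configs.sh --zookeeper ZK_HOST:2181 \
  --entity-type topics \
  --entity-name eqiad.mediawiki.job.cirrusSearchElasticaWrite \
  --describe
```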
Somehow, an 8MB message is making it into the cirrusSearchElasticaWrite job topic. This shouldn't happen.
```
[2018-05-31 01:09:05,546] 311676 [mirrormaker-thread-5] ERROR org.apache.kafka.clients.producer.internals.ErrorLoggingCallback - Error when sending message to topic eqiad.mediawiki.job.cirrusSearchElasticaWrite with key: null, value: 8619618 bytes with error:
org.apache.kafka.common.errors.RecordTooLargeException: The message is 8619706 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration.
```
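Before raising `max.request.size` yet again, it may help to locate the offending records. A minimal sketch of a consumer-side scan that flags messages whose serialized value exceeds the producer limit; the size check itself is a pure function, and the `kafka-python` consumer loop at the bottom is an illustrative assumption (broker address is a placeholder):

```python
MAX_REQUEST_SIZE = 5_500_000  # bytes; matches the 5.5MB producer limit above

def oversized(records, limit=MAX_REQUEST_SIZE):
    """Return (offset, size) pairs for records whose value exceeds `limit`."""
    return [(off, len(val)) for off, val in records if len(val) > limit]

# Example with synthetic records (offset, value bytes); the second one
# mimics the 8,619,618-byte value from the error above.
sample = [(100, b"x" * 1024), (101, b"x" * 8_619_618)]
print(oversized(sample))  # → [(101, 8619618)]

# Against the real topic, feed it actual records instead, e.g.:
# from kafka import KafkaConsumer
# consumer = KafkaConsumer("eqiad.mediawiki.job.cirrusSearchElasticaWrite",
#                          bootstrap_servers="BROKER:9092")  # placeholder
# for msg in consumer:
#     if msg.value and len(msg.value) > MAX_REQUEST_SIZE:
#         print(msg.offset, len(msg.value))
```

This only identifies the offsets; it doesn't answer how an 8MB message got past a 4MB broker limit in the first place, which is the open question.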