For example, for the revision https://uk.wikipedia.org/w/index.php?title=Вікіпедія:Шафа&oldid=21337479, the RecordLintJob serialized as JSON is 5.6 MB, and this is not the largest example.
For the Kafka #JobQueue this is simply too big, and I think it would be too big for any queue implementation.
Can we, perhaps, limit the number of linting errors per job and split a huge job into several smaller jobs when it exceeds that limit?
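
As a rough illustration of what I mean (a minimal Python sketch, not the actual PHP RecordLintJob code; the limit value and the `make_record_lint_job` helper are hypothetical):

```python
# Assumed per-job cap on lint errors; the real threshold would need tuning
# against the queue's message-size limit.
MAX_ERRORS_PER_JOB = 1000

def make_record_lint_job(page_title, revision_id, errors):
    """Stand-in for constructing one RecordLintJob payload."""
    return {
        "title": page_title,
        "revision": revision_id,
        "errors": errors,
    }

def split_lint_jobs(page_title, revision_id, all_errors):
    """Yield one job per chunk of at most MAX_ERRORS_PER_JOB errors,
    so no single queued job carries the whole error list."""
    for start in range(0, len(all_errors), MAX_ERRORS_PER_JOB):
        chunk = all_errors[start:start + MAX_ERRORS_PER_JOB]
        yield make_record_lint_job(page_title, revision_id, chunk)
```

Each resulting job would then serialize to a bounded size instead of the multi-megabyte payload above.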