The contint1001.wikimedia.org-Monthly-1st-Sun-production-contint job is currently running. I detected an unusual amount of disk space being used on backup1001, which is odd, because no backup data should end up on the Bacula director.
I checked, and the cause seems to be a huge number of file attribute records being generated on the director (>17 million), which will probably then be stored in the catalog database for later recovery. While a large number of files is normally something we can scale to without problems, this seems to be an unusual increase in the number of files backed up, to the point of being noticeable on the backup1001 disk space graphs (backup1001 doesn't store any file content!). This is new, and didn't happen the last time a full backup of contint1001 ran. A huge amount of metadata (normally due to a large number of files being backed up) is not an issue by itself, but it will probably make recovering from these backups take a long time.
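As a quick sanity check of those two figures (a minimal sketch; the bytes-per-record value is derived from the numbers above, not measured on the catalog):

```python
# Rough consistency check of the two figures above: ~6 GB of
# attribute metadata for the >17 million file records generated.
metadata_bytes = 6_000_000_000  # "almost 6 GB" seen on backup1001
file_records = 17_000_000       # ">17 million" attribute records

bytes_per_record = metadata_bytes / file_records
print(f"~{bytes_per_record:.0f} bytes per file record")  # ~353 bytes

# Metadata grows roughly linearly with file count, so every extra
# million files adds on the order of 350 MB to the catalog.
```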
Because this is a new pattern (almost 6 GB of file metadata, just the file names and timestamps), I wonder if this is accidental. Maybe paths that were not intended are being backed up, a new service is being backed up, or a large number of files has been accidentally generated. In any case, either removing the files at origin (if they turn out to be leftovers), tarring (archiving) old unused files, adding ignore filters to the backups for non-useful files, or splitting the backup into a few independent jobs, one per path, may help speed up (or, in the worst case, make practically possible) later recoveries.
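If it is accidental, a first step could be locating where the bulk of those files lives on contint1001. A minimal sketch, assuming we can walk the backed-up tree (the /srv starting point is a hypothetical placeholder, not the actual fileset path):

```python
#!/usr/bin/env python3
"""Count files per top-level subdirectory to locate a file
explosion. The starting path is a hypothetical example."""
import os
from collections import Counter

ROOT = "/srv"  # hypothetical: replace with the backed-up path

counts = Counter()
for dirpath, dirnames, filenames in os.walk(ROOT):
    # Attribute every file to the first path component under ROOT.
    rel = os.path.relpath(dirpath, ROOT)
    top = rel.split(os.sep)[0] if rel != "." else "."
    counts[top] += len(filenames)

for top, n in counts.most_common(10):
    print(f"{n:>12,}  {os.path.join(ROOT, top)}")
```

Whichever subtree dominates the count would tell us whether to delete, archive, or filter it out of the fileset.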
For example, s3 database backups usually contain hundreds of thousands of files, one per table per database. Knowing that most of the time individual access will only be needed per database, we tar them into just a few thousand archives, which are faster to back up and recover later than processing many small files (which can take a long time, as each file's attributes have to be written to and read from the catalog database).
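A minimal sketch of that tarring approach (the directory layout and paths are hypothetical placeholders, not the real dump tooling):

```python
#!/usr/bin/env python3
"""Bundle per-table dump files into one tar per database, so the
backup handles a few thousand archives instead of hundreds of
thousands of small files. Paths are hypothetical examples."""
import os
import tarfile

DUMP_ROOT = "/srv/backups/dumps"   # hypothetical: one dir per database
OUT_DIR = "/srv/backups/tarballs"  # hypothetical: archives to back up

os.makedirs(OUT_DIR, exist_ok=True)
for db in sorted(os.listdir(DUMP_ROOT)):
    db_dir = os.path.join(DUMP_ROOT, db)
    if not os.path.isdir(db_dir):
        continue
    # One archive per database keeps per-database restores easy
    # while collapsing one-file-per-table into a single catalog entry.
    with tarfile.open(os.path.join(OUT_DIR, f"{db}.tar.gz"), "w:gz") as tar:
        tar.add(db_dir, arcname=db)
```

Recovering a single database then means restoring one archive, rather than the director tracking (and later reading back) hundreds of thousands of individual catalog entries.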
This requires research first.