
mendelevium (otrs) running out of inodes
Closed, Resolved · Public

Description

Noticed today after the check for free inodes was deployed: only 5% of inodes free on mendelevium.

https://grafana.wikimedia.org/dashboard/db/prometheus-machine-stats?panelId=12&fullscreen&orgId=1&var-server=mendelevium&var-datasource=eqiad%20prometheus%2Fops&from=1493139669827&to=1500915669827

Looks like there's a periodic cleanup going on, though the machine will eventually run out anyway; a possible temporary fix is to grow the filesystem.
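For reference, a minimal sketch of that temporary fix, assuming an ext4 root on a virtio disk that can be expanded (device names taken from the df output below; the exact procedure was not part of this task). Growing ext4 also raises the inode count, since inodes are allocated per block group:

# grow partition 1 of /dev/vda to fill the enlarged disk (growpart is in cloud-guest-utils)
sudo growpart /dev/vda 1
# grow the ext4 filesystem online; this adds block groups and therefore inodes
sudo resize2fs /dev/vda1
# verify the new inode count
df -i /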

Event Timeline

It looks like previous otrs versions in /opt are using the inodes

mendelevium:/opt$ df -i /
Filesystem      Inodes   IUsed IFree IUse% Mounted on
/dev/vda1      1577968 1498012 79956   95% /

mendelevium:/opt$ sudo du -s --inodes *
1	otrs
45887	otrs-3.2.14
45888	otrs-3.2.14.bak
3829	otrs-3.3.14
3962	otrs-4.0.13
14452	otrs-5.0.1
295054	otrs-5.0.13
200809	otrs-5.0.13.bak
99401	otrs-5.0.19
9876	otrs-5.0.2
208854	otrs-5.0.20
4647	otrs-5.0.4
87833	otrs-5.0.6
364330	otrs-5.0.7
1384874	total

tar/gzipping a couple of previous versions has brought inode utilization down to 80%:

mendelevium:/opt$ df -i /
Filesystem      Inodes   IUsed  IFree IUse% Mounted on
/dev/vda1      1577968 1251616 326352   80% /

mendelevium:/opt$ ls -hal *tar.gz
-rw-r--r-- 1 root root 26M Jul 24 17:23 otrs-3.2.14.tar.gz
-rw-r--r-- 1 root root 45M Jul 24 17:32 otrs-5.0.13.bak.tar.gz
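The exact invocations aren't recorded in the task, but commands along these lines would produce the archives above (in this case the original directories were moved aside rather than deleted, as described next):

# archive each old tree; the .tar.gz consumes one inode instead of tens of thousands
sudo tar -czf otrs-3.2.14.tar.gz otrs-3.2.14
sudo tar -czf otrs-5.0.13.bak.tar.gz otrs-5.0.13.bak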

Also, just in case, I created a tmpfs at /mnt/tmp and moved the original directories there instead of deleting them. These will disappear on the next reboot, or if someone unmounts it manually.

mendelevium:/opt$ df -h /mnt/tmp
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           1.5G  1.2G  378M  76% /mnt/tmp

mendelevium:/opt$ ls -hal /mnt/tmp
total 4.0K
drwxrwxrwt 4 root root       80 Jul 24 17:34 .
drwxr-xr-x 3 root root     4.0K Jul 24 17:25 ..
drwxr-xr-x 8 otrs otrs      420 Sep 28  2015 otrs-3.2.14
drwxr-xr-x 9 otrs www-data  460 Sep 14  2016 otrs-5.0.13.bak
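A minimal sketch of that tmpfs setup, with the size assumed from the df output above (tmpfs lives in memory, so the parked directories cost RAM but no disk inodes, and vanish on unmount or reboot):

sudo mkdir -p /mnt/tmp
sudo mount -t tmpfs -o size=1536m tmpfs /mnt/tmp
sudo mv /opt/otrs-3.2.14 /opt/otrs-5.0.13.bak /mnt/tmp/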

Longer term I think we should migrate /opt onto a larger filesystem.
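One possible approach, sketched under the assumption that a larger disk can be attached to the VM (/dev/vdb here is hypothetical), would be to move /opt onto its own volume:

# create a filesystem on the new disk and mount it temporarily
sudo mkfs.ext4 /dev/vdb
sudo mkdir -p /mnt/newopt
sudo mount /dev/vdb /mnt/newopt
# copy /opt preserving ownership, permissions, hardlinks, ACLs and xattrs
sudo rsync -aHAX /opt/ /mnt/newopt/
# with otrs services stopped, swap the mounts (plus a matching /etc/fstab entry)
sudo umount /mnt/newopt
sudo mount /dev/vdb /opt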

herron triaged this task as Medium priority. Jul 24 2017, 6:10 PM

Mentioned in SAL (#wikimedia-operations) [2017-09-16T22:05:45Z] <godog> compress older otrs directories to reclaim inodes - T171490

The growth of used inodes over the past few hours was pretty steep, so I compressed and removed the older otrs versions:

otrs-5.0.13 otrs-5.0.19 otrs-5.0.7 otrs-5.0.6 otrs-3.2.14.bak otrs-3.3.14 otrs-4.0.13 otrs-5.0.1 otrs-5.0.2
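A compress-then-remove loop along these lines would cover the whole set (a sketch; the exact commands weren't logged in SAL):

for d in otrs-5.0.13 otrs-5.0.19 otrs-5.0.7 otrs-5.0.6 \
         otrs-3.2.14.bak otrs-3.3.14 otrs-4.0.13 otrs-5.0.1 otrs-5.0.2; do
    sudo tar -czf "$d.tar.gz" "$d"   # archive first
    sudo rm -rf "$d"                 # then free the directory's inodes
done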

Thanks! From the looks of it, this peaked at 95% at 18:36 on the 16th. There seems to have been a spam campaign that same day which caused some issues, but it is not directly related to this task. OTRS does not currently store anything but logs on the filesystem. Exim will temporarily store emails in case of temporary failures, which causes inode use; that probably happened this time around, but not enough to cause inode depletion.

Also, all ticket data, attachments included, is stored in the database. That is an anti-pattern in itself and is tracked in T138915, but it means that creating a ticket does not consume inodes. Thanks for deleting all that data, I've deleted some myself as well. I am going to mark this as resolved for now; inode and space usage is quite low, and whatever usage was there before was due to leftovers.
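For future triage, a quick way to see how much mail Exim is holding in its on-disk queue (standard exim4 commands; assumed here, not taken from this incident):

# count messages currently queued
sudo exim -bpc
# per-domain summary of the queue
sudo exim -bp | exiqsumm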

akosiaris claimed this task.