On Kubernetes workers, ferm sometimes fails to restart (such restarts are triggered e.g. by Puppet when a central ferm macro gets updated). One example:
    Jan 03 19:30:08 mw1465 systemd[1]: Stopped ferm firewall configuration.
    Jan 03 19:30:08 mw1465 systemd[1]: Starting ferm firewall configuration...
    Jan 03 19:30:08 mw1465 ferm[1868235]: Starting Firewall: ferm
    Jan 03 19:30:08 mw1465 ferm[1868274]: Another app is currently holding the xtables lock. Perhaps you want to use the -w option?
    Jan 03 19:30:08 mw1465 ferm[1868238]: Failed to run /usr/sbin/iptables-legacy-restore
    Jan 03 19:30:08 mw1465 ferm[1868238]: Firewall rules rolled back.
    Jan 03 19:30:08 mw1465 ferm[1868281]: failed!
    Jan 03 19:30:08 mw1465 systemd[1]: ferm.service: Main process exited, code=exited, status=1/FAILURE
    Jan 03 19:30:08 mw1465 systemd[1]: ferm.service: Failed with result 'exit-code'.
    Jan 03 19:30:08 mw1465 systemd[1]: Failed to start ferm firewall configuration.
    Jan 08 15:18:08 mw1465 systemd[1]: Starting ferm firewall configuration...
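The error comes from iptables contending for the xtables lock; on a Kubernetes worker the most likely holder is kube-proxy or the CNI agent rewriting rules at the same moment (an assumption, not verified here). A quick way to confirm who holds the lock while the restart is failing (assuming the default lock file path):

    # The lock is an flock() on /run/xtables.lock, so its holder has the file open:
    fuser -v /run/xtables.lock
    # or, equivalently:
    lsof /run/xtables.lock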
These failures do not recover automatically on the next Puppet run, apparently because ferm-status does not detect this error condition.
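One way to make them recover would be for ferm-status to also treat a failed ferm.service as needing a refresh. A minimal sketch of such an extra check, assuming the usual convention that a non-zero exit from ferm-status makes the next Puppet run restart ferm:

    #!/bin/sh
    # Sketch only: report "needs refresh" when the unit itself has failed,
    # since in that case the rules were rolled back even though the generated
    # ruleset on disk is unchanged.
    if systemctl --quiet is-failed ferm.service; then
        echo "ferm.service is in failed state, ruleset may be stale" >&2
        exit 1
    fi
    # ... existing comparison of the running ruleset against the ferm config ...
    exit 0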
We could explore whether there is a way to make ferm pass -w to iptables-save/iptables-restore (from a quick look such an option does not exist; the ferm sources need a closer look).
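Until that exists, a possible stop-gap (a sketch only, assuming ferm keeps invoking the restore binary by the absolute path seen in the log above, and that the installed iptables is >= 1.6.2, where iptables-restore learned --wait) would be to divert the binary and shim in --wait so the restore blocks on the xtables lock instead of failing:

    # Divert the real binary; dpkg-divert keeps the diversion across package upgrades
    dpkg-divert --add --rename \
        --divert /usr/sbin/iptables-legacy-restore.real /usr/sbin/iptables-legacy-restore

    # Thin wrapper that always waits for the xtables lock
    printf '#!/bin/sh\nexec /usr/sbin/iptables-legacy-restore.real --wait "$@"\n' \
        > /usr/sbin/iptables-legacy-restore
    chmod 755 /usr/sbin/iptables-legacy-restore

The IPv6 variant (and iptables-legacy-save, if it turns out to be affected as well) would need the same treatment, and a proper option in ferm itself would clearly be preferable.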