Data access guidelines read and acknowledged.
Tue, Mar 10
Yes, this is totally something I can tackle. It meshes well with the work I'm doing on converting ArcLamp to use Swift for its storage. (That task is slightly simplified by now knowing that X-Delete-After exists.)
Feb 18 2020
"Better" depends on what's being measured. CLOCK_MONOTONIC will always move forward, at a rate that's designed to mimic the passage of time in the real world. This means the NTP daemon will speed it up or slow it down as it notices the system clock drifting away from its reference time sources. (But unlike CLOCK_REALTIME, it will never be moved backwards, and will never "jump" due to a leap second or the user manually adjusting the system time.) If we're measuring something happening in the real world (e.g. waiting on the network or a hard drive), this is going to be the most accurate measure.
Thinking about this a bit further, there might be cases where we want access to CLOCK_MONOTONIC_RAW instead of CLOCK_MONOTONIC, which points us toward building our own extension.
Following up on our discussion at today's team meeting, I looked at the linked PHP commit. hrtime() is just a wrapper around clock_gettime(CLOCK_MONOTONIC). This could rather trivially be backported into a PHP C extension for use with earlier versions.
Feb 14 2020
I submitted a patch which I *think* does what's needed to create the user, less the private keys. I don't know if there's more to it than this, but hopefully it's a starting point.
Feb 12 2020
Looking at yesterday's (2020-02-11) output, it was about 8 GB of (uncompressed) logs and 14 MB of SVGs, and about 800 files total. We can control the sampling interval to regulate how big these get, so let's assume it's relatively constant. I'll have to check if there's a reason we don't compress the logs; I feel like we should, which would dramatically reduce this. (I just now tried gzip -1 on one set of logs, and they went from 4 GB to 479 MB.)
Feb 10 2020
Feb 8 2020
Well, now we know the alert we set up last week is working. :)
Feb 7 2020
puppetdb on deployment-puppetdb03 was killed by the kernel OOM killer at Feb 7 09:50:29, per syslog. I just now ran systemctl start puppetdb on that host, to fix the puppet issues in beta.
Feb 3 2020
I think the consensus is that this should be running via a better resource scheduler than cron in a VM.
Jan 21 2020
Jan 13 2020
I was able to request a cloak after the restart, so I think that fixed it.