
Alert "kubelet operational latencies"
Closed, Resolved · Public


For the past two days, the following alerts have been flooding #wikimedia-operations twice per day, each episode lasting about three hours:

PROBLEM - kubelet operational latencies on kubernetes1001 is CRITICAL: instance=kubernetes1001.eqiad.wmnet

PROBLEM - kubelet operational latencies on kubernetes2004 is CRITICAL: instance=kubernetes2004.codfw.wmnet

PROBLEM - kubelet operational latencies on kubernetes1003 is CRITICAL: instance=kubernetes1003.eqiad.wmnet

PROBLEM - kubelet operational latencies on kubernetes2001 is CRITICAL: instance=kubernetes2001.codfw.wmnet

Thresholds in Puppet (source): 300 ms (warning), 450 ms (critical).
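For reference, a check along these lines would fire when per-instance kubelet operation latency crosses those thresholds. This is a hedged sketch: the metric name (the pre-1.14 `kubelet_runtime_operations_latency_microseconds` summary) and the quantile are assumptions, and the actual Puppet-defined query may differ.

```promql
# Hypothetical sketch of the critical condition, not the actual Puppet check:
# 95th-percentile kubelet operation latency per instance, microseconds -> ms,
# compared against the 450 ms critical threshold.
kubelet_runtime_operations_latency_microseconds{quantile="0.95"} / 1000 > 450
```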


24h.png (584×978 px, 129 KB) — Last 24 hours
7d.png (634×1 px, 118 KB) — Last 7 days
7d-both.png (648×1 px, 119 KB) — Eqiad/Codfw (last 7 days)

Event Timeline

Minor suggestion: perhaps we could increase the alert threshold if operations aren't actually affected at these levels. Quite often the kubelet will sit right at the threshold and flap alerts.

Mentioned in SAL (#wikimedia-operations) [2019-04-02T16:00:23Z] <mutante> icinga - schedule (30d) downtime for kubernetes operational latencies alerts (T219696) on kubernetes1004

akosiaris closed this task as Resolved. (Edited Apr 3 2019, 7:54 PM)

Culprit identified.

On Thu Mar 28 15:07:55 2019 a new version of the eventgate-analytics chart was deployed to both codfw and eqiad. That new version introduced a different readinessProbe. That probe was added in 66a62a59a6a7d050aa8a. This is a valid use case and pattern and was thoroughly discussed before being introduced. That form of the readiness probe uses the exec_sync operation. The latencies across all hosts increased for this operation, from ~0 to ~2.5s. This is unknown ground for us, but the value doesn't seem unreasonable given that it involves producing a test event to kafka. I think that bacbc62d90 is indeed the way to go on this one. Thanks @crusnov.