Toolschecker webservice checks get out of sync, likely from timeouts
Closed, Resolved · Public · 0 Estimated Story Points

Description

Toolschecker has two URLs that check webservice functionality in Toolforge. Both of them are a bit unstable in behavior. It appears that they try to stop the services, but the actual job or k8s replica set ends up still running with a blank service.manifest file. This is not ideal and triggers alerts. It may be as simple as extending some timeout somewhere.

Problem URLs:
http://checker.tools.wmflabs.org/webservice/gridengine
http://checker.tools.wmflabs.org/webservice/kubernetes

Event Timeline

Bstorm created this task.

Just found that there are regular stack traces related to this -- we need a longer timeout:
May 3 06:25:12 tools-sgecron-01 collector-runner[3819]: 2019-05-03 06:25:12,728 Starting webservice for tool dplbot
May 3 06:25:35 tools-sgecron-01 collector-runner[3819]: 2019-05-03 06:25:35,477 Timed out attempting to start webservice for tool dplbot
May 3 06:25:35 tools-sgecron-01 collector-runner[3819]: Traceback (most recent call last):
May 3 06:25:35 tools-sgecron-01 collector-runner[3819]: File "/usr/lib/python3.5/subprocess.py", line 385, in run
May 3 06:25:35 tools-sgecron-01 collector-runner[3819]: stdout, stderr = process.communicate(input, timeout=timeout)
May 3 06:25:35 tools-sgecron-01 collector-runner[3819]: File "/usr/lib/python3.5/subprocess.py", line 801, in communicate
May 3 06:25:35 tools-sgecron-01 collector-runner[3819]: stdout, stderr = self._communicate(input, endtime, timeout)
May 3 06:25:35 tools-sgecron-01 collector-runner[3819]: File "/usr/lib/python3.5/subprocess.py", line 1447, in _communicate
May 3 06:25:35 tools-sgecron-01 collector-runner[3819]: self._check_timeout(endtime, orig_timeout)
May 3 06:25:35 tools-sgecron-01 collector-runner[3819]: File "/usr/lib/python3.5/subprocess.py", line 829, in _check_timeout
May 3 06:25:35 tools-sgecron-01 collector-runner[3819]: raise TimeoutExpired(self.args, orig_timeout)
May 3 06:25:35 tools-sgecron-01 collector-runner[3819]: subprocess.TimeoutExpired: Command '['/usr/bin/sudo', '-i', '-u', 'tools.dplbot', '/usr/bin/webservice', 'restart']' timed out after 15 seconds
May 3 06:25:35 tools-sgecron-01 collector-runner[3819]: During handling of the above exception, another exception occurred:
May 3 06:25:35 tools-sgecron-01 collector-runner[3819]: Traceback (most recent call last):
May 3 06:25:35 tools-sgecron-01 collector-runner[3819]: File "/usr/lib/python3/dist-packages/tools/manifest/webservicemonitor.py", line 183, in _start_webservice
May 3 06:25:35 tools-sgecron-01 collector-runner[3819]: subprocess.check_output(command, timeout=15) # 15 second timeout!
May 3 06:25:35 tools-sgecron-01 collector-runner[3819]: File "/usr/lib/python3.5/subprocess.py", line 316, in check_output
May 3 06:25:35 tools-sgecron-01 collector-runner[3819]: **kwargs).stdout
May 3 06:25:35 tools-sgecron-01 collector-runner[3819]: File "/usr/lib/python3.5/subprocess.py", line 390, in run
May 3 06:25:35 tools-sgecron-01 collector-runner[3819]: stderr=stderr)
May 3 06:25:35 tools-sgecron-01 collector-runner[3819]: subprocess.TimeoutExpired: Command '['/usr/bin/sudo', '-i', '-u', 'tools.dplbot', '/usr/bin/webservice', 'restart']' timed out after 15 seconds
May 3 06:25:35 tools-sgecron-01 collector-runner[3819]: 2019-05-03 06:25:35,544 Service monitor run completed, 0 webservices restarted
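
For context, the monitor hard-codes subprocess.check_output(command, timeout=15). A minimal sketch of the kind of change being suggested, assuming we simply lengthen and handle the timeout (the 30-second value and the function shape are illustrative, not the real webservicemonitor.py code):

  import logging
  import subprocess

  # Hypothetical sketch only -- not the actual webservicemonitor.py code. It
  # shows the shape of the fix discussed above: give the restart more than 15
  # seconds and log a timeout instead of letting TimeoutExpired escape.
  def start_webservice(tool, timeout=30):
      command = ['/usr/bin/sudo', '-i', '-u', 'tools.{}'.format(tool),
                 '/usr/bin/webservice', 'restart']
      try:
          subprocess.check_output(command, timeout=timeout)
          return True
      except subprocess.TimeoutExpired:
          logging.warning('Timed out (%ss) attempting to start webservice for tool %s',
                          timeout, tool)
          return False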

Change 524610 had a related patch set uploaded (by Jhedden; owner: Jhedden):
[operations/puppet@production] toolschecker: check for existing webservice

https://gerrit.wikimedia.org/r/524610

The recent webservice critical status was related to existing webservice instances left running. When both icinga1001.wikimedia.org and icinga2001.wikimedia.org make concurrent requests to the webservice endpoint, they can leave the webservice instance running, causing subsequent checks to fail.

There's still the timeout issue Brooke noted, but hopefully the patch ^ gets us closer to more green on the dashboard.
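
Roughly, the idea is a pre-check along these lines (a Python sketch under my own assumptions; the actual change is in operations/puppet and the helper name here is made up):

  import subprocess

  # Rough sketch of the "check for existing webservice" idea, not the actual
  # Gerrit change: query `webservice status` before starting, so a webservice
  # left behind by the other icinga host is detected instead of collided with.
  def webservice_already_running(tool):
      cmd = ['/usr/bin/sudo', '-i', '-u', 'tools.{}'.format(tool),
             '/usr/bin/webservice', 'status']
      try:
          output = subprocess.check_output(cmd, timeout=15)
      except (subprocess.CalledProcessError, subprocess.TimeoutExpired):
          return False
      return b'is running' in output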

Change 524610 merged by Jhedden:
[operations/puppet@production] toolschecker: check for existing webservice

https://gerrit.wikimedia.org/r/524610

The current configuration is set to check every 1 minute and retry every 1 minute after a failure.

k8s webservice checker run time

end time | icinga host   | seconds ran
13:50:24 | 208.80.154.84 | 23.954
13:50:54 | 208.80.153.74 | 30.088
13:51:24 | 208.80.154.84 | 21.378
13:52:53 | 208.80.153.74 | 89.403
13:53:25 | 208.80.154.84 | 31.383
13:53:54 | 208.80.153.74 | 29.301

sge webservice checker run time

end time | icinga host   | seconds ran
13:50:50 | 208.80.153.74 | 22.996
13:51:20 | 208.80.154.84 | 17.803
13:51:50 | 208.80.153.74 | 21.818
13:52:37 | 208.80.154.84 | 27.836
13:52:50 | 208.80.153.74 | 13.283
13:53:36 | 208.80.154.84 | 18.742

A 1 minute interval feels a bit excessive for this. I think a 5 minute interval would give both icinga instances enough time to properly check the backends, while still maintaining a good window for service interruption notifications.

That sounds like a good idea to me. I look forward to the discussion.

Change 525108 had a related patch set uploaded (by Jhedden; owner: Jhedden):
[operations/puppet@production] icinga: update toolschecker webservice interval

https://gerrit.wikimedia.org/r/525108

Change 525108 merged by Jhedden:
[operations/puppet@production] icinga: update toolschecker webservice interval

https://gerrit.wikimedia.org/r/525108

The webservice checks are getting better, but the kubernetes check ran into a new failure:

Jul 23 16:50:32 tools-checker-03 uwsgi-toolschecker_webservice_kubernetes[7908]: Traceback (most recent call last):
Jul 23 16:50:32 tools-checker-03 uwsgi-toolschecker_webservice_kubernetes[7908]:   File "/usr/bin/webservice", line 169, in <module>
Jul 23 16:50:32 tools-checker-03 uwsgi-toolschecker_webservice_kubernetes[7908]:     start(job, 'Starting webservice')
Jul 23 16:50:32 tools-checker-03 uwsgi-toolschecker_webservice_kubernetes[7908]:   File "/usr/bin/webservice", line 61, in start
Jul 23 16:50:32 tools-checker-03 uwsgi-toolschecker_webservice_kubernetes[7908]:     job.request_start()
Jul 23 16:50:32 tools-checker-03 uwsgi-toolschecker_webservice_kubernetes[7908]:   File "/usr/lib/python2.7/dist-packages/toollabs/webservice/backends/kubernetesbackend.py", line 456, in request_start
Jul 23 16:50:32 tools-checker-03 uwsgi-toolschecker_webservice_kubernetes[7908]:     pykube.Deployment(self.api, self._get_deployment()).create()
Jul 23 16:50:32 tools-checker-03 uwsgi-toolschecker_webservice_kubernetes[7908]:   File "/usr/lib/python2.7/dist-packages/pykube/objects.py", line 76, in create
Jul 23 16:50:32 tools-checker-03 uwsgi-toolschecker_webservice_kubernetes[7908]:     self.api.raise_for_status(r)
Jul 23 16:50:32 tools-checker-03 uwsgi-toolschecker_webservice_kubernetes[7908]:   File "/usr/lib/python2.7/dist-packages/pykube/http.py", line 104, in raise_for_status
Jul 23 16:50:32 tools-checker-03 uwsgi-toolschecker_webservice_kubernetes[7908]:     raise HTTPError(payload["message"])
Jul 23 16:50:32 tools-checker-03 uwsgi-toolschecker_webservice_kubernetes[7908]: pykube.exceptions.HTTPError: client: etcd member https://tools-k8s-etcd-01.tools.eqiad.wmflabs:2379 has no leader
Jul 23 16:50:32 tools-checker-03 uwsgi-toolschecker_webservice_kubernetes[7908]: --------------------------------------------------------------------------------
Jul 23 16:50:32 tools-checker-03 uwsgi-toolschecker_webservice_kubernetes[7908]: ERROR in toolschecker [/var/lib/toolschecker/toolschecker.py:454]:
Jul 23 16:50:32 tools-checker-03 uwsgi-toolschecker_webservice_kubernetes[7908]: webservice kubernetes: error starting
Jul 23 16:50:32 tools-checker-03 uwsgi-toolschecker_webservice_kubernetes[7908]: --------------------------------------------------------------------------------
...
Jul 23 16:53:43 tools-checker-03 uwsgi-toolschecker_webservice_kubernetes[7908]: --------------------------------------------------------------------------------
Jul 23 16:53:43 tools-checker-03 uwsgi-toolschecker_webservice_kubernetes[7908]: ERROR in toolschecker [/var/lib/toolschecker/toolschecker.py:448]:
Jul 23 16:53:43 tools-checker-03 uwsgi-toolschecker_webservice_kubernetes[7908]: webservice kubernetes: found existing webservice running
Jul 23 16:53:43 tools-checker-03 uwsgi-toolschecker_webservice_kubernetes[7908]: --------------------------------------------------------------------------------

After hitting that error toolschecker.py exited, but the k8s webservice was still created. I'll look for an underlying etcd issue and/or add better exception handling to toolschecker.py.
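
A sketch of the kind of exception handling meant here (the function name and the cleanup-in-finally behaviour are assumptions, not the current toolschecker.py code): if the start blows up -- e.g. the etcd "has no leader" error above -- still attempt a stop, so a half-created deployment doesn't trip the "found existing webservice running" error on the next run.

  import logging
  import subprocess

  # Sketch only; the real check lives in /var/lib/toolschecker/toolschecker.py.
  def checked_webservice_cycle(tool):
      cmd = ['/usr/bin/sudo', '-i', '-u', 'tools.{}'.format(tool), '/usr/bin/webservice']
      try:
          subprocess.check_output(cmd + ['start'], timeout=30)
          return True
      except Exception:
          logging.exception('webservice start failed for %s', tool)
          return False
      finally:
          # best-effort cleanup regardless of whether the start succeeded
          subprocess.call(cmd + ['stop'], timeout=30)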

It seems rare, but I've also noticed a few timeouts from SGE: 2019-07-26T19:29:42.700456 Timed out attempting to start webservice (15s)

I think this one could represent a real issue, although probably transient, with the grid scheduler. If it pops up a lot but is transient, we might want to rethink the arbitrary choice of the 15s wait time.

> pykube.exceptions.HTTPError: client: etcd member https://tools-k8s-etcd-01.tools.eqiad.wmflabs:2379 has no leader
> After hitting that error toolschecker.py exited, but the k8s webservice was still created. I'll look for an underlying etcd issue and/or add better exception handling to toolschecker.py.

Whoah! That's a good find. Right now it reports healthy, but it may have been totally on the fritz for a bit without us noticing. Tuesday was when we had that weird neutron/rabbitmq issue where we had to restart things on cloudcontrol1004. I'm not sure that affects this, though.

A quick check shows:

~$ etcdctl -C https://tools-k8s-etcd-02.tools.eqiad.wmflabs:2379 cluster-health
member 3aeee1b1187b0349 is healthy: got healthy result from https://tools-k8s-etcd-01.tools.eqiad.wmflabs:2379
member 815d76414e1d3b20 is healthy: got healthy result from https://tools-k8s-etcd-02.tools.eqiad.wmflabs:2379
member dc1848ea7893bc8b is healthy: got healthy result from https://tools-k8s-etcd-03.tools.eqiad.wmflabs:2379
cluster is healthy

It makes me wonder what it was doing then. Ugh, the firewall drop logging is infuriatingly noisy.

The few I spot checked also lined up with timeouts in the etcd server log:

Jul 25 05:10:09 tools-k8s-etcd-02 etcd[6953]: got unexpected response error (etcdserver: request timed out)

tools-k8s-etcd-01:~$ sudo zgrep -c "got unexpected response error (etcdserver: request timed out)" /var/log/etcd.log*gz
/var/log/etcd.log-20190716.gz:58
/var/log/etcd.log-20190717.gz:67
/var/log/etcd.log-20190718.gz:70
/var/log/etcd.log-20190719.gz:0
/var/log/etcd.log-20190720.gz:0
/var/log/etcd.log-20190721.gz:14
/var/log/etcd.log-20190722.gz:14
/var/log/etcd.log-20190723.gz:37
/var/log/etcd.log-20190724.gz:14
tools-k8s-etcd-02:~$ sudo zgrep -c "got unexpected response error (etcdserver: request timed out)" /var/log/etcd.log*gz
/var/log/etcd.log-20190716.gz:14
/var/log/etcd.log-20190717.gz:67
/var/log/etcd.log-20190718.gz:91
/var/log/etcd.log-20190719.gz:0
/var/log/etcd.log-20190720.gz:0
/var/log/etcd.log-20190721.gz:19
/var/log/etcd.log-20190722.gz:4
/var/log/etcd.log-20190723.gz:40
/var/log/etcd.log-20190724.gz:16
tools-k8s-etcd-03:~$ sudo zgrep -c "got unexpected response error (etcdserver: request timed out)" /var/log/etcd.log*gz
/var/log/etcd.log-20190716.gz:7
/var/log/etcd.log-20190717.gz:33
/var/log/etcd.log-20190718.gz:23
/var/log/etcd.log-20190719.gz:0
/var/log/etcd.log-20190720.gz:0
/var/log/etcd.log-20190721.gz:7
/var/log/etcd.log-20190722.gz:0
/var/log/etcd.log-20190723.gz:0
/var/log/etcd.log-20190724.gz:11

I reset the downtime for both the gridengine and kubernetes checks to last until 2019-09-02 after this paged again at 2019-08-05T17:11.

Change 528292 had a related patch set uploaded (by Jhedden; owner: Jhedden):
[operations/puppet@production] toolschecker: webservice final process status

https://gerrit.wikimedia.org/r/528292

Change 528292 merged by Jhedden:
[operations/puppet@production] toolschecker: Ensure webservice is fully stopped

https://gerrit.wikimedia.org/r/528292

Mentioned in SAL (#wikimedia-cloud) [2019-08-06T13:43:58Z] <jeh> disabling puppet on tools-checker-03 while testing nginx timeouts T221301

Change 528892 had a related patch set uploaded (by Jhedden; owner: Jhedden):
[operations/puppet@production] toolschecker: match nginx and wsgi timeouts

https://gerrit.wikimedia.org/r/528892

Fixed NGINX timeouts to match WSGI and added better status checking after issuing webservice commands.
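
The "confirmed start/stop" lines in the log below come from status checking along these lines (a simplified Python 3 sketch; the function name, poll interval, and 60-second ceiling are assumptions, and the real code in /var/lib/toolschecker/toolschecker.py may differ):

  import time
  import subprocess

  # Sketch of the post-command status confirmation: poll `webservice status`
  # until the expected phrase shows up or we give up.
  def confirm_status(tool, expected, timeout=60, interval=2):
      cmd = ['/usr/bin/sudo', '-i', '-u', 'tools.{}'.format(tool),
             '/usr/bin/webservice', 'status']
      deadline = time.time() + timeout
      while time.time() < deadline:
          output = subprocess.check_output(cmd, timeout=15).decode('utf-8', 'replace')
          if expected in output:
              return True  # e.g. "is running" after start, "is not running" after stop
          time.sleep(interval)
      return False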

With extra logging enabled, here's the full process:

17:01:20 uwsgi-toolschecker_webservice_gridengine[25724]: --------------------------------------------------------------------------------
17:01:20 uwsgi-toolschecker_webservice_gridengine[25724]: INFO in toolschecker [/var/lib/toolschecker/toolschecker.py:471]:
17:01:20 uwsgi-toolschecker_webservice_gridengine[25724]: webservice gridengine: confirmed status
17:01:20 uwsgi-toolschecker_webservice_gridengine[25724]: --------------------------------------------------------------------------------
17:01:20 uwsgi-toolschecker_webservice_gridengine[25724]: --------------------------------------------------------------------------------
17:01:20 uwsgi-toolschecker_webservice_gridengine[25724]: INFO in toolschecker [/var/lib/toolschecker/toolschecker.py:455]:
17:01:20 uwsgi-toolschecker_webservice_gridengine[25724]: webservice gridengine: start
17:01:20 uwsgi-toolschecker_webservice_gridengine[25724]: --------------------------------------------------------------------------------
17:01:23 uwsgi-toolschecker_webservice_gridengine[25724]: --------------------------------------------------------------------------------
17:01:23 uwsgi-toolschecker_webservice_gridengine[25724]: INFO in toolschecker [/var/lib/toolschecker/toolschecker.py:471]:
17:01:23 uwsgi-toolschecker_webservice_gridengine[25724]: webservice gridengine: confirmed start
17:01:23 uwsgi-toolschecker_webservice_gridengine[25724]: --------------------------------------------------------------------------------
17:01:23 uwsgi-toolschecker_webservice_gridengine[25724]: --------------------------------------------------------------------------------
17:01:23 uwsgi-toolschecker_webservice_gridengine[25724]: INFO in toolschecker [/var/lib/toolschecker/toolschecker.py:505]:
17:01:23 uwsgi-toolschecker_webservice_gridengine[25724]: webservice gridengine: Response at 0
17:01:23 uwsgi-toolschecker_webservice_gridengine[25724]: --------------------------------------------------------------------------------
17:01:23 uwsgi-toolschecker_webservice_gridengine[25724]: --------------------------------------------------------------------------------
17:01:23 uwsgi-toolschecker_webservice_gridengine[25724]: INFO in toolschecker [/var/lib/toolschecker/toolschecker.py:455]:
17:01:23 uwsgi-toolschecker_webservice_gridengine[25724]: webservice gridengine: stop
17:01:23 uwsgi-toolschecker_webservice_gridengine[25724]: --------------------------------------------------------------------------------
17:01:35 uwsgi-toolschecker_webservice_gridengine[25724]: --------------------------------------------------------------------------------
17:01:35 uwsgi-toolschecker_webservice_gridengine[25724]: INFO in toolschecker [/var/lib/toolschecker/toolschecker.py:471]:
17:01:35 uwsgi-toolschecker_webservice_gridengine[25724]: webservice gridengine: confirmed stop
17:01:35 uwsgi-toolschecker_webservice_gridengine[25724]: --------------------------------------------------------------------------------
16:58:02 uwsgi-toolschecker_webservice_kubernetes[25754]: --------------------------------------------------------------------------------
16:58:02 uwsgi-toolschecker_webservice_kubernetes[25754]: INFO in toolschecker [/var/lib/toolschecker/toolschecker.py:471]:
16:58:02 uwsgi-toolschecker_webservice_kubernetes[25754]: webservice kubernetes: confirmed status
16:58:02 uwsgi-toolschecker_webservice_kubernetes[25754]: --------------------------------------------------------------------------------
16:58:02 uwsgi-toolschecker_webservice_kubernetes[25754]: --------------------------------------------------------------------------------
16:58:02 uwsgi-toolschecker_webservice_kubernetes[25754]: INFO in toolschecker [/var/lib/toolschecker/toolschecker.py:455]:
16:58:02 uwsgi-toolschecker_webservice_kubernetes[25754]: webservice kubernetes: start
16:58:02 uwsgi-toolschecker_webservice_kubernetes[25754]: --------------------------------------------------------------------------------
16:58:18 uwsgi-toolschecker_webservice_kubernetes[25754]: --------------------------------------------------------------------------------
16:58:18 uwsgi-toolschecker_webservice_kubernetes[25754]: INFO in toolschecker [/var/lib/toolschecker/toolschecker.py:471]:
16:58:18 uwsgi-toolschecker_webservice_kubernetes[25754]: webservice kubernetes: confirmed start
16:58:18 uwsgi-toolschecker_webservice_kubernetes[25754]: --------------------------------------------------------------------------------
16:58:32 uwsgi-toolschecker_webservice_kubernetes[25754]: --------------------------------------------------------------------------------
16:58:32 uwsgi-toolschecker_webservice_kubernetes[25754]: INFO in toolschecker [/var/lib/toolschecker/toolschecker.py:505]:
16:58:32 uwsgi-toolschecker_webservice_kubernetes[25754]: webservice kubernetes: Response at 7
16:58:32 uwsgi-toolschecker_webservice_kubernetes[25754]: --------------------------------------------------------------------------------
16:58:32 uwsgi-toolschecker_webservice_kubernetes[25754]: --------------------------------------------------------------------------------
16:58:32 uwsgi-toolschecker_webservice_kubernetes[25754]: INFO in toolschecker [/var/lib/toolschecker/toolschecker.py:455]:
16:58:32 uwsgi-toolschecker_webservice_kubernetes[25754]: webservice kubernetes: stop
16:58:32 uwsgi-toolschecker_webservice_kubernetes[25754]: --------------------------------------------------------------------------------
16:58:34 uwsgi-toolschecker_webservice_kubernetes[25754]: --------------------------------------------------------------------------------
16:58:34 uwsgi-toolschecker_webservice_kubernetes[25754]: INFO in toolschecker [/var/lib/toolschecker/toolschecker.py:471]:
16:58:34 uwsgi-toolschecker_webservice_kubernetes[25754]: webservice kubernetes: confirmed stop
16:58:34 uwsgi-toolschecker_webservice_kubernetes[25754]: --------------------------------------------------------------------------------

Side note: I'm not convinced this is the right way to monitor the webservice process. Instead of having the URL trigger and wait for webservice creation, I think it could be better handled with a dedicated "canary"-type service that icinga queries for status.
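
To make that concrete, one hypothetical shape: a separate job performs the real start/stop exercise on its own schedule and records the outcome, and the endpoint icinga polls only reads the recorded result. Everything below (path, route, freshness window) is made up for illustration:

  import json
  import time

  from flask import Flask, Response

  app = Flask(__name__)
  RESULT_FILE = '/var/lib/toolschecker/webservice-canary.json'  # hypothetical path

  @app.route('/webservice/canary')
  def webservice_canary():
      # The start/stop cycle happens elsewhere; this endpoint only reports the
      # last recorded outcome, so slow webservice operations can never make the
      # icinga HTTP check itself time out.
      with open(RESULT_FILE) as f:
          result = json.load(f)
      fresh = time.time() - result['timestamp'] < 900  # stale after 15 minutes
      ok = fresh and result.get('ok', False)
      return Response('OK' if ok else 'FAIL', status=200 if ok else 503)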

Change 528897 had a related patch set uploaded (by Jhedden; owner: Jhedden):
[operations/puppet@production] toolschecker: check status for webservice tasks

https://gerrit.wikimedia.org/r/528897

Ran into a new failure scenario on gridengine; it might be a false positive, but it did cause the webservice to remain running:

queue instance "webgrid-lighttpd@tools-sgewebgrid-lighttpd-0923.tools.eqiad.wmflabs" dropped because it is overloaded: np_load_avg=2.757500 (= 2.757500 + 0.50 * 0.000000 with nproc=4) >= 2.75

And an instance where the webservice wasn't stopped on k8s:

Aug  7 18:54:08 uwsgi-toolschecker_webservice_kubernetes[29541]: --------------------------------------------------------------------------------
Aug  7 18:54:08 uwsgi-toolschecker_webservice_kubernetes[29541]: INFO in toolschecker [/var/lib/toolschecker/toolschecker.py:499]:
Aug  7 18:54:08 uwsgi-toolschecker_webservice_kubernetes[29541]: webservice kubernetes: Response at 24
Aug  7 18:54:08 uwsgi-toolschecker_webservice_kubernetes[29541]: --------------------------------------------------------------------------------
Aug  7 18:54:08 uwsgi-toolschecker_webservice_kubernetes[29541]: --------------------------------------------------------------------------------
Aug  7 18:54:08 uwsgi-toolschecker_webservice_kubernetes[29541]: INFO in toolschecker [/var/lib/toolschecker/toolschecker.py:455]:
Aug  7 18:54:08 uwsgi-toolschecker_webservice_kubernetes[29541]: webservice kubernetes: stop
Aug  7 18:54:08 uwsgi-toolschecker_webservice_kubernetes[29541]: --------------------------------------------------------------------------------
Aug  7 18:55:02 uwsgi-toolschecker_webservice_kubernetes[29541]: --------------------------------------------------------------------------------
Aug  7 18:55:02 uwsgi-toolschecker_webservice_kubernetes[29541]: ERROR in toolschecker [/var/lib/toolschecker/toolschecker.py:470]:
Aug  7 18:55:02 uwsgi-toolschecker_webservice_kubernetes[29541]: webservice kubernetes: stop failed status check expected:is not running found:Your webservice of type php7.2 is running
Aug  7 18:55:02 uwsgi-toolschecker_webservice_kubernetes[29541]: --------------------------------------------------------------------------------
Aug  7 18:55:02 uwsgi-toolschecker_webservice_kubernetes[29541]: --------------------------------------------------------------------------------
Aug  7 18:55:02 uwsgi-toolschecker_webservice_kubernetes[29541]: ERROR in toolschecker [/var/lib/toolschecker/toolschecker.py:512]:
Aug  7 18:55:02 uwsgi-toolschecker_webservice_kubernetes[29541]: webservice kubernetes: verification failed
Aug  7 18:55:02 uwsgi-toolschecker_webservice_kubernetes[29541]: --------------------------------------------------------------------------------

We discussed the matter and felt as a team that these checks are not the right way to monitor the customer experience of Toolforge's tools. We decided to remove the icinga monitors and create a subtask to implement a more sensible monitor for this.

Change 528892 abandoned by Jhedden:
toolschecker: match nginx and wsgi timeouts

https://gerrit.wikimedia.org/r/528892

Change 528897 abandoned by Jhedden:
toolschecker: check status for webservice tasks

https://gerrit.wikimedia.org/r/528897

Change 533310 had a related patch set uploaded (by Jhedden; owner: Jhedden):
[operations/puppet@production] toolschecker: remove webservice grid and k8s check

https://gerrit.wikimedia.org/r/533310

Change 533310 merged by Jhedden:
[operations/puppet@production] toolschecker: remove webservice grid and k8s check

https://gerrit.wikimedia.org/r/533310

Icinga checks for the webservice have been removed.