
Intermittent connect timeout for CirrusSearch connections
Closed, Resolved · Public

Description

https://logstash.wikimedia.org/goto/57d527a71a5b8b5b95281cdbd67aca07

Recently we switched the job runners from HHVM to PHP 7.2. This switch also changed how CirrusSearch pools SSL connections: from HHVM's internal curl pool to an external nginx instance performing proxying/pooling on localhost on each app server.

Since the switch we've seen a low volume of 504 Gateway Time-out errors in the CirrusSearch application code, typically about 15 per 3 hours but as high as 300 per 3 hours. The overall volume is low thousands of requests per second, so this is a tiny fraction of requests, but they still shouldn't be timing out. Perhaps 2/3 of the timeouts cluster around 1s; the other third vary from 3s to 10+s.

Network Path traveled:

mw app server -> nginx (localhost) -> lvs -> nginx (elastic) -> elasticsearch

                    connect timeout    read timeout
cirrussearch        5-10s              10s-120s
nginx (mw app)      1s                 600s
nginx (elastic)     60s                180s

I've been able to correlate application timeout logs with /var/log/nginx/errors.log on the mw application servers; a typical matching log line is as follows. This suggests our timeout is most likely coming from the nginx->nginx TLS connection with the 1s connect timeout.

2019/07/15 13:36:48 [error] 18363#18363: *133964233 upstream timed out (110: Connection timed out) while SSL handshaking to upstream, client: 127.0.0.1, server: , request: "POST /_msearch HTTP/1.1", upstream: "https://10.2.1.30:9243/_msearch", host: "localhost:14243"

It looks like the old setup had a 5s connect timeout, while the new proxy uses a 1s connect timeout. We can bump the connect timeout on the proxy as an initial fix, though ideally a 1s connect timeout should be plenty.
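
For reference, a minimal sketch of what the relevant piece of the localhost proxy config presumably looks like, using the listen port and upstream address from the error log above; the server block layout and directive placement are assumptions, not the actual puppet-rendered config:

    # Illustrative sketch of the local service proxy on a mw app server (assumed layout).
    server {
        listen 127.0.0.1:14243;

        location / {
            # LVS address for the elastic cluster, per the error log above
            proxy_pass https://10.2.1.30:9243;
            # the 1s connect timeout implicated here; the proposed fix bumps it to 5s
            proxy_connect_timeout 1s;
            proxy_read_timeout    600s;
        }
    }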

Event Timeline

jijiki triaged this task as Normal priority. Jul 15 2019, 2:42 PM
jijiki added a project: serviceops.
jijiki updated the task description.
EBernhardson updated the task description. Jul 15 2019, 2:52 PM

Change 523194 had a related patch set uploaded (by EBernhardson; owner: EBernhardson):
[operations/puppet@production] Increase services proxy connect timeout to 5s

https://gerrit.wikimedia.org/r/523194

Change 523703 had a related patch set uploaded (by Effie Mouzeli; owner: Effie Mouzeli):
[operations/puppet@production] profile:service_proxy: Add more hiera variables

https://gerrit.wikimedia.org/r/523703

jijiki moved this task from Backlog to Doing on the serviceops board. Jul 17 2019, 7:21 AM

Change 523703 merged by Effie Mouzeli:
[operations/puppet@production] profile:service_proxy: Add more hiera variables

https://gerrit.wikimedia.org/r/523703

Change 523955 had a related patch set uploaded (by Effie Mouzeli; owner: Effie Mouzeli):
[operations/puppet@production] hieradata: Set connect_timeout for cirrussearch

https://gerrit.wikimedia.org/r/523955
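
For illustration, the kind of hieradata override this change enables might look roughly like the sketch below; the key names and structure here are hypothetical, not copied from the actual patch:

    # Hypothetical hieradata sketch -- key and structure names are illustrative only.
    profile::services_proxy::listeners:
      - name: search-https          # the cirrussearch listener on localhost:14243
        port: 14243
        timeout: '600s'             # read timeout toward the elastic LVS endpoint
        connect_timeout: '5s'       # raised from the 1s default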

Change 523194 abandoned by EBernhardson:
Increase services proxy connect timeout to 5s

Reason:
in favor of If64ae6fd2e2e5ebc3014773d206bb4f9968df673

https://gerrit.wikimedia.org/r/523194

Mentioned in SAL (#wikimedia-operations) [2019-07-18T10:15:12Z] <jijiki> Disable puppet on services_proxy hosts - T228063

Change 523955 merged by Effie Mouzeli:
[operations/puppet@production] hieradata: Set connect_timeout for cirrussearch

https://gerrit.wikimedia.org/r/523955

Mentioned in SAL (#wikimedia-operations) [2019-07-18T10:37:49Z] <jijiki> enable puppet on services_proxy hosts - T228063

jijiki closed this task as Resolved. Jul 18 2019, 10:40 AM

Let's reopen if the issue persists.

EBernhardson reopened this task as Open. Jul 30 2019, 4:16 PM

Checked back into this and it's looking much better. July 16 had 2500 gateway timeouts per 12 hours; since deploying, the highest 12-hour period is 250 gateway timeouts. Might be worth continuing to look into, but knocking this down an order of magnitude is probably sufficient.

EBernhardson closed this task as Resolved. Jul 30 2019, 4:17 PM

Could you briefly summarise the impact of such a backend timeout for Cirrus? E.g. is there a retry or fallback? Does it affect responses to end users, and if so, is that response non-fatal, non-500, and localised? If it affects things like POST requests or jobs, are they critical or eventually consistent? Thanks :)

It all depends on which code path gets the gateway timeout. Essentially every place Cirrus talks to Elasticsearch can end up erroring out. In terms of the most common code paths:

  • User searches are never retried. Depending on what called the search and what kind of search it was, the user might see an error message, or for secondary searches we might pretend no results exist.
  • Some page updates are retried and some are ignored, depending on what the data update was. Actual content updates get thrown back into the job queue to be retried, but metrics updates used for ranking generally get thrown out.