
Ensure that WDQS query throttling does not interfere with federation
Closed, Resolved · Public · 3 Estimated Story Points

Description

When we exposed the 3 experimental endpoints to test the first version of the graph split, we disabled query throttling to avoid impacting the various analyses we had to run to evaluate the impact of the split.
While analyzing the behavior of federated queries, we then realized that this throttling mechanism might have a negative impact by causing wdqs nodes to throttle each other.

This ticket is about finding a plan to ensure that query throttling does not interfere with federation.

A simple approach would be for the wdqs machine receiving the traffic to be responsible for throttling the client; subsequent queries made internally as part of federation would be un-throttled. Nodes serving federated results to other nodes would still remain protected by the frontend node answering the client.

To achieve this we need to detect when a query is emitted from another query service and craft a header at the nginx level to inform the throttling servlet that it should not be activated.
Such headers exist, but sadly the throttling filter relies on the existing X-BIGDATA-READ-ONLY header, which has another purpose and so cannot be repurposed in our context (it would be too dangerous).

One approach could be to use a new header, X-Disable-Throttling, dedicated to this purpose; the nginx settings would have to be adapted to set X-Disable-Throttling when the query is emitted from another blazegraph node. Unfortunately, local requests made directly on the blazegraph port (updates) would then be prone to throttling, and the clients making them (streaming-updater-consumer, data import scripts) would have to be adapted to set this header.

Another approach is to adapt the throttling servlet and change how it is configured, adding a new config option disable-throttling-if-header, such that a request with:

  • X-BIGDATA-READ-ONLY: 1 and X-Disable-Throttling: true would disable throttling
  • X-BIGDATA-READ-ONLY: 1 only would enable throttling
  • a request without any of these headers would not enable throttling
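The decision table above can be sketched as follows. This is an illustration only; the function name and header values are hypothetical and do not reflect the throttling servlet's actual API (note that the behavior of X-Disable-Throttling alone, without X-BIGDATA-READ-ONLY, is left unspecified by the proposal and treated here as "no throttling"):

```python
# Illustrative sketch of the proposed decision table; names and values
# are hypothetical, not the actual throttling servlet API.

def should_throttle(headers: dict) -> bool:
    """Return True when the request should go through the throttling filter."""
    read_only = headers.get("X-BIGDATA-READ-ONLY") == "1"
    disabled = headers.get("X-Disable-Throttling") == "true"
    if read_only and disabled:
        # X-BIGDATA-READ-ONLY: 1 plus X-Disable-Throttling: true -> disabled
        return False
    if read_only:
        # X-BIGDATA-READ-ONLY: 1 alone -> throttling enabled
        return True
    # Neither header -> throttling not enabled
    return False
```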

AC:

  • decide on the approach
  • blazegraph does not throttle itself when running federated queries

Event Timeline

dcausse renamed this task from Ensure that WDQS query throttling do not interfere with federation to Ensure that WDQS query throttling does not interfere with federation. Apr 5 2024, 3:29 PM
Gehel triaged this task as High priority. Apr 15 2024, 1:22 PM
Gehel set the point value for this task to 3. Apr 15 2024, 3:43 PM
TJones removed the point value for this task.
TJones set the point value for this task to 3.

The second option, making throttling conditional on X-BIGDATA-READ-ONLY, makes sense to me. It's perhaps a little awkward to make generic and document, but it shouldn't be too bad.

Change #1041789 had a related patch set uploaded (by Ebernhardson; author: Ebernhardson):

[wikidata/query/rdf@master] Use a simple expression syntax for defining throttling headers

https://gerrit.wikimedia.org/r/1041789

Change #1041789 merged by jenkins-bot:

[wikidata/query/rdf@master] Use a simple expression syntax for defining throttling headers

https://gerrit.wikimedia.org/r/1041789

We now have a flexible way to define whether throttling should be enabled based on the presence or absence of various http headers, and we need to define which headers will be used. One concern with an X-Disable-Throttling header is that we need some way to ensure the header cannot be provided by arbitrary users, or we need to start getting more complicated with passing secrets around to ensure only requests with the secret token can disable throttling.

It looks like all requests that come through varnish have an x-varnish: <number> header. The expression x-bigdata-read-only && x-varnish would only throttle requests that came through the network edges. That might be reasonable for our use case?
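A minimal sketch of how such a presence-based expression could be evaluated (the real grammar implemented by the merged patch may differ; this only illustrates the idea of `&&`-joined header-presence terms):

```python
# Minimal sketch of evaluating a presence-based header expression such as
# "x-bigdata-read-only && x-varnish". Illustrative only; the grammar in the
# actual merged patch may differ.

def evaluate(expression: str, headers: dict) -> bool:
    """True when every '&&'-joined term names a header present in the request."""
    normalized = {name.lower() for name in headers}
    return all(term.strip().lower() in normalized
               for term in expression.split("&&"))
```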

I did some testing and, sadly, when a wdqs node makes a query to https://query.wikidata.org it hits varnish again.
From wdqs1020 to https://query.wikidata.org (echo 'SELECT ?test_dcausse { ?test_dcausse ?p ?o . } LIMIT 1' | curl -f -s --data-urlencode query@- https://query.wikidata.org/sparql?format=json):

"x-request-id": "b34bb930-ef85-4b23-956e-7dcb11f0f7ec",
"content-length": "99",
"x-forwarded-proto": "http",
"x-client-port": "40256",
"x-bigdata-max-query-millis": "60000",
"x-wmf-nocookies": "1",
"x-client-ip": "2620:0:861:10a:10:64:131:24",
"x-varnish": "800949377",
"x-forwarded-for": "2620:0:861:10a:10:64:131:24\\, 10.64.0.79\\, 2620:0:861:10a:10:64:131:24",
"x-requestctl": "",
"x-cdis": "pass",
"accept": "*/*",
"x-real-ip": "2620:0:861:10a:10:64:131:24",
"via-nginx": "1",
"x-bigdata-read-only": "yes",
"host": "query.wikidata.org",
"content-type": "application/x-www-form-urlencoded",
"connection": "close",
"x-envoy-expected-rq-timeout-ms": "65000",
"x-connection-properties": "H2=1; SSR=0; SSL=TLSv1.3; C=TLS_AES_256_GCM_SHA384; EC=UNKNOWN;",
"user-agent": "curl/7.74.0"

which is very similar to what we see when querying from outside the network:

"x-request-id": "3380f86f-99bc-4f0a-ac74-48e60317836d",
"content-length": "85",
"x-forwarded-proto": "http",
"x-client-port": "55334",
"x-bigdata-max-query-millis": "60000",
"x-wmf-nocookies": "1",
"x-client-ip": "redacted",
"x-varnish": "512603614",
"x-forwarded-for": "redacted\\, 10.136.1.11\\, 2620:0:861:10e:10:64:135:23",
"x-requestctl": "",
"x-cdis": "pass",
"accept": "*/*",
"x-real-ip": "2620:0:861:10e:10:64:135:23",
"via-nginx": "1",
"x-bigdata-read-only": "yes",
"host": "query.wikidata.org",
"content-type": "application/x-www-form-urlencoded",
"connection": "close",
"x-envoy-expected-rq-timeout-ms": "65000",
"x-connection-properties": "H2=1; SSR=0; SSL=TLSv1.3; C=TLS_AES_256_GCM_SHA384; EC=UNKNOWN;",
"user-agent": "curl/7.81.0"

If querying lvs via wdqs.discovery.wmnet directly, we might have what we need (echo 'SELECT ?lvs_eqiad_test_dcausse {?lvs_eqiad_test_dcausse ?p ?o .} LIMIT 1' | curl -v -f -s --data-urlencode query@- https://wdqs.discovery.wmnet/sparql?format=json):

"x-real-ip": "2620:0:861:10a:10:64:131:24",
"x-request-id": "ef9b0e66-3b6f-48ae-a36f-cb1e67f93950",
"content-length": "110",
"x-forwarded-proto": "http",
"x-bigdata-read-only": "yes",
"host": "wdqs.discovery.wmnet",
"x-bigdata-max-query-millis": "60000",
"content-type": "application/x-www-form-urlencoded",
"connection": "close",
"x-envoy-expected-rq-timeout-ms": "65000",
"x-forwarded-for": "2620:0:861:10a:10:64:131:24",
"user-agent": "curl/7.74.0",
"accept": "*/*"

Hitting lvs might require a mapping like https://query-main.wikidata.org -> https://wdqs-main.discovery.wmnet, which I believe could be possible using ServiceRegistry#addAlias( "https://wdqs-main.discovery.wmnet/sparql", "https://query-main.wikidata.org/sparql").
This could be done by adapting the syntax of the allow-list to enable setting aliases:
service_url[,list of aliases], e.g. https://wdqs-main.discovery.wmnet/sparql,https://query-main.wikidata.org/sparql. WikibaseContextListener#loadAllowlist could be adapted to support this syntax and call addAlias() on the service registry.
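A sketch of the proposed allowlist line syntax (loadAllowlist/addAlias are the Java methods named above; this Python mirror is for illustration only):

```python
# Hypothetical parser for the proposed allowlist line syntax
# "service_url[,list of aliases]". Illustrative only; the real
# implementation lives in WikibaseContextListener#loadAllowlist (Java).

def parse_allowlist_line(line: str):
    """Split one allowlist line into (service_url, [aliases])."""
    parts = [part.strip() for part in line.split(",") if part.strip()]
    service_url, *aliases = parts
    return service_url, aliases
```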

Additionally, we probably want to exclude *.wmnet hosts found in the allow-list from org.wikidata.query.rdf.blazegraph.ProxiedHttpConnectionFactory.
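The kind of hostname matching this exclusion would need can be sketched as follows (a Python illustration under the assumption that exclusion is by hostname suffix; the actual change would live in ProxiedHttpConnectionFactory):

```python
# Sketch of excluding internal *.wmnet hosts from the forward proxy.
# Illustrative only; assumes suffix-based matching of the service hostname.
from urllib.parse import urlparse

def is_proxied(service_url: str, excluded_suffixes=(".wmnet",)) -> bool:
    """Internal hosts are contacted directly; everything else goes via the proxy."""
    host = urlparse(service_url).hostname or ""
    return not host.endswith(excluded_suffixes)
```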

A drawback is that hitting lvs from within the same lvs will hit localhost. This is not a problem because the lvs endpoint should be different in the context of the graph split, but a malformed query federating to the same lvs might possibly starve if the server is busy. I'm not sure whether we have to worry about this or not... a query federating to itself does not make much sense.

After discussing this with Erik we have a rough plan:

  • add a new lvs endpoint dedicated to internal federation, targeting a new port opened by nginx
  • add a new port in the nginx config for which we add the X-Disable-Throttling + x-bigdata-read-only headers to the request forwarded to blazegraph
  • use the blazegraph service alias feature to map https://query-main.wikidata.org/sparql -> https://wdqs-main.discovery.wmnet:$NEW_PORT/sparql
  • adapt ProxiedHttpConnectionFactory to allow the bypass of *.wmnet hostnames
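The nginx side of this plan could look roughly like the following config fragment (the port number and upstream address are placeholders, not the production values):

```nginx
# Hypothetical dedicated port for internal federation traffic; the real
# port and blazegraph upstream address will differ in production.
server {
    listen 3031;

    location /sparql {
        # Mark the request as internal federation so the throttling
        # filter skips it, and keep the instance read-only.
        proxy_set_header X-Disable-Throttling 1;
        proxy_set_header X-BIGDATA-READ-ONLY yes;
        proxy_pass http://127.0.0.1:9999;
    }
}
```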

Change #1047985 had a related patch set uploaded (by DCausse; author: DCausse):

[wikidata/query/rdf@master] Add http.proxyExcludedHost to exclude hosts from being proxied

https://gerrit.wikimedia.org/r/1047985

Change #1047986 had a related patch set uploaded (by DCausse; author: DCausse):

[wikidata/query/rdf@master] Add support for declaring service aliases

https://gerrit.wikimedia.org/r/1047986

Change #1048038 had a related patch set uploaded (by DCausse; author: DCausse):

[operations/puppet@production] [WIP] wdqs: allow to configure internal federated enpoints

https://gerrit.wikimedia.org/r/1048038

Change #1048485 had a related patch set uploaded (by DCausse; author: DCausse):

[operations/puppet@production] [DNM] wdqs: enable throttling only for requests coming from varnish

https://gerrit.wikimedia.org/r/1048485

As discussed in the meeting, you can rely on the X-Client-IP header being present to tell CDN requests apart from internal requests.

Of course you could use the source IP to validate it as well; the CDN source IPs are listed in hieradata/common.yaml under cache_hosts.
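The source-IP validation could be sketched like this (the networks below are placeholders; the real list lives in hieradata/common.yaml under cache_hosts):

```python
# Sketch of validating a request's source IP against the CDN hosts.
# The network values used in examples are placeholders, not the real
# cache_hosts entries from hieradata/common.yaml.
import ipaddress

def is_from_cdn(source_ip: str, cache_hosts) -> bool:
    """True when source_ip falls inside one of the configured CDN networks."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in ipaddress.ip_network(net) for net in cache_hosts)
```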

Change #1047985 merged by jenkins-bot:

[wikidata/query/rdf@master] Add http.proxyExcludedHost to exclude hosts from being proxied

https://gerrit.wikimedia.org/r/1047985

Change #1047986 merged by jenkins-bot:

[wikidata/query/rdf@master] Add support for declaring service aliases

https://gerrit.wikimedia.org/r/1047986

@Vgutierrez thanks for the help!

To sum up what we discussed:

  • We can use the X-Client-IP header to identify external requests
  • Internal federation requests should ideally not go back to lvs (it was pointed out that perhaps envoy could act as a load balancer for this)

tagging serviceops for help on envoy to see if it can be used as a load balancer to balance the internal requests made from one blazegraph cluster to another without using lvs.

Let's split the envoy vs LVS to another ticket, since it isn't strictly related to throttling: T368972

Removing ServiceOps from this ticket and adding them to T368972.

Change #1054387 had a related patch set uploaded (by Ebernhardson; author: Ebernhardson):

[wikidata/query/deploy@master] deploy version 0.3.144

https://gerrit.wikimedia.org/r/1054387

Change #1054387 merged by Ryan Kemper:

[wikidata/query/deploy@master] deploy version 0.3.144

https://gerrit.wikimedia.org/r/1054387

Change #1048038 merged by Ryan Kemper:

[operations/puppet@production] wdqs: allow to configure internal federated endpoints

https://gerrit.wikimedia.org/r/1048038

Change #1048485 merged by Ryan Kemper:

[operations/puppet@production] wdqs: enable throttling only for reqs from the CDN

https://gerrit.wikimedia.org/r/1048485

Change #1054393 had a related patch set uploaded (by Ryan Kemper; author: Ryan Kemper):

[operations/puppet@production] wdqs: map lines missing trailing ;

https://gerrit.wikimedia.org/r/1054393

Change #1054393 merged by Ryan Kemper:

[operations/puppet@production] wdqs: map lines missing trailing ;

https://gerrit.wikimedia.org/r/1054393

After merging the two patches (and the semicolon fix above) we were able to spot a query that had X-Disable-Throttling set as expected:

Accept: */*
Host: localhost
X-Real-IP: ::1
X-Forwarded-For: ::1
X-Forwarded-Proto: http
X-BIGDATA-MAX-QUERY-MILLIS: 60000
X-BIGDATA-READ-ONLY: yes
X-Disable-Throttling: 1
Connection: close
User-Agent: wmf-prometheus/prometheus-blazegraph-exporter (root@wikimedia.org)
Accept-Encoding: gzip, deflate

@dcausse Erik and I ran a test of a federated request from main -> scholarly. Meanwhile we were on the scholarly host (wdqs1023) running a tcpdump (sudo tcpdump -i lo -vvAls0 port 9999), and we did see X-Disable-Throttling get set (see above comment). The only uncertainty is that it wasn't clear whether the query we saw in the tcpdump was the actual federated query we ran or a monitoring query coming from prometheus.

When you get a chance, can you see if everything looks good? For the time being we've verified that the patches didn't break wdqs, and that at least in some cases the disable-throttling header is being set, so things are looking good thus far.

I had to restart blazegraph for it to pick up the new allowlist, but this seems to work as expected; when running a query from main to scholarly I can see the following headers:

Host: wdqs1023.eqiad.wmnet
X-Real-IP: 10.64.16.199
X-Forwarded-For: 10.64.16.238, 10.64.16.199
X-Forwarded-Proto: http
X-BIGDATA-MAX-QUERY-MILLIS: 60000
X-BIGDATA-READ-ONLY: yes
X-Disable-Throttling: 1
Connection: close
accept-encoding: gzip
user-agent: Wikidata Query Service (test); https://query.wikidata.org/
x-envoy-internal: true
x-request-id: 7654360b-3fde-4406-b06f-bdc2597a5ba8
x-envoy-expected-rq-timeout-ms: 65000

When running directly against scholarly from the CDN I see the usual headers: X-BIGDATA-READ-ONLY and X-BIGDATA-MAX-QUERY-MILLIS are there in both cases, and X-Disable-Throttling appears only when doing internal federation (it is not passed through even when set by an external client).
Thanks!

Change #1057878 had a related patch set uploaded (by Ryan Kemper; author: DCausse):

[operations/puppet@production] wdqs: allow internal federation btw main&scholarly

https://gerrit.wikimedia.org/r/1057878

Change #1057878 merged by Ryan Kemper:

[operations/puppet@production] wdqs: allow internal federation btw main&scholarly

https://gerrit.wikimedia.org/r/1057878