Description
Currently, WDQS responses are sent with Transfer-Encoding: chunked, which prevents Varnish from caching the results. We should find a way to make these results cacheable, at least for smaller query results.
| Status | Subtype | Assigned | Task |
| --- | --- | --- | --- |
| Resolved | | Yurik | T126741 Add support for the wikidata's Sparql queries to graphs |
| Resolved | | Smalyshev | T126730 [RFC] Caching for results of wikidata Sparql queries |
| Resolved | | Smalyshev | T128656 Make WDQS responses cacheable by eliminating chunking |
Event Timeline
According to @BBlack, it may be a problem peculiar to our Varnish config, which would then be solved by the upgrade to Varnish 4, planned for sometime in Q4. With that in mind, I think the best course of action, unless we find an easy solution soon, would be to just wait for that upgrade.
OTOH, it looks like RestBase does have caching working with chunked responses, so maybe there's still something missing here.
Turns out the problem with chunking was actually a config error in the misc cluster:
```
sub misc_fetch_large_objects {
    // Stream objects >= 1MB in size
    if (std.integer(beresp.http.Content-Length, 1048576) >= 1048576
        || beresp.http.Content-Length ~ "^[0-9]{8}") {
        set beresp.do_stream = true;
        // hit_for_pass on objects >= 10MB in size
        // (no effect on backends that always (pass) anyways)
        if (std.integer(beresp.http.Content-Length, 10485760) >= 10485760
            || beresp.http.Content-Length ~ "^[0-9]{9}") {
            return (hit_for_pass);
        }
    }
}
```
The second argument to std.integer() is the fallback value used when the header is absent, so any response without a Content-Length (such as a chunked one) is treated as if it were 10 MB and ends up as hit_for_pass, i.e. it is never cached.
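One way to avoid this (a minimal sketch of the idea, not the actual patch that was deployed) is to run the size-based checks only when the backend actually sent a Content-Length header, so chunked responses skip the block entirely and stay eligible for caching; the regex fallbacks for very long values are kept for parity with the original:

```
sub misc_fetch_large_objects {
    // Only apply large-object handling when a Content-Length is present;
    // chunked responses (no Content-Length) fall through and remain cacheable.
    if (beresp.http.Content-Length) {
        // Stream objects >= 1MB in size
        if (std.integer(beresp.http.Content-Length, 0) >= 1048576
            || beresp.http.Content-Length ~ "^[0-9]{8}") {
            set beresp.do_stream = true;
            // hit_for_pass on objects >= 10MB in size
            if (std.integer(beresp.http.Content-Length, 0) >= 10485760
                || beresp.http.Content-Length ~ "^[0-9]{9}") {
                return (hit_for_pass);
            }
        }
    }
}
```

With this shape, the std.integer() fallback no longer matters for chunked responses, since the missing-header case never reaches the size comparison.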