
Provide mechanism(s) to avoid WDQS timeouts for certain pre-approved queries
Closed, Duplicate · Public


We are exploring the use of Wikidata queries in disaster response, e.g. to get things like total population in the area, number and location of key infrastructure (e.g. hospitals or bridges) or monuments etc. None of the queries we have so far is ripe for prime time yet (we're testing them on current disasters like the Mexico earthquake) but some of them will likely get there soon.

In this context, it would be necessary to have mechanisms that ensure such queries actually run through when a relevant disaster strikes in the future, so that the relevant information can be provided and/or improved as necessary. In other words, we would need a way (or possibly even several, since one emergency system may not be enough) to exempt such queries from the timeout. To avoid ineffective uses of our query resources, we would probably have to limit this exemption to certain pre-approved queries (and come up with a mechanism for approval, but that is probably another discussion).

Such a feature may be useful well beyond disaster contexts. A good number of queries - including some of the official examples - also time out, which is annoying for newcomers or when you are demoing Wikidata. But I think it makes sense to discuss such exemptions first for disaster contexts, and once we have things going well there, we can see whether and how to extend it.

Event Timeline

Restricted Application added a subscriber: Aklapper.

Blazegraph has a mechanism for stored queries; however, what is not entirely clear to me is how abuse prevention would work in such a case. I.e., let's assume we have a heavy query, and we have found a way to run it past the common limits. What would happen if somebody, by mistake or out of malice, runs it 100 times? This could take down the whole service, at least temporarily. We need some way to prevent this from happening.
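One simple safeguard against accidental or malicious re-runs would be a per-query cooldown. The sketch below is purely illustrative (the class and method names are made up, not any existing API); it just shows the idea that even an exempted query is only permitted to run once per interval.

```python
import time

# Hypothetical sketch: a per-query cooldown so that a pre-approved heavy
# query cannot be re-run faster than once per interval, even by accident.
class QueryCooldown:
    def __init__(self, interval_seconds, clock=time.monotonic):
        self.interval = interval_seconds
        self.clock = clock
        self.last_run = {}  # query id -> timestamp of the last permitted run

    def allow(self, query_id):
        """Return True and record the run if the cooldown has expired."""
        now = self.clock()
        last = self.last_run.get(query_id)
        if last is not None and now - last < self.interval:
            return False
        self.last_run[query_id] = now
        return True
```

With a one-hour interval, the 2nd to 100th attempts within that hour would simply be refused rather than hitting the service.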

Technically, exemptions are possible. But I am not sure how to manage them properly yet.

Here's a suggestion for a mechanism to avoid malicious re-runs. I'm not sure how practical it is, but perhaps it helps us move forward.

  • A dedicated mechanism (could be an on-wiki user group flag or an off-wiki page) keeps a list of pre-approved accounts that have the right to trigger pre-approved emergency queries to be run
  • The actual triggering would be done by edits to an on-wiki page: a dedicated bot would
    • listen to that page,
    • check that the requesting account has pre-approved status,
    • check whether the query has already been run within a pre-defined time interval
      • if not: run the query once in that pre-defined time interval
      • if yes: point to the relevant results page
        • if the query still needs to be re-run before the time interval is over, then the query would have to be resubmitted by another pre-approved user, with some way to indicate that this is a conscious re-run
    • post the query results on some static page and in a way (e.g. as JSON or CSV) that allows them to be harvested by other tools
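The decision step in the bullets above could be sketched roughly as follows. All names here (APPROVED_USERS, INTERVAL, decide) are hypothetical; in the proposed mechanism the approved-account list and the run log would live on-wiki, not in code.

```python
# Hedged sketch of the bot's decision logic for an emergency-query request.
APPROVED_USERS = {"ResponderA", "ResponderB"}  # assumption: maintained on-wiki
INTERVAL = 6 * 3600                            # assumption: 6-hour re-run window

def decide(user, query_id, last_runs, now, rerun=False):
    """Return 'reject', 'run', or 'point-to-results' for a request.

    last_runs maps query_id -> (timestamp, user) of the last run.
    rerun=True marks an explicit, conscious re-run request.
    """
    if user not in APPROVED_USERS:
        return "reject"
    entry = last_runs.get(query_id)
    if entry is None:
        last_runs[query_id] = (now, user)
        return "run"
    last_time, last_user = entry
    if now - last_time >= INTERVAL:
        last_runs[query_id] = (now, user)
        return "run"
    # Within the interval: only a *different* pre-approved user who
    # explicitly flags a conscious re-run may trigger another run.
    if rerun and user != last_user:
        last_runs[query_id] = (now, user)
        return "run"
    return "point-to-results"
```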

This mechanism does not exclude all possibilities of erroneous runs, but would probably limit them significantly, especially if it were complemented by a query canceling mechanism as discussed in T136479.

The wiki part would probably require the Wikidata community to develop it.

> trigger pre-approved emergency queries to be run

If this is accessible to select people who know what they are doing, I'd be fine with having arbitrary queries too. There would probably still be limits (since memory is finite and so is server capacity), but they could be more generous than for regular queries.

> post the query results on some static page and in a way (e.g. as JSON or CSV)

We have the tabular data format; it is consumable by Lua and easy to make work with SPARQL output.
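As an illustration of that fit, here is a hedged sketch that flattens the standard SPARQL JSON results format into the kind of JSON structure used by tabular data pages (Data:*.tab). It assumes all columns are strings for simplicity; a real converter would map datatypes properly, and the function name is made up.

```python
def sparql_to_tab(sparql_json, license="CC0-1.0", description="Query results"):
    """Convert SPARQL JSON results into a tabular-data-style dict (sketch)."""
    columns = sparql_json["head"]["vars"]
    bindings = sparql_json["results"]["bindings"]
    return {
        "license": license,
        "description": {"en": description},
        # Assumption: every field is typed "string" in this sketch.
        "schema": {"fields": [{"name": v, "type": "string"} for v in columns]},
        # Each binding becomes one row, in the column order of the header.
        "data": [[b.get(v, {}).get("value") for v in columns] for b in bindings],
    }
```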

> complemented by a query canceling mechanism

If the bot initiates the query, it may be possible to cancel it too, though depending on where the bot runs, that may be as hard as it is for the regular GUI. That needs to be considered.