
Add a special page that creates subpage with given sql query
Closed, Declined · Public

Description

Hello. There are many special pages that are generated by an SQL query and refreshed every 72 hours.
It would be very nice if sysops had some Special:CreateQuery that takes as input a fresh page name (for example, "abc") and SQL code to run against the Quarry database, and creates a new Special:abc that works from then on. Thank you.

Event Timeline

I'm not sure if this would really be possible to implement. It definitely wouldn't be easy. It would probably be easier to get your favorite queries turned into special pages "manually". :)

  • The labs databases that Quarry runs on are not really intended for arbitrary long-running queries. Quarry enforces a time limit for a reason, so that users can't overload the service.
  • We can't let any user run any query on the production databases, since that would allow leaks of all kinds of private data (password hashes, oversighted content, etc.) – all of which is hidden from the labs mirrors.
  • Production servers can't even access labs servers in any way (as a protection against accidentally deploying something to production that depends on labs infrastructure, and as a general good security practice).
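To illustrate what turning a query into a special page "manually" looks like: MediaWiki special pages of this kind subclass `QueryPage`, which describes the query as structured pieces rather than raw SQL, so it can be built through the database abstraction layer and cached by the periodic refresh job. The sketch below is a hypothetical example (the page name `AbandonedTalkPages` and the specific conditions are invented for illustration), not an existing special page:

```php
<?php
// Hypothetical extension code: "AbandonedTalkPages" is an invented example.
// Real special pages such as Special:Lonelypages follow this same pattern:
// subclass QueryPage and describe the query declaratively; the
// updateSpecialPages.php maintenance script refreshes cached results.
class SpecialAbandonedTalkPages extends QueryPage {
	public function __construct() {
		parent::__construct( 'AbandonedTalkPages' );
	}

	// Describe the query as tables/fields/conditions instead of raw SQL,
	// so MediaWiki's database abstraction layer can build it safely.
	public function getQueryInfo() {
		return [
			'tables' => [ 'page' ],
			'fields' => [
				'namespace' => 'page_namespace',
				'title' => 'page_title',
				'value' => 'page_id',
			],
			// Example conditions: non-redirect pages in the Talk namespace.
			'conds' => [
				'page_namespace' => 1,
				'page_is_redirect' => 0,
			],
		];
	}

	public function isExpensive() {
		// Expensive pages are not run live; they are only refreshed
		// by the periodic cache job (e.g. every few days).
		return true;
	}
}
```

This is why the "every 72 hours" behaviour exists: expensive query pages are recomputed on a schedule from cached results rather than executed on every view.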

Hi, @matmarex.

  • But special pages do not run Quarry, do they?
  • So, use _p databases only.
  • Sorry, I did not understand this bullet.

Just a historical reminder: in very old MediaWiki versions, there was once a special page for running SQL queries. It no longer exists (I believe), and for good reason, as matmarex explained.

@eranroz is right; there was an arbitrary SQL service in very old versions of MediaWiki, and it was taken out. In T137058 we discussed a slightly more restricted version of this, and even that was not considered a good idea.

  • Sorry, I did not understand this bullet.
  • Production servers can't even access labs servers in any way (as a protection against accidentally deploying something to production that depends on labs infrastructure, and as a general good security practice).

This means that the wikis cannot run code that depends on a Labs service like Quarry, because the Labs environment and the environment where the wikis run (known as production) must be kept separate for security and performance reasons. This also rules out the use of _p databases, since those replicas are hosted on Labs. In the aforementioned T137058 we considered a similar approach: taking a view of the data (analogous to the _p views) and putting it into a production-grade analytics store, but I think that was ultimately judged not worth the effort.

I see, thank you. I'm closing this as declined.
But I'll now create a new subtask with something that looks different but has the same purpose.