User Details
- User Since
- Oct 23 2014, 3:02 PM (267 w, 3 d)
- Availability
- Available
- LDAP User
- Magnus Manske
- MediaWiki User
- Unknown
Fri, Dec 6
Fixed now.
Thu, Dec 5
Wed, Dec 4
Done.
Tue, Dec 3
This should get them all:
On it.
Tue, Nov 12
To add another use case (and to ping the issue):
Sep 25 2019
Deleted access and error log files.
Sep 24 2019
As of today, QuickStatements supports MediaInfo items (Mxxx).
For now, you'll have to supply the IDs manually, which is a pain.
I am working on a QS syntax parser in Rust, which will support
- ranks
- page/filename => ID conversion on-the-fly
This will require some more testing.
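In the meantime, here is a purely illustrative sketch (in Rust, but not the actual parser code; all type and function names are made up) of how such a parser could tell MediaInfo Mxxx IDs apart from the other entity ID types:

// Illustrative sketch only, not the actual QuickStatements parser.
// Shows how a QS syntax parser might recognize MediaInfo (Mxxx) IDs
// alongside items, properties, and lexemes. All names are hypothetical.
#[derive(Debug, PartialEq)]
enum EntityType {
    Item,      // Qxxx
    Property,  // Pxxx
    MediaInfo, // Mxxx
    Lexeme,    // Lxxx
}

fn classify_id(token: &str) -> Option<EntityType> {
    let mut chars = token.chars();
    let prefix = chars.next()?;
    // Everything after the prefix must be a plain number
    if chars.as_str().is_empty() || !chars.as_str().chars().all(|c| c.is_ascii_digit()) {
        return None;
    }
    match prefix.to_ascii_uppercase() {
        'Q' => Some(EntityType::Item),
        'P' => Some(EntityType::Property),
        'M' => Some(EntityType::MediaInfo),
        'L' => Some(EntityType::Lexeme),
        _ => None,
    }
}

fn main() {
    assert_eq!(classify_id("M12345"), Some(EntityType::MediaInfo));
    assert_eq!(classify_id("Q42"), Some(EntityType::Item));
    assert_eq!(classify_id("File:Example.jpg"), None); // would need on-the-fly conversion
}

Tokens like "File:Example.jpg", which match none of these patterns, are where the on-the-fly page/filename => ID conversion would have to kick in.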
Sep 17 2019
Sep 16 2019
Bot code patched, deployed, someone please test
+1
That's true, but a reload of the batch page should return the STOP button, as its state is only read from the database. The bot, in turn, only checks the database (or should; I suspect it doesn't).
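To make that concrete, a hand-wavy sketch (invented names, not the actual bot code) of what "only checks the database" means in practice: the batch status should be re-read from storage before every command, so a STOP set via the web interface takes effect mid-batch.

// Hand-wavy sketch, not the actual bot code. The point: the batch status
// lives in the database, and the bot should re-read it before each command.
#[derive(Debug, PartialEq, Clone, Copy)]
enum BatchStatus {
    Running,
    Stopped,
}

trait BatchStore {
    fn status(&self, batch_id: u64) -> BatchStatus; // in reality: read from the database
    fn next_command(&mut self, batch_id: u64) -> Option<String>;
}

fn run_batch<S: BatchStore>(store: &mut S, batch_id: u64) {
    // Re-check the stored status before *every* command, not just once at the start.
    while store.status(batch_id) == BatchStatus::Running {
        match store.next_command(batch_id) {
            Some(cmd) => println!("processing: {}", cmd),
            None => break, // batch exhausted
        }
    }
}

// Trivial in-memory stand-in, just so the sketch compiles and runs.
struct ToyStore {
    commands: Vec<String>,
    stopped: bool,
}

impl BatchStore for ToyStore {
    fn status(&self, _batch_id: u64) -> BatchStatus {
        if self.stopped { BatchStatus::Stopped } else { BatchStatus::Running }
    }
    fn next_command(&mut self, _batch_id: u64) -> Option<String> {
        if self.commands.is_empty() { None } else { Some(self.commands.remove(0)) }
    }
}

fn main() {
    let mut store = ToyStore {
        commands: vec!["CREATE".to_string(), "LAST\tP31\tQ5".to_string()],
        stopped: false,
    };
    run_batch(&mut store, 1);
}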
Sep 8 2019
Another idea came to me:
What if it's not just "page lists", but any (general, of one of several pre-defined types) tables?
One table type would be "page title/page namespace", giving us the above lists.
Others could be, say, Mix'n'match catalogs ("external ID/url/name/description/instance of").
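Just as a strawman (invented names, nothing of this is implemented), roughly what such pre-defined table types could look like:

// Strawman only; all names are invented. A stored "list" would just be a
// table of one of a few pre-defined types, each with a fixed set of columns.
enum TableType {
    // page title / page namespace, i.e. the current PagePile-style page lists
    PageList,
    // external ID / URL / name / description / "instance of", Mix'n'match-style
    ExternalCatalog,
}

struct StoredTable {
    table_type: TableType,
    columns: Vec<String>,   // column names, fixed per table type
    rows: Vec<Vec<String>>, // one string cell per column
}

fn page_list(pages: Vec<(String, i32)>) -> StoredTable {
    StoredTable {
        table_type: TableType::PageList,
        columns: vec!["page_title".to_string(), "page_namespace".to_string()],
        rows: pages
            .into_iter()
            .map(|(title, ns)| vec![title, ns.to_string()])
            .collect(),
    }
}

fn main() {
    let list = page_list(vec![("Douglas Adams".to_string(), 0)]);
    println!("{} column(s), {} row(s)", list.columns.len(), list.rows.len());
}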
Sep 6 2019
Sep 5 2019
Started some design notes of such a product: https://meta.wikimedia.org/wiki/Gulp
OK, some initial thoughts and remarks on this:
- I have actually rewritten Listeria in Rust, to use the Commons Data: namespace (aka .tab files) to store the lists, and use Lua to display them.
- I think the Commons Data: namespace would technically work for a generalized "list storage", though it seems to be a bit of abandonware (will this feature be long-term supported by the WMF?)
- Commons Data: namespace, if supported, would also have the proper scaling, caching etc. that PagePile is lacking
- It should, in principle, be possible to change PagePile to write new piles to the Commons Data: namespace, and return queries from there. That would give the new list storage a running start. We can replace PagePile later.
- Drawbacks of Commons Data: namespace are (a) cell size limit (400 characters, so should work for simple page lists), and (b) total page size (thus limiting the max list length)
- If Labs were to offer a scalable, backed-up object store for tools, that might be better suited for general list management
- Much of the "average Wikimedian" integration will have to come from (user-supplied) JavaScript, such as "snapshot this category tree" or something. I doubt waiting for WMF would be a timely solution.
- Short term, we (I?) could write a slim web API on Labs that abstracts the implementation away, offering a to-be-discussed set of functions (create/amend/remove list etc.); a rough sketch of that abstraction is below. Initially, this could run on PagePile in the background, or the Commons Data: namespace, or even both (large lists go to PagePile, short ones into a MySQL database or the Commons Data: namespace, etc.)
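To make the "abstracts the implementation away" part concrete, here is a rough sketch (invented names, nothing of this exists yet) of the kind of interface I have in mind, with the storage backend swappable behind it:

// Rough sketch of the abstraction only; all names are invented.
use std::collections::HashMap;

type ListId = String;

#[derive(Clone)]
struct PageRef {
    title: String,
    namespace: i32,
}

trait ListStore {
    fn create(&mut self, name: &str, pages: Vec<PageRef>) -> Result<ListId, String>;
    fn amend(&mut self, id: &ListId, pages: Vec<PageRef>) -> Result<(), String>;
    fn remove(&mut self, id: &ListId) -> Result<(), String>;
    fn get(&self, id: &ListId) -> Result<Vec<PageRef>, String>;
}

// Trivial in-memory backend, just so the sketch compiles; real backends would
// write to PagePile, the Commons Data: namespace, or a MySQL table instead.
#[derive(Default)]
struct MemoryBackend {
    lists: HashMap<ListId, Vec<PageRef>>,
}

impl ListStore for MemoryBackend {
    fn create(&mut self, name: &str, pages: Vec<PageRef>) -> Result<ListId, String> {
        let id = format!("mem:{}", name);
        self.lists.insert(id.clone(), pages);
        Ok(id)
    }
    fn amend(&mut self, id: &ListId, pages: Vec<PageRef>) -> Result<(), String> {
        self.lists.get_mut(id).ok_or("no such list".to_string())?.extend(pages);
        Ok(())
    }
    fn remove(&mut self, id: &ListId) -> Result<(), String> {
        self.lists.remove(id).map(|_| ()).ok_or("no such list".to_string())
    }
    fn get(&self, id: &ListId) -> Result<Vec<PageRef>, String> {
        self.lists.get(id).cloned().ok_or("no such list".to_string())
    }
}

fn main() -> Result<(), String> {
    let mut store = MemoryBackend::default();
    let id = store.create("demo", vec![PageRef { title: "Douglas Adams".to_string(), namespace: 0 }])?;
    store.amend(&id, vec![PageRef { title: "Towel".to_string(), namespace: 0 }])?;
    println!("{} page(s) in list {}", store.get(&id)?.len(), id);
    store.remove(&id)?;
    Ok(())
}

Callers would only ever talk to the trait, so the actual storage (PagePile, Commons Data: namespace, MySQL) can be replaced later without breaking anything.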
I believe I fixed the issue in the Rust bot. I had a successful test, but please try it yourself.
Sep 4 2019
Actually, that bitbucket repo is for the _really old version_ (pre-1.0).
Sep 2 2019
Added it for most of my tools, centrally. Works fine for distributed-game. For wdfist I get:
E1:The tag "wdfist" is not allowed to be manually applied
Now rolling the change back, until I know what tags I am allowed to use where and when.
Jul 24 2019
Because Toolforge forgot the replica.conf again, see T166949. Webservice restarted manually, yet again; it works. For the next few minutes, probably.
Jul 2 2019
Everyone, I own Reasonator, including the experimental version 2 which is used here (and should be better suited than the dated V1).
Jun 27 2019
Jun 26 2019
Jun 25 2019
@Jdforrester-WMF Is that an official design decision (claims=>statements)? Where was this fundamentally breaking change announced to the public?
Jun 22 2019
FWIW, I have already changed my code to work with either claims or statements. Quick thoughts:
Jun 21 2019
On another note, the Reasonator example in my original post seems to load now. I'll check if the Rust code works as well now.
Jun 20 2019
May I humbly suggest having a look at the consistent 2 min response time of the p99 server (in Grafana), before deciding it's a problem outside the WMF's control, no matter how convenient that may seem?
Jun 19 2019
No, sorry, issue remains.
GET /w/api.php?callback=jQuery21303406678877236998_1560936691744&action=wbgetentities&ids=P2508%7CP2631%7CP2509%7CP4276%7CP272%7CP4529%7CP5032%7CP4947%7CP5786%7CP6145%7CP1609%7CP1230%7CP2896%7CP4730%7CP2093%7CP1844%7CP1813%7CP5396%7CQ1199348%7CP435%7CP3959%7CP747%7CP1274%7CP1085%7CP5331%7CP4839%7CP4969%7CP103%7CQ49088%7CP1648%7CQ19045189%7CP3793%7CP2847%7CP3035%7CP4389%7CP5062%7CP5508%7CP4264%7CP6698%7CP6617%7CP2241%7CQ44374960%7CQ4644021%7CQ839097%7CP1268%7CQ9624%7CQ8055775%7CQ210152%7CQ4642661%7CQ635616&props=info%7Caliases%7Clabels%7Cdescriptions%7Cclaims%7Csitelinks%7Cdatatype&format=json&_=1560936691745 HTTP/1.1
Host: www.wikidata.org
User-Agent: Mozilla/5.0 (Windows NT 10.0; rv:68.0) Gecko/20100101 Firefox/68.0
Accept: */*
Accept-Language: en-GB,en;q=0.7,de;q=0.3
Accept-Encoding: gzip, deflate, br
Referer: https://tools.wmflabs.org/reasonator/?q=Q350
DNT: 1
Connection: keep-alive
Cookies redacted
Timing details of that slow request:
Response header from one of the slow requests:
HTTP/2.0 200 OK
date: Wed, 19 Jun 2019 09:31:43 GMT
content-type: text/javascript; charset=utf-8
server: mw1341.eqiad.wmnet
x-powered-by: HHVM/3.18.6-dev
mediawiki-login-suppressed: true
cache-control: private, must-revalidate, max-age=0
content-disposition: inline; filename=api-result.js
x-content-type-options: nosniff
x-frame-options: DENY
backend-timing: D=1129973 t=1560936702635082
vary: Accept-Encoding,Treat-as-Untrusted,X-Forwarded-Proto,Cookie,Authorization,X-Seven
content-encoding: gzip
x-varnish: 774984440, 505592114, 724292664
via: 1.1 varnish (Varnish/5.1), 1.1 varnish (Varnish/5.1), 1.1 varnish (Varnish/5.1)
accept-ranges: bytes
age: 0
x-cache: cp1081 pass, cp3032 pass, cp3041 pass
x-cache-status: pass
server-timing: cache;desc="pass"
strict-transport-security: max-age=106384710; includeSubDomains; preload
x-analytics: ns=-1;special=Badtitle;loggedIn=1;WMF-Last-Access=19-Jun-2019;WMF-Last-Access-Global=19-Jun-2019;https=1
x-client-ip: 2001:630:206:6204:cc46:3ce1:27e1:3062
X-Firefox-Spdy: h2
Jun 6 2019
May 31 2019
May 23 2019
Done.
May 14 2019
Never mind, it's the multilingual string!
Mar 29 2019
Mar 8 2019
Happened to me as well, yesterday (2019-03-08, 08:23 UTC)
Mar 7 2019
Fixed Listeria.
Don't know anything about ASammourBot.
Mar 5 2019
Feb 28 2019
Removal is running.
Update: Will remove them with QuickStatements now
So here is what happens: I create(d) lots of gene/protein items (example) for various species. For many statements, I can create references, as I get them from the upstream source. That paper is one of the often-cited ones, about a determination method.
Feb 27 2019
Feb 19 2019
I have added euwiki to the list of wikis where the bot flag is to be used.
Feb 12 2019
That did the trick, thanks!
Feb 11 2019
Tried that, also on login.tools.wmflabs.org (just to be sure). Both say "webservice is not running". Still won't start kubernetes.
Thanks, I have rebuilt and updated via npm on the kubernetes shell.
Feb 9 2019
I have run npm update in the kubernetes shell, but no joy.
Feb 8 2019
Feb 1 2019
Try it now...
Testing on dev now. Looks like frequent "Lost connection to MySQL server during query" errors from the DB replicas.
Also, I just clicked on the two examples. They took 133 and 173 seconds, and returned 50 and 1 results, respectively. No 502s, though I have seen those occasionally.
OK, so what appears to happen is that SQL queries time out and take PetScan down with them. Note:
- I wrote some code that re-arranges certain large queries into smaller ones, which cuts down on the timeouts; that code has been live for weeks (a simplified sketch is below)
- That works fine on the dev machine, but not reliably on the production machine
- The dev machine has fewer resources than production, but is otherwise identical (OS etc.)
As this does not fail reproducibly, it's either some odd bug in my code, or some situation on the DB replicas.
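For reference, the re-arranging mentioned above works roughly like this (simplified sketch, not the actual PetScan code): one huge IN (...) list is split into several smaller queries whose results are merged afterwards, so no single query runs into the timeout.

// Simplified sketch of the idea, not the actual PetScan code.
fn chunked_queries(table: &str, column: &str, ids: &[u64], chunk_size: usize) -> Vec<String> {
    ids.chunks(chunk_size)
        .map(|chunk| {
            let list = chunk
                .iter()
                .map(|id| id.to_string())
                .collect::<Vec<_>>()
                .join(",");
            format!("SELECT page_id, page_title FROM {} WHERE {} IN ({})", table, column, list)
        })
        .collect()
}

fn main() {
    let ids: Vec<u64> = (1..=25_000).collect();
    // e.g. 25,000 IDs => 5 queries of 5,000 IDs each, run separately and merged
    let queries = chunked_queries("page", "page_id", &ids, 5_000);
    println!("{} queries", queries.len());
}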
Jan 21 2019
Jan 17 2019
Looks like it's back to normal at the moment.
Jan 14 2019
Dec 4 2018
Nov 30 2018
Nov 9 2018
I have repeatedly asked if there is a "max concurrency" setting for jsub, as there is for other grid engines. I would consider it rather silly to force every user with such jobs to implement that on their own. For example, I could record it in the database when a job starts, but how do I know it is still running, and hasn't failed? Guess based on last action time? It makes vastly more sense to do that kind of thing in the job scheduler.
Nov 8 2018
Checked out sourcemd. That is actually correct; one job for each of the "TODO" ones here:
https://tools.wmflabs.org/sourcemd/?action=batches
Looks like the grid caught up with them. One job type seems to be problematic, not sure if it's an aftereffect.
Using mix-n-match as an example, two of the cronjob commands:
0,5,10,15,20,25,30,35,40,45,50,55 * * * * cd /data/project/mix-n-match ; /usr/bin/jsub -quiet -mem 8g -N as_import -cwd ./autoscrape_import_bot.php
4,14,24,34,44,54 * * * * cd /data/project/mix-n-match ; /usr/bin/jsub -quiet -mem 6g -cwd -N mnm-microsync ./microsync.php random
Yes, they run often, but usually not for long, and have never had such an issue in the years before.
Up to 95 "qw" now...
Nov 7 2018
(I picked mix-n-match as an example. Other tools are affected as well)
Nov 2 2018
In many cases, especially for bot/background tasks (e.g. Listeria), a lag of hours is not critical. This is also true for many interactive tools, where the user gets some items matching certain criteria.
Nov 1 2018
I have removed unused files from /shared/dumps and altered the update code to only keep recent ones around.