Aug 12 2019
yes, it's the same and can be merged
Jul 4 2019
@Aklapper: can you assign this ticket please?
Mar 30 2019
- split Notifier and Veto into two separate routines
- merged into adt2.tcl
Mar 28 2019
Mar 27 2019
assign to TaxonBot
Mar 26 2019
Assigned to TaxonBot
Jan 10 2019
@Smalyshev What's the status of this task? There are still problems: -> https://www.wikidata.org/wiki/Wikidata:Request_a_query#SPARQL_query_result_erroneous
May 9 2018
@Bawolff I changed the botpassword at about 10:45 UTC, May 9th. Login succeeds now, but a first test with the old botpassword beforehand did not succeed.
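For reference, a botpassword login against the MediaWiki API is a two-step exchange: fetch a login token, then POST it together with the credentials (keeping cookies between the two requests). A minimal sketch of that flow; the target wiki URL is an assumption, while action=login with lgname/lgpassword/lgtoken and meta=tokens with type=login are the standard API calls:

```python
import urllib.parse

API = "https://www.wikidata.org/w/api.php"  # assumed target wiki

def token_request_url(api=API):
    """Step 1: URL that returns a login token (meta=tokens, type=login)."""
    return api + "?" + urllib.parse.urlencode(
        {"action": "query", "meta": "tokens", "type": "login",
         "format": "json"})

def login_post_data(user, botpassword, token):
    """Step 2: form data for the action=login POST.

    Cookies from step 1 must be sent along, e.g. via
    urllib.request.HTTPCookieProcessor with an http.cookiejar.CookieJar.
    """
    return urllib.parse.urlencode({
        "action": "login", "lgname": user,
        "lgpassword": botpassword, "lgtoken": token,
        "format": "json"}).encode()
```

A failed login with a stale or revoked botpassword would surface in the JSON reply of step 2 rather than as an HTTP error, which matches the "login not successful on a first test" observation above.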
May 8 2018
And how do I get authorized for this link?
but first, please reduce the login waiting time from 2 days to 0 minutes
Okay, but priority High at least. The bot really is very important. ...
May 1 2018
Apr 30 2018
I think the dev team could build a script that automatically declares such defective ID properties as dead links.
Mar 18 2018
Mar 14 2018
Feb 27 2018
?item wdt:P27 wd:Q183 was my mistake; that selects the German women. I need the Swedish ones, sorry: ?item wdt:P27 wd:Q34
This is one of those queries that runs into a timeout. It should return all Swedish women who have no sitelink to dewiki, counting the sitelinks and listing some properties for each item; it is needed for a dewiki community project. I cannot limit it because of the required double ORDER BY, so the query has to run with an unlimited result. With a longer timeout the query would complete successfully. It would be great if you could optimize it, if possible. Can you calculate how much time the query would take to complete?
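A sketch of what such a query might look like, reconstructed from the description above. Only P27 = Q34 (Swedish citizenship), the missing dewiki sitelink, the sitelink count, and the double ORDER BY come from the text; P31 = Q5 (human), P21 = Q6581072 (female), the label service, and the listed extra properties being omitted are assumptions:

```python
import urllib.parse

# Reconstructed query: Swedish women with no sitelink to dewiki,
# with their total sitelink count, ordered by count then label.
QUERY = """
SELECT ?item ?itemLabel ?links WHERE {
  ?item wdt:P31 wd:Q5 ;
        wdt:P21 wd:Q6581072 ;
        wdt:P27 wd:Q34 ;
        wikibase:sitelinks ?links .
  FILTER NOT EXISTS {
    ?sitelink schema:about ?item ;
              schema:isPartOf <https://de.wikipedia.org/> .
  }
  SERVICE wikibase:label { bd:serviceParam wikibase:language "de,en". }
}
ORDER BY DESC(?links) ?itemLabel
"""

def wdqs_url(query):
    """Build a GET URL for the Wikidata Query Service SPARQL endpoint."""
    return ("https://query.wikidata.org/sparql?"
            + urllib.parse.urlencode({"query": query, "format": "json"}))
```

With the full unlimited result set plus two sort keys, it is plausible that the sort alone pushes the query past the service timeout, which matches the problem described.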
Feb 26 2018
Maybe a 120 s limit? I think we have to test it, but how?
Thank you, Magnus, that was my opinion too, but my English is not as good as yours, so ...
This will always be the case; we will never be able to serve arbitrary requests that require unlimited time to perform.
Feb 24 2018
Ah! I didn't know about that, thank you.
Feb 23 2018
"The number of entities in Wikidata has grown very much."
Aug 17 2017
Jul 11 2017
Jul 1 2017
Jun 28 2017
Jun 14 2017
Apr 16 2017
IMHO it does not look like the same thing ...
Feb 19 2017
Hi! I found this emoji too yesterday, but it was very late, so I couldn't report it anymore.
Feb 18 2017
@matmarex: take a look; nothing is fixed, user:Unknown_user is back again: https://de.wikipedia.org/w/index.php?title=Benutzer:Delta456/Carcassonne_(Spiel)/Eigenst%C3%A4ndige_Spiele_2&action=history
Let me try the import once more to another target and we'll see.
@matmarex: I imported it via an exported XML file.
No, you're wrong, sorry. I did it myself; I know what I've done.
Look at the two case lines in the task description. These were done !before! the import! One bot-flagged post edit and one emptying edit. The import case was only the following revisions.
I reopened this task because something is wrong: there was not a single import edit by User:Unknown_User, although you mentioned this above. This user only made emptying edits, nothing more. Please check the details of this case in more depth.
Please note: the first one was !NOT! an import revision !!! It was emptied by User:Unknown_user too.
Jan 28 2017
Jan 26 2017
Okay, the problem was the Wikidata extension; thank you for all the help.
Jan 17 2017
Jan 9 2017
Conversation in freenode channel Jan 09th 2017:
Dec 5 2016
Nov 30 2016
- In the window from 2016-11-29 19:00 until 23:59 CET I did not receive a single 502 Bad Gateway while running a lot of API queries in a permanent query loop.
Nov 29 2016
I'll try monitoring with an API query traffic loop and will report here.
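A minimal sketch of such a monitoring loop; only the idea of repeatedly hitting the API and counting 502/503 responses over a window comes from these comments, while the endpoint, polling interval, and stop condition are assumptions:

```python
from collections import Counter

def tally(status_codes):
    """Count how often each HTTP status code was seen in a window."""
    return Counter(status_codes)

# The actual loop would fetch the API URL repeatedly, sleep a few
# seconds between requests, and record each response status, e.g.
# (hypothetical endpoint and interval):
#
#   import time, urllib.request, urllib.error
#   codes = []
#   for _ in range(720):  # ~1 hour at 5 s intervals
#       try:
#           codes.append(urllib.request.urlopen(API_URL).status)
#       except urllib.error.HTTPError as e:
#           codes.append(e.code)
#       time.sleep(5)

counts = tally([200, 200, 502, 200, 503, 502])
print(counts[502])  # → 2
```

Tallying per window makes it easy to report, as above, whether a given evening produced zero 502s or whether the error rate is growing day by day.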
As far as I can see, it looks like the errors are becoming more frequent every day.
Nov 26 2016
Oct 17 2016
I got the same problems again. I think the HHVM on the API appservers has to be restarted again due to a memory leak.
Oct 7 2016
Firing traffic at different API URLs, the error occurs about every 1.5 minutes (!)
If those errors keep occurring again and again, I suggest a technical check of these proxies.
Hi, I think a restart is needed again; there are too many 503 errors on several proxy servers such as cp1053, cp1054 and cp1067.
Sep 23 2016
Top! Thank you very much!
Okay, I suppose the problem has been solved. What did you do to solve it?
It has been running fine for 8 minutes now.
No, it's not consistent but random. Up to now it's always the API info request; only the titles parameter differs.
next error trying this:
Who is responsible for that?
Is the error related to the cache proxies, given that there are reports from all of cp1065, cp1053, cp1055 ...?
from chat about the topic:
Changed priority because a lot of bot scripts have to run that Wikipedia users need for their work. The unbreak is open.