User Details
- User Since
- Jan 2 2023, 3:49 PM (68 w, 4 d)
- Availability
- Available
- LDAP User
- Frederik Ring
- MediaWiki User
- Unknown
Tue, Apr 23
Thu, Apr 18
Wed, Apr 17
An update of the Queryservice using allowlist.txt is now running in production.
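For context, the Queryservice's allowlist.txt lists the SPARQL endpoints that federated queries are permitted to call, one URL per line. A minimal illustrative example (the actual entries deployed to production are not reproduced here):

```text
https://query.wikidata.org/sparql
https://query.wikidata.org/bigdata/namespace/wdq/sparql
```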
Tue, Apr 16
@GreenReaper Thanks for the actionable report. I did indeed miss this change when updating the base image. A PR with a fix can be found here: https://github.com/wbstack/queryservice/pull/111
Pull Request for making API more robust regarding this kind of error: https://github.com/wbstack/api/pull/785
This had to be rolled back as it introduced a lag in the queryservice-updater.
We also just tried using wmde.20 in staging, and while the Queryservice itself was working as intended, it introduced an inexplicable lag in our updater, so this has been rolled back by now.
Mon, Apr 15
Updating the Queryservice to a newer version (wmde.13) did not fix the issue.
All of the mentioned PRs are up for review; let me know if there are questions.
Wed, Apr 10
Does the gap in the sequence of IDs create any problems further down the line for you, other than IDs being less predictable?
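For illustration, such gaps typically appear when an auto-increment ID is consumed by a row that is later deleted (or by a rolled-back insert): the database does not reuse the ID. A minimal sketch with SQLite (table and column names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE wikis (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)"
)
conn.execute("INSERT INTO wikis (name) VALUES ('a')")  # gets id 1
conn.execute("INSERT INTO wikis (name) VALUES ('b')")  # gets id 2
conn.execute("DELETE FROM wikis WHERE name = 'b'")     # id 2 is now unused
conn.execute("INSERT INTO wikis (name) VALUES ('c')")  # gets id 3, not 2
rows = conn.execute("SELECT id, name FROM wikis ORDER BY id").fetchall()
print(rows)  # [(1, 'a'), (3, 'c')] -- a hole at id 2
```

With SQLite's AUTOINCREMENT keyword the next ID always comes from the internal sequence table, so deleted IDs are never handed out again; MySQL's auto_increment behaves the same way for this case.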
Tue, Apr 9
Mon, Apr 8
Pull Request https://github.com/wbstack/api/pull/780
Thu, Apr 4
Trying to upload the following lexeme to my local wiki (using the API sandbox) resulted in the one-off log entry shown above.
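For readers unfamiliar with the flow: lexemes are created via the Action API's wbeditentity module. A sketch of how such an upload is assembled; the lemma and the item IDs below (Q1860 for English, Q1084 for noun, as on Wikidata) are placeholders, not the payload from the report:

```python
import json

# Hypothetical lexeme payload (NOT the one from the report above).
# "language" and "lexicalCategory" must be item IDs that exist on the
# target wiki; Wikidata's IDs are used here purely as placeholders.
lexeme = {
    "type": "lexeme",
    "lemmas": {"en": {"language": "en", "value": "example"}},
    "language": "Q1860",
    "lexicalCategory": "Q1084",
}

# Parameters for action=wbeditentity as one would enter them in the
# API sandbox (a CSRF token must be fetched separately for a real call).
params = {
    "action": "wbeditentity",
    "new": "lexeme",
    "format": "json",
    "data": json.dumps(lexeme),
}
print(params["data"])
```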
Tue, Apr 2
Pull Request: https://github.com/wbstack/api/pull/776
Service should now be restored for all wikis but https://framenet-akkadian257.wikibase.cloud/
Seeing that processing these items made the Queryservice CPU max out again, I decided to "re-fail" the offending items in order to restore the service.
Next, I partially undid the manual failing for the potentially offending wiki.
It seems some of the pending batches belonged to wikis that have since been deleted, so I manually failed these to unclog the queue.
Queryservice updater was scaled down to 0 at 11:08 Berlin time
Manually failed the batches using:
Mar 26 2024
Mar 21 2024
Stalling this as it needs further discussion / refinement
Mar 20 2024
After looking into this for longer than I expected, I found that while the official Helm chart is straightforward to use, I ran into two problems that I don't yet have a good idea how to solve:
Mar 19 2024
PR for moving this to production: https://github.com/wmde/wbaas-deploy/pull/1504
Mar 18 2024
Mar 14 2024
Alternative version looking for "empty" wikis:
Tested this locally and approved the PR.
Mar 13 2024
Mar 12 2024
Repository for the Docker image is found here: https://github.com/wbstack/transferbot
Command for doing the above using wikibase-cli
Not sure what changed between then and now, but I did all the same things.
Mar 11 2024
Pull Requests (addressing the third AC only):
Mar 7 2024
Not sure why tbh. Are you populating the form fields with references to existing items?
I left a review on the PR https://github.com/wbstack/ui/pull/786#pullrequestreview-1921949431
Mar 6 2024
It seems this is indeed caused by the length of the payload; however, the blocking appears to happen at the Cloudflare level (Cloudflare seems to sit in front of TinyURL), which just returns a 403 without further explanation.
This has been resolved in https://phabricator.wikimedia.org/T341797 as Laravel 10 deprecates the method
Tests are passing as of today
Duplicate of https://phabricator.wikimedia.org/T351412
This is long done by now
FWIW I currently cannot reproduce this on staging.
Feb 27 2024
Feb 26 2024
Had a look at this, and while the findings look correct to me as well, I also don't have a good explanation for what we have seen.
Feb 22 2024
Feb 21 2024
File has been downloaded and can be deleted again.