Software developer on the Wikidata team at Wikimedia Germany (he/him, Berlin timezone). Private account: @LucasWerkmeister.
Backport tested on test.wikidata.org – www.wikidata.org isn’t on wmf.18 yet but hopefully will be by the time the dumps start.
I think we can close this – otherwise it’ll be one of those tasks that stay open forever because there are no clear criteria for when to close them.
T244341 also includes the use of blank nodes for unknown values.
Fri, Nov 20
Alright, scheduled for Monday’s EU backport+config window – if I read the cron config correctly, the full RDF dumps start on Monday night (23:00 UTC, I assume?), so if all goes well, next week’s dumps should already have this change.
Okay, I don’t know where these bad page IDs would be coming from. eu_page_id is only written from very few places:
Claiming this since I already started to look a bit more into it.
The stack trace suggests we’re getting the bad page ID from the database: EntityUsageTable::getPagesUsing() reads the eu_page_id from the wbc_entity_usage table, and then foldRowsIntoPageEntityUsages() passes the $pageId into the PageEntityUsages constructor, which complains if it’s not an int or is less than one. (foldRowsIntoPageEntityUsages() casts it to int, so I think the < 1 check is the only relevant part of that condition.)
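For reference, a condensed sketch of that code path – the class and method names are from the stack trace, but the bodies here are my paraphrase, not the actual Wikibase source:

```php
class PageEntityUsages {
	public function __construct( $pageId, array $usages = [] ) {
		// This is the check that throws; since the caller casts to int
		// first, only the `< 1` branch can realistically fire.
		if ( !is_int( $pageId ) || $pageId < 1 ) {
			throw new InvalidArgumentException( '$pageId must be a positive integer' );
		}
		// …
	}
}

class EntityUsageTable {
	// The rows come straight from the wbc_entity_usage table (via
	// getPagesUsing()), so a bad eu_page_id stored in the database
	// reaches the constructor unfiltered.
	private function foldRowsIntoPageEntityUsages( iterable $rows ): array {
		$pageEntityUsages = [];
		foreach ( $rows as $row ) {
			$pageId = (int)$row->eu_page_id; // the int cast mentioned above
			$pageEntityUsages[$pageId] = new PageEntityUsages( $pageId );
		}
		return $pageEntityUsages;
	}
}
```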
WikibaseRepo::getDataValueDeserializer() looks relevant: it defines 'unknown' => UnknownValue::class as, seemingly, a deserializable data type.
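Abbreviated sketch of the map it builds (the real code has more entries, and some of them may be callables rather than class names):

```php
use DataValues\Deserializers\DataValueDeserializer;
use DataValues\StringValue;
use DataValues\UnknownValue;

$dataValueDeserializer = new DataValueDeserializer( [
	'string' => StringValue::class,
	// …more data value types…
	'unknown' => UnknownValue::class, // ← the entry in question
] );
```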
Seems to happen a handful of times per week: https://logstash.wikimedia.org/goto/852228f37c434dacb72cec6ffe618ab8
Thu, Nov 19
I wonder if we should backport at least the main change to wmf.18? Otherwise, I believe it will only start showing up in the full RDF dumps of 7 December (since next week is a no-deploy week, and so the dumps of 30 November will still be on wmf.18, if I’m not mistaken).
I’m not the right person to decide that, sorry – I just saw a few reports of the issue and thought it was worth mentioning here.
T267668: Some recent Commons uploads not available on other wikis (2020-11) seems to be cropping up again.
all done 🎉
(Side note about my earlier comments: @noarave determined that assertion failures in a browser.call() without .catch( assert.fail ) are still properly reported as failures, so that’s probably not the cause of the problem, and we could most likely remove the .catch( assert.fail ) snippets.)
Wed, Nov 18
Note: currently, the “update repo” handler is the UpdateRepoHookHandler class, which listens for the PageMoveComplete hook, and the “notice” handler is the MovePageNotice class, which listens for the SpecialMovepageAfterMove hook; I avoided mentioning those names in the timeline because they moved around somewhat over time.
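Skeletons of the two handlers, to make the mapping concrete – the hook signatures are the ones from MediaWiki core as far as I remember them, and the bodies are only paraphrased:

```php
// The “update repo” side: runs after any page move has completed.
class UpdateRepoHookHandler {
	public function onPageMoveComplete(
		$old, $new, $user, $pageid, $redirid, $reason, $revision
	) {
		// …schedule the sitelink update job on the repo…
	}
}

// The “notice” side: runs when Special:MovePage renders its success page.
class MovePageNotice {
	public function onSpecialMovepageAfterMove( $movePage, $oldTitle, $newTitle ) {
		// …append the “your move should now be reflected…” message…
	}
}
```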
@Lydia_Pintscher I started looking into the message Wikibase shows the user after the page move is complete, but this led me down a terrible rabbit hole (T268135). Can I assume that updating that message is not part of the AC here, and we’ll eventually tackle that task separately? (I think this would mean that Wikibase would still tell the user that “your move should now be reflected in the item language link, we ask you to check” whether or not the sitelink was deleted.)
Tue, Nov 17
I notice there’s a slight difference between the first two tests in the file:
Seems like the last three failures have all been that login error:
T268012 and T268008 are two GrowthExperiments issues that are not fixed in wmf.18 as far as I’m aware. Both have fixes available, and for the former I just deployed the fix for wmf.16, but I’m not sure how to backport changes for not-yet-deployed trains (last time I tried, it didn’t quite work out), so I’ve left the wmf.18 branch alone for now. @kostajh can tell you more about whether those issues have any serious user impact or “just” logspam; depending on that, I expect you’ll decide whether to apply the backports to wmf.18 before the initial rollout, or afterwards.
I still don’t know what happened one year ago, but there are no occurrences of this in Logstash anymore, so we can’t really look further into it even if we wanted to. Let’s close this and open a new task if it ever happens again, I suppose.
Mon, Nov 16
Fri, Nov 13
Isn’t that what the SPARQL endpoint entry in the Link menu is?
I’m not sure if now is a good time to investigate this… Beta has been generally unstable in the past few days (T267561), so the most recent failures are most likely due to that. (And the latest build succeeded.)
Should be ready to go with wmf.18 (but not wmf.16).
I assume the Grafana panel in the screenshot is the one in Elasticsearch Indexing / Saneitizer, but that graph now shows 0 ops since November 7th? (Except for a brief burst of activity on the 9th/10th, around midnight UTC.)
Scheduled for Monday EU window.
As far as I can tell, the needed code changes have been deployed since wmf.12(!), so I don’t think this needs to be stalled/waiting anymore.
It probably makes sense to wait until next week to verify this, so that we verify not just the backported revert (wmf.16) but also the proper fix (wmf.18).
Thu, Nov 12
Other questions that I didn’t have time to ask in the story time (the first isn’t really a question but I’d like to have it confirmed anyways):
Since the above change doesn’t really expose the data + entity type definitions as services (they’re still stored as members of the WikibaseRepo instance), I wouldn’t consider this task done with only that change merged – we should also have some (simple) service that’s completely migrated to the service container (WikibaseRepo could still have a getter, but it would fetch the service from MediaWikiServices). @Pablo-WMDE volunteered to attempt that, but in the meantime, let’s move this to Review for the first change.
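For illustration, here’s what “completely migrated” could look like for some hypothetical simple service – “WikibaseRepo.ExampleService” and ExampleService are made-up names:

```php
use MediaWiki\MediaWikiServices;

// In a service wiring file:
return [
	'WikibaseRepo.ExampleService' => function ( MediaWikiServices $services ): ExampleService {
		// dependencies would be resolved via $services here
		return new ExampleService();
	},
];
```

WikibaseRepo could then keep a convenience getter, but it would no longer own the instance – it would just delegate to the service container:

```php
public function getExampleService(): ExampleService {
	return MediaWikiServices::getInstance()->getService( 'WikibaseRepo.ExampleService' );
}
```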
I’m not sure what this task is about… it sounds like it might be covered by T244341: Stop using blank nodes for encoding SomeValue and OWL constraints in WDQS?
Wed, Nov 11
Waiting on T265898 (Don’t create implicit description use if description is overridden locally), and then on the next train after that.
Dev note: as far as I could tell, the script doesn’t connect to the local wiki at all, so you don’t need production access to test it – just run it locally with --sparql https://query.wikidata.org/sparql and you should get a JSON file corresponding to Wikidata.
Tue, Nov 10
One random-ish example where this results in duplicate work:
I looked through Logstash mediawiki-errors; the old error message (UnresolvedRedirectException) hasn’t been seen since November 1st, and the new log message (“Unresolved redirect encountered”) isn’t in mediawiki-errors at all because it’s an info-level message. I think we’re done here.
Should be fixed now (T266671#6615909).
Seems to be fixed according to Logstash. Thanks Thiemo!
Mon, Nov 9
Well, we use the package in the Gruntfile to launch the Selenium server automatically. (Probably not the only way in which the Selenium setup here differs from other repositories.)
As far as I’m aware, nothing significant has happened towards unstalling this task.
Sun, Nov 8
Note that in the diff case, the presence of item links in the edit summaries seems to be sufficient to “summon” the relevant CSS, so that the indicators are hidden in https://www.wikidata.org/w/index.php?diff=1207519515&uselang=en-gb&diffonly=yes.
Fri, Nov 6
I tried updating some browser-test-related packages, but it didn’t help at all. (Pasting only the package.json diff for brevity; run npm install to update the lockfile too.)
Another option to keep an eye on is QLever. It doesn’t support SPARQL Update yet (and while the stated Wikidata reload time of less than 24 hours is impressive, it’s not enough to replace live updates, especially since I believe it takes us more than 24 hours to produce an RDF dump anyways), but I’m told that update support is being worked on.
I just tried it out and got a proper success message (“Revision visibility updated.”) on both Test and real Wikidata. (Unless you mean a different kind of revision deletion?)
Thu, Nov 5
Update: the tests still don’t work properly in CI, but they’ll now print this message (mainly pasting it here so it can be found via search):