Tue, Sep 10
Mon, Sep 9
Looks like we tried to fix this at some point but it got lost in the backlog of things.
Sun, Sep 8
I think this and T208493 are essentially dupes of each other.
What config was that updater running with?
Why is it getting subjects with localhost referenced, like http://localhost:8181/entity/Q2?
I just checked again and I still get redirected to a 5** error, now a 502, on the URL as described in this ticket :/
Tagged the query service, as this is a use case for third-party wikis that may want a little bit of thought.
Hmm, that is indeed correct, the query service updater needs read access.
Tue, Sep 3
I'm guessing this shouldn't still be subscribed to @hoo
Thu, Aug 29
Wed, Aug 28
There is only a single point right now (per metric) and it might have dropped off the time range by now! But it is there :)
In grafana you might find it doesn't draw a line when looking at a time range including the point.
Tue, Aug 27
Looks like the data has landed.
I just hit +2 on the patch.
It runs weekly, which is "every sunday at 01 hours".
So let's check back next week!
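For reference, a "weekly, every Sunday at 01 hours" schedule in crontab syntax would look something like the following (the script path is purely illustrative):

```
# minute hour day-of-month month day-of-week  command
# Runs every Sunday at 01:00
0 1 * * 0 /usr/local/bin/weekly-metrics-job.sh
```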
I see the following options, not sure which is the best one or if there is some better alternative:
- Change the file directly in the docker image (do the changes persist on restarting?)
I guess this mainly applies to the wdqs frontend?
I would just keep it simple for now and mark it as unstable and only for use with the bridge.
Just include what you need there, and call it something like wbbridgesettings for now?
I can definitely see a need for more things like this in the future, but it probably isn't worth the time thinking too deeply about those cases yet.
Sounds like the right way for this would be JS -> graphite via statsv and statsd and then just displaying the data in grafana.
https://wikitech.wikimedia.org/wiki/Graphite#statsv includes the needed information on using mw.track in JS.
Tear down the services with docker-compose down, then boot them up again with docker-compose up -d. Note: this seems to be required for wdqs-updater if it was first run with a Wikibase instance that had no entities yet, so if you already had the bundle set up and wdqs-updater running properly you might not need this step.
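For clarity, the restart sequence described above is just the following, run from the directory containing your docker-compose.yml:

```
# Stop and remove the containers (data in named volumes persists)
docker-compose down

# Recreate and start the services in the background
docker-compose up -d
```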
The URI returned by the query service is all down to the URI written to the query service during updating.
This is set using the WIKIBASE_HOST env var for the updater service / wherever the update runs.
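As a sketch, assuming the docker-compose based Wikibase bundle (the service name and hostname here are illustrative; only WIKIBASE_HOST is taken from the above), the updater configuration would contain something like:

```
wdqs-updater:
  # WIKIBASE_HOST determines the concept URIs written into the
  # query service during updating; use your public hostname,
  # not localhost, or queries will return localhost URIs.
  environment:
    - WIKIBASE_HOST=wikibase.example.com
```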
Tue, Aug 20
At any given point, the latest image might be broken.
Aug 19 2019
Amazing, would it be possible to get this backported to 0.3.1?
Or should I just bake it into the docker images? :)
Aug 12 2019
There are a couple of situations that I remember where some validation needs to take place first in order to know what permissions to check.
This mainly applies to api modules that act on multiple entity types where entity type specific permissions also exist.
As for the points #1,2,3 in the description I agree with the order there.
@alaa_wmde feel like a campsite candidate to you?
I'm very pro us having a cleaner structure for our DB updates, but rather than spend time on this now we should probably see how T191231 finishes evolving and if any standards come out of that.
@Ladsgroup does the MobileFrontend skin use minification in webpack, or leave it down to RL?
My main thought here is that it would be nice to not have minified things with debug=true.
Aug 10 2019
Aug 9 2019
Yup, it would also make sense to conditionally load those.
It's a shame we have cases of self redirects already in history that we have to try to deal with.
Sounds like we should:
- Leave this ticket open
- Move the NS servers for wikiba.se back to the WMDE controlled ones
- Remove the DNS stuff I added to wmf stuff for wikiba.se
- Setup letsencrypt or something similar for wikiba.se as controlled by wmde
- Close this ticket.
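If the Let's Encrypt route is taken, a minimal sketch with certbot might look like the following (the webroot path is an assumption):

```
# Obtain a certificate for wikiba.se via the webroot challenge
certbot certonly --webroot -w /var/www/wikiba.se -d wikiba.se -d www.wikiba.se

# Renewal is typically handled by a cron job or systemd timer running:
certbot renew
```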
It sounds like we should just close this ticket as Declined then @WMDE-leszek ?
The estimation comes down to, do we:
- Just move the dispatching mechanism over to jobs, using the same or very similar logic to what we currently have.
- Probably not the most efficient dispatching logic
- Least effort to get there
- Solves most of the issues outlined in T48643#5336132
- Change the way dispatching works while we move over to jobs
- More work & time
- Probably makes dispatching faster, more efficient, more reliable and easier to understand
- Best use of job queue for dispatching
- Also solves the issues outlined in T48643#5336132
So, the last state that this was in was:
I believe this one was deployed with:
Just checking back on this ticket.
Is this still happening?
Aug 8 2019
Looking at the code it looks like indeed either a new TermIndex type thing would be needed for media info, or the fetching of terms, as currently done in https://github.com/wikimedia/mediawiki-extensions-Wikibase/blob/84e2062770467eacbb42e8a55bdf77e11141834f/lib/includes/Store/Sql/TermSqlIndex.php#L638-L686, would need to be factored out in some way.
My hunch would be that this has to do with the CA cookie domain for things like wikipedia being set to ".wikipedia.org" but the domain for wikidata gets set to "www.wikidata.org".
Also going to tag MediaWiki-Authentication-and-authorization as it could be related and likely to catch some more eyes.
@ArielGlenn do I understand correctly however that this revision on wikidata.org is not an issue for dumps?
Aug 7 2019
Jul 30 2019
You could include the same logic, checking shouldRenderTermbox and checking the current key / options for parser-cached content, in the RejectParserCacheValue hook.
This would allow you to reject values where you want to use SSR and the cached version doesn't already have an SSR termbox, then re-parse and use a key with the new value.
Indeed we are only talking about the mobile term box right now.
Looking at the 2 Gerrit changes I think something like #1 would be best (I haven't seen the hook in #2 used at all anywhere).
Although #1 could be changed to register a new parser option, called wb-termbox for example, with default value null, used in the cache key, and set to 2 when termbox is rendered.
Jul 29 2019
It sounds like when old parser cache entries are encountered the old behaviour should continue to happen, and the new SSR service should not be called.
Otherwise turning on the feature will essentially purge / reject / ignore all previously cached pages, which is not ideal.
Jul 27 2019
Jul 25 2019
Jul 22 2019
Fine with me!
Jul 21 2019
Switched to weekly in the latest PR and also fixed the missing ;.
If you give it a +1 I'll go ahead and get it running in the coming days.
I like the changes. The page is much easier to follow now due to its shorter length, and also we no longer need to try and keep the same docs up to date in 2 places!
Also, why do captions even matter for T223792 - shouldn't getEntity look up by ID (which in the case of MediaInfo is a straight page ID), not by caption?
Indeed, port 80 is hardcoded.
Jul 18 2019
If elastic already has captions indexed then yes this could probably just use elastic. (There would be slightly more delay and potential unpredictability using elastic than some mysql tables like wb_terms).
Note to researcher.
The normal way of this happening would be via the dispatch system.
I imagine whoever does this research will need to look into that a fair bit! :)
I don't think wb_terms should be used at all for media info.
A custom system or new table or set of tables should be decided on, created, and populated.
Temporarily using wb_terms just to remove it later would cause more trouble than it is worth, and could just lead to it still existing in a few years and us having to do another massive migration.
Not sure what we would gain by adding a panel per entity type.
Not sure if this is done or not, the estimation should perhaps take that into account.