Concatenation is not particularly hard after or before the PDF conversion, either. PDF outlines could also be added by a separate tool (although it is a bit of a pain). Adding page numbers to the TOC is not possible without dedicated functionality in the converter tool though (target-counter() from CSS3 Generated Content for Paged Media could do it but it is not supported by any browser at this time).
Relevant patchsets which were not tagged with this task:
All merged except [https://gerrit.wikimedia.org/r/#/c/349092/ DonationInterface], which is related to fundraising; those extensions have their own release cadence. We can call this done.
Mon, May 22
Note that the bug description basically amounts to "central login is not working". (Which is fair; CentralAuth is super complicated, and finding out exactly what is failing is probably the bigger part of the work. But saying "wiki X is also affected by this" as if there was a single issue affecting all WMF and non-WMF wikis, and everything could be fixed with the same debugging effort, is potentially misleading.) I don't think this task will move forward unless there is a wiki owner who can reliably reproduce the issue and is willing to do a serious amount of debugging.
So in general the order of things is:
- extensions which do not use extension registration add their MediaWikiServices hooks
- loadFromQueue runs, container gets created, those hooks run
- extensions which use extension registration add their hooks
- resetGlobalServices runs, all hooks run but those redefining a service have no effect since old service definitions override new ones.
Also, in theory some extension entry point could get the container singleton and thus trigger hook execution, somewhere halfway in the process of LocalSettings.php executing the non-extension-registration entry points.
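For reference, a minimal sketch of what such a hook handler looks like (the service and class names are made up for illustration); whether the redefinition sticks depends on which of the steps above it runs in:

```
// In LocalSettings.php or an extension entry point.
use MediaWiki\MediaWikiServices;

$wgHooks['MediaWikiServices'][] = function ( MediaWikiServices $services ) {
	// Replace the default wiring for a (hypothetical) service.
	$services->redefineService( 'MyService', function ( MediaWikiServices $services ) {
		return new MyCustomService( $services->getMainConfig() );
	} );
};
```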
Sun, May 21
MediaWikiServices::importWiring does what it says in the docs. I think the problem is that the first version of the container is set up in ExtensionRegistry::loadFromQueue before hooks have been exported. MediaWikiServices::resetGlobalServices then replaces it with a new object, and even though the MediaWikiServices hook does get called for that new object, the updated service definition gets overwritten by the one that is copied over from the early container.
It does work, but it's not valid per the spec. We can ignore that, but is there a good reason to? In wikilinks, space and underscore are interchangeable, so users might even expect that to continue.
Sat, May 20
The "html5" flavour means that the ID is completely unaltered.
Fri, May 19
If the error is with the token specifically (and I have not verified that), then session loss is the most likely cause (since tokens are stored in the session). Feel free to ping me if you can reproduce the error and need help with debugging.
So yes, your DNS server refuses to look up the IP belonging to upload.wikimedia.org. That's either a very severe misconfiguration, or (more likely) intentional filtering. Either way, you will have to take it up with the local sysadmins.
Thu, May 18
That sounds like you are behind a firewall that messes with DNS lookups. Can you check what is output by the command nslookup upload.wikimedia.org?
Wed, May 17
Does not require another service to be created, and can be used on third-party wikis with no support for Node services.
Re: performance, do we expect concatenated HTML to be exposed directly to users in some use cases? Do we expect HTML concatenation to be slower than or comparable to HTML -> PDF transform? If we expect neither then choosing the concatenation tool based on performance is probably not a useful optimization.
Tue, May 16
Thanks for the feedback @jcrespo! I think it will be simpler if we go through the RfC first (so we get clarity in the MediaWiki vs RESTBase question) and then Reading management can choose between reducing scale or planning the new servers into the budget.
(credits go to @NHarateh_WMF for spotting that link)
The announcement says
This extension does not give you notifications when somebody successfully logs into your account from an unknown device or IP. It is technically possible to generate those, but if somebody else has logged in, they could just as easily see those notifications and do a password reset (which the notification encourages you to do). The ideal way to handle this is to issue email notifications for this case, but since most Wikipedia accounts do not have emails associated with them, this wouldn't be useful to the majority of users. So for the time being, we have settled for not issuing these notifications.
The text I see is
(Logs) . . <time> . . <username> (talk | contribs | block) was created IP: ...
(Logs) . . <time> . . <username> (talk | contribs | block) User account <IP> was created IP: ...
The "was created" line (checkuser-create-action) is from the LocalUserCreated hook. The other line (logentry-newusers-create) is from the RecentChange_save hook and created via LogFormatter::newFromRow( $rc->getAttributes() ) from the recentchanges entry. The RC row has the correct user ID and name. The CU row ends up with the IP instead of the username.
Mon, May 15
Session loss, maybe?
Thanks! @Mholloway any thoughts?
Per the watch API doc, you need a "watch" token retrieved from action=query&meta=tokens. So the patch above was wrong.
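For illustration, the two-step flow the doc describes, as a rough PHP sketch (the endpoint and page title are placeholders; a real client also has to send session cookies, since anonymous requests only get a placeholder token):

```
$api = 'https://en.wikipedia.org/w/api.php'; // placeholder endpoint

// Step 1: fetch a watch token.
$res = json_decode( file_get_contents(
	$api . '?action=query&meta=tokens&type=watch&format=json'
), true );
$token = $res['query']['tokens']['watchtoken'];

// Step 2: POST it to action=watch.
$context = stream_context_create( [ 'http' => [
	'method' => 'POST',
	'header' => 'Content-Type: application/x-www-form-urlencoded',
	'content' => http_build_query( [
		'action' => 'watch',
		'titles' => 'Sandbox', // placeholder title
		'token' => $token,
		'format' => 'json',
	] ),
] ] );
echo file_get_contents( $api, false, $context );
```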
https://gerrit.wikimedia.org/r/#/c/353566/ still has to be merged, right? And possibly the mobileview API cache needs to be purged or split as well.
I meant global watchlists. I assume that has to involve the recentchanges table somehow, since you need a join between those two tables to display a Special:Watchlist-style change list.
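Roughly the kind of join I mean (a per-wiki sketch in the style of what Special:Watchlist does, not actual code; $dbr and $userId are assumed to be a replica connection and the watching user's ID):

```
// List recent changes to pages on a user's watchlist.
$res = $dbr->select(
	[ 'recentchanges', 'watchlist' ],
	[ 'rc_namespace', 'rc_title', 'rc_timestamp' ],
	[ 'wl_user' => $userId ],
	__METHOD__,
	[ 'ORDER BY' => 'rc_timestamp DESC' ],
	[
		'watchlist' => [ 'INNER JOIN', [
			'wl_namespace = rc_namespace',
			'wl_title = rc_title',
		] ],
	]
);
```

A global version would have to do something like this across every wiki, which is where the cross-wiki storage question comes in.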
Come to think of it, MySQL access in RESTBase will be necessary anyway for push notifications, right?
@jcrespo FWIW there is another related project with cross-wiki storage needs: T163116: Decide on persistence backend and location for the Push Notification Service
(denormalized for the benefit of the /pages/ route) - int or bigint, 4 or 8 bytes (not sure what you mean by denormalized, please clarify)
Would it be possible to keep the OWE accessible somehow? NWE has enough rough edges that I need to switch back all the time (especially on wikis with Translate).
That seems excessive. The pagelinks table on enwiki alone has 1B rows. There are only 30M enwiki users, most of them inactive, most of the active ones probably won't use this feature, and if Android users are any indication, there will be <100 list entries per user.
Sun, May 14
Sat, May 13
Typo, yes. The link in T165146 still does not work for me.
Fri, May 12
Not sure if I understand how ParserCache works, but it seems like ApiMobileView::getData calling ParserCache::getKey will result in this change getting ignored (at least for the duration of mobileview API result caching): getKey seems to only take into account the parser options which had a non-default value when the page was last saved, so updating the wrapper parser option inside the API does not break the cache.
All that seems to be happening here is that, originally, the section transformation was made inside the <body> tag, then at the end MobileFormatter::getText() unwrapped it. d154fa36b5c3 changed the transformation to happen inside <div class="mw-parser-output"> instead, but nothing removes that element. MobileFormatter::getText() should just be updated to do it.
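Something along these lines (a plain DOM sketch, not the actual MobileFormatter code) would do the unwrapping:

```
// $html is the formatter output; move the children of
// <div class="mw-parser-output"> up one level, then drop the wrapper.
$doc = new DOMDocument();
$doc->loadHTML( $html );
$xpath = new DOMXPath( $doc );
$wrapper = $xpath->query( '//div[@class="mw-parser-output"]' )->item( 0 );
if ( $wrapper ) {
	while ( $wrapper->firstChild ) {
		$wrapper->parentNode->insertBefore( $wrapper->firstChild, $wrapper );
	}
	$wrapper->parentNode->removeChild( $wrapper );
}
```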
Here is a minimal reproduction of the issue:
tgr@terbium:~$ mwscript eval.php --wiki=enwiki
Action API responses are not cached unless you use the smaxage parameter. (In general you can avoid Varnish by using the X-Wikimedia-Debug header; there are browser plugins to do that.)
Note that the source edit tab still links to ?veaction=sourceedit but that parameter does not do anything now.
On the other hand action=edit does load it, which did not happen in the past... (which makes the old editor inaccessible and forces me to opt out of beta, since NWE is too buggy to always rely on, especially on translated pages).
TBH this feels like a waste of manpower to me. hooks.txt is easily parsable, so this can be automated (see T155029 for plans); we have plenty of other tasks which do require manual effort.
Thu, May 11
Clients should also be prepared to handle thumbnail URLs which refer to some non-current version of the image (see also T66214#3256693). Presumably that's not happening right now due to this bug.
We'll also need a way to display old versions of images. Clients can encounter old versions without expecting to due to FlaggedRevs hiding unreviewed image changes.
Note that users should be able to navigate to both versions of the article, it's just that anons vs. logged-in users should get a different version by default.
@jcrespo could you give the Data storage section a look?
There are, very roughly, two mechanisms that parties communicating via HTTP use for caching. The server can tell the client how long the response is valid, and then the client can just skip further requests for that time and use a local copy of the data instead. That is maximally effective (there is no remote communication at all), but the server must know in advance that the data will not change. Alternatively, the client (which has a local copy of past data but no guarantee that it is still valid) can tell the server what version of the data it has, and the server can then either respond with "that is still good" (and skip processing and keep data transfer minimal) or send a more recent version of the data.
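As a rough PHP illustration of the two styles on the server side (a hypothetical endpoint; $etag and $data stand in for whatever is being served):

```
// Style 1: tell the client how long it may keep using its local copy.
header( 'Cache-Control: max-age=3600' );

// Style 2: the client sends the version it already has (If-None-Match);
// the server answers 304 Not Modified if that version is still current.
if ( isset( $_SERVER['HTTP_IF_NONE_MATCH'] )
	&& $_SERVER['HTTP_IF_NONE_MATCH'] === $etag
) {
	http_response_code( 304 ); // "that is still good", no body needed
	exit;
}
header( "ETag: $etag" );
echo $data;
```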
Wed, May 10
Thanks. The first option seems simple enough, and avoiding premature optimization is good advice. I am going with that for now; we can discuss more in the RfC (T164990).