Sun, May 23
I thought it might be helpful to the discussion to provide a link to the template documentation: https://en.uesp.net/wiki/UESPWiki:MetaTemplate
Sat, May 22
Sorry, I had my extensions mixed up. I've been meaning "Semantic" whenever I said "Scribunto", though both are extensions we may end up using at some point. I corrected it in my latest message, but I wasn't going to go back and edit every message I might've said it in.
The current version of the extension was written by a hobby programmer (albeit one with remarkable insight into how the MW parser works) back around MW 1.10 through 1.15 or so, and has been hacked only enough to keep it working as the years have gone by. One of the main modules is actually written entirely in global space! So, it's probably not something we'd want to release as is, even if we could. More importantly, however, it's her code, so it's not really ours to decide what to do with. We do have the code publicly available if you want to have a look at it, but not "public" public, as in intended for others to use. https://github.com/uesp/uesp-wikimetatemplate
TemplateSandbox, as I understand it, still requires that the sandboxed templates be saved each time you make a change. What we're doing is injecting values during Show Preview, as the template code gets processed, so you can literally just preview/fix/preview/fix until you're sure the template is working correctly. Once it is, you can save it without having to copy anything anywhere, because you're working on the normal template page itself. This is much faster and more convenient than any kind of sandboxing process I've used on other wikis.
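To give a rough idea of the mechanism (a simplified sketch, not the actual MetaTemplate code; the {{#previewvalue:...}} function name is invented for the example):

// Sketch only. Registers a hypothetical {{#previewvalue:test value}} parser
// function whose argument is substituted only while the page is being
// previewed. (The 'previewvalue' magic word would also need registering.)
$wgHooks['ParserFirstCallInit'][] = function ( Parser $parser ) {
    $parser->setFunctionHook( 'previewvalue', function ( Parser $parser, $value = '' ) {
        // During Show Preview, return the test value; on a normal parse/save,
        // return nothing.
        return $parser->getOptions()->getIsPreview() ? $value : '';
    } );
    return true;
};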
Fri, May 21
May 10 2021
Re-reading, I realize now that my initial comment came out much more acerbic than I intended, and undoubtedly set the wrong tone for my later comments, so I apologize for that. I also picked up this comment, which I'd somehow missed the first time around.
Yup, here it is: https://github.com/uesp/uesp-wikimetatemplate
I'm not trying to make "charged statements", and I apologize if I'm coming across that way. I'm just presenting my personal view and the views from a few other wikis that I've dealt with. I'm not trying to say it's representative of anyone else, and I'm not trying to be contentious. I'm just saying that not everybody's at the same place you are.
I wasn't trying to be difficult or insulting; I was just presenting the reality that I, personally, have seen. As a user of mostly small- to medium-sized wikis, most of which lag a fair bit behind the current version, I think my view of things is very different from yours where, as you say, you're supporting primarily large wikis with very different needs. All I was trying to say, really, was that while we may not have the page count/page views of the larger wikis, there is nevertheless a set of wikis that will prefer the legacy parser for one reason or another. It may not even be for technical reasons such as ours; it might simply be a matter of processing power, preference, or whatever else. As I said in the beginning, and as has been on display throughout this thread, the attitude here is clearly "this is the direction we're going, get on board", and from the perspective of these smaller wikis that are behind by several versions, that's practically an overnight shift and it comes as a slap in the face.
Thank you for the explanation and the offer to engage with our requirements. Perhaps as the migration to Parsoid continues, better solutions will present themselves, but for us, right now, the reality is that we've got 1700+ templates affecting over 75k content pages (not to mention those that affect non-content pages, like talk and redirect pages, bringing us to 300k pages overall). Our custom extension has been in place for 12 years now, most of our templates rely on it at this point, and we have only a handful of template coders to maintain them. So, hopefully you can understand that migrating to something that will essentially break all of that isn't just a pain point for us; it's simply not a viable option. While I would hope that this isn't the case, the reality may well have to be that we stop upgrading at whatever the latest version is that will support our needs.
We're not interested in migrating to Parsoid, nor were we aware that it was to become integrated rather than simply yet another extension, so wikis like ours likely ignored any calls for feedback, if they were aware of them at all. I certainly don't recall seeing anything about it in the few places I pay attention to or the mailing lists I'm subscribed to, but that could well be what I just mentioned...the assumption that this was an optional component.
I'm not angry, really, so much as I see this sort of WMF-centric thinking from the developers often, and I think there needs to be some better feedback mechanism than simply trusting wfDeprecated() and the like to tell the developers what's in use and what's not. The reality outside of WMF wikis is that most lag several versions behind the current. Just browsing around, I easily found wikis between 1.25 and 1.33; I found none at 1.34 or above. So, deprecating something in 1.34 and then removing it in 1.35 or 1.36 because nobody complained or was logged as using the feature is not really a good plan for wikis like these. I think, if nothing else, there needs to be some kind of communication of planned deprecations/removals that allows extension developers who may not be at the current version to be made aware of breaking changes in advance and be able to say "Hey, we're still using this. We need a path forward."
May 9 2021
The attitude I'm seeing here from WMF is rather concerning. Essentially, it's "it's been decided". Well, that's lovely for WMF, but what about the rest of the wikis out there who maybe don't use (or perhaps even want/need) Parsoid, who don't use the Visual Editor, who don't use Flow, etc.? What do we do?
May 2 2021
Nov 1 2019
Oct 23 2019
Good start, but there's a standard function to add title and namespace (the main concern being that 'ns' is the standard key for namespace). I'm not really a PHP programmer, but I believe the first couple of lines should look like this:
$res = [];
ApiQueryBase::addTitleInfo( $res, $title );
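For reference, ApiQueryBase::addTitleInfo() just fills in the two standard keys, so after those lines, $res would look something like this (values illustrative):

// e.g. $res === [ 'ns' => 0, 'title' => 'Main Page' ]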
Oct 15 2019
Okay, fair enough on both counts. After reporting this, I found a bunch more modules that also allow empty prop values and produce no meaningful output as a result, so if anything's done about it at all, it should probably be a larger project. For the count option, you're right, that would have to be a new option of some kind if it wasn't going to be a breaking change. I believe most database engines can count grouped records well enough as long as the relevant fields are indexed, but MW supports such a wide array of database engines that it would need testing on the whole lot, so that's probably a bigger change than I was thinking.
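For what it's worth, the sort of grouped count I had in mind is straightforward through the database abstraction layer (sketch only; the table and fields are just an example):

// Sketch: count members per category using MediaWiki's DB layer.
$dbr = wfGetDB( DB_REPLICA );
$res = $dbr->select(
    'categorylinks',
    [ 'cl_to', 'members' => 'COUNT(*)' ],
    [],
    __METHOD__,
    [ 'GROUP BY' => 'cl_to' ]
);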
Oct 13 2019
Just to add a bit more info, it occurred to me to try the equivalent query in the API, and it works fine there, producing the expected:
Oct 11 2019
I just checked, and I'm thinking of the old manually documented examples (Template:ApiEx). They all pointed at enwiki. I finished the API portion of my project quite some time ago, so I haven't had much call to look at the live docs since then.
Oct 10 2019
I hadn't actually thought to check that, Reedy. Oops! That's odd, though. I could swear most of the examples used to work. Did they maybe point to en-wiki or something? Or am I just remembering the old manual documentation? <shrug>
Sep 27 2019
Mar 5 2019
Feb 25 2019
That was fast! It looks like there's another report here that just came in recently. I hadn't noticed it before I posted.
Feb 24 2019
Jun 19 2018
Mar 15 2017
Feb 2 2017
Jan 4 2017
Good to know. Thanks for putting me onto that; at some point in the future, I'll likely strip out all the coding for older versions and add features like assertuser in their place. At the moment, it does me no good, though, since most of the sites I'm targeting are using anywhere from 1.19 to 1.26. Outside MediaWiki sites themselves, I rarely come across sites that are on 1.28+.
Dec 30 2016
Now that I've understood what's going on, and adjusted my bot to compensate, no, but it was an unexpected point of failure, and the error message was uniquely unhelpful in figuring out the real problem.
Dec 20 2016
I should add that the API layer is nominally complete now, give or take a couple of new modules like clientlogin that I'll tackle later on...so I'm probably finished with the ream of bug reports now. :)
Unfortunately, it's nothing that would be useful to you for your testing procedures. I've been developing a C# bot framework that implements about 98% of the API. So, as I've been going through each module to determine what the inputs and results are for all of them, I've been noting the discrepancies.
Dec 19 2016
Anomie: That fix is filed under the wrong task.
Dec 17 2016
Dec 16 2016
Yeah, I'd thought of that. It's a kludge, but you could potentially just add the type and leave the existing information alone. A client could then read the type, and from that, they'd have the key to read in the value.
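To illustrate (the names here are invented for the example): if the result carried a 'type' field naming the key that holds the value, a client would only need:

// Hypothetical result shape: [ 'type' => 'duration', 'duration' => '3 days' ]
$value = $data[ $data['type'] ];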
Aug 7 2016
Jul 28 2016
Just a note on this: based on the tests in PreprocessorTest.php, it seems MW breaks convention and deliberately allows tags with spaces.
Jul 26 2016
Yes. I realize it's not going to be a priority, but if the code is going to check for stupidly named hooks at all, which it already does, I think it should at least cover off basic correct syntax by excluding spaces, single-quotes, double-quotes, equals signs, and slashes. That would be the easy change, since it's just a matter of typing the extra characters into the Regex (and maybe making that a static Regex or whatever PHP supports, rather than repeating the same one in every function). Using the XML spec would probably be even better from a purely technical standpoint, but is probably overkill in this context.
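Something along these lines is all I'm picturing (untested sketch, not the parser's actual validation code):

// Reject tag names containing characters that could never appear in a
// well-formed XML-style tag: whitespace, quotes, '=', '/', '<', '>'.
if ( preg_match( '/[\s\'"=\/<>]/', $tag ) ) {
    throw new MWException( "Invalid character in tag name: $tag" );
}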
Jul 25 2016
Jul 13 2016
I agree with Nicolas_Raoul here. There's nothing wrong with the app; the server should not be storing useless, deleted categories that have neither pages on the wiki nor any category tags associated with them. As I said almost two years ago, this is a resource leak, and a malicious user could theoretically add to the table indefinitely, not that that's likely.
Nov 15 2015
Nov 12 2015
Oct 29 2015
Oct 27 2015
Sorry, I only clued into how old this task was, and that there was a newer one, shortly after commenting. That's certainly a novel idea.
Oct 26 2015
Actually, a List module would probably make even more sense. (D'oh!)
Behaviourally, this would make more sense as a Meta module. That still leaves it as a query module, for whatever internal reasons there are for that, but gets it out of the prop space where it really doesn't belong at all. I'm not 100% sure of this, but it might even still be able to inherit from ApiQueryImageInfo, with little or no change.
Oct 15 2015
Sep 11 2015
Looking around some more, it seems not to be a MediaWiki-specific issue at all. This Reddit thread gives some insight: https://www.reddit.com/r/chrome/comments/3j1oqk/beta_or_canary_users_have_versions_later_than_44/
I'm getting this error repeatedly on mediawiki.org. On most attempts, nothing loads at all and I get the above error. Sometimes, I'll get the requested page, but with no CSS at all. So far, I'm not getting the error on any other site, including the original Commons link. Exiting and restarting the browser *might* help, but that might just be a fluke.
Jul 13 2015
I'm not sure if normalizing on its own is sufficient. There might need to be a "normalized" block as well, like there is with a page query, so you can map the input value to the output value.
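That is, something like what page queries already emit, e.g.:

"normalized": [
    { "from": "some title", "to": "Some title" }
]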
Jun 18 2015
Jun 17 2015
@TTO: Not at all. I was assuming that people would map the JSON to a more useful collection. It never occurred to me that anyone would actually work with the JSON directly without parsing it. When you are parsing it, in most languages, a key-value pair is (albeit only slightly) more work to handle than having all the data in the same place (i.e., one element of the array), so it seemed to me that the logical way to go was to convert it to an array.
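To illustrate the difference (data invented for the example), compare the key-value form, where the key itself carries part of the data:

{ "Some Page": { "size": 1234 } }

with the array form, where everything is an ordinary field in one element:

[ { "title": "Some Page", "size": 1234 } ]

In most languages, the second maps directly onto a list of objects without any extra handling of the keys.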
Jun 16 2015
Jun 14 2015
I'm not convinced that this is fixed, from what I see in JobQueueDB.php in the latest nightly. I won't claim to understand all the code, but I think jobs that have been attempted but failed are still never being recycled. I can confirm this behaviour as of 1.22, but I don't currently have access to a wiki on anything higher where I could check. It seems to be related either to the claimTTL setting (as I originally speculated) or to the fact that the job_token/job_token_timestamp fields are still populated in a failed job.
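For anyone wanting to check their own wiki, the stuck jobs are easy to spot (sketch only; DB_SLAVE is the constant for the versions in question):

// Sketch: list claimed-but-never-recycled jobs on an affected wiki.
$dbr = wfGetDB( DB_SLAVE );
$res = $dbr->select(
    'job',
    [ 'job_id', 'job_cmd', 'job_attempts', 'job_token_timestamp' ],
    [ "job_token != ''" ],
    __METHOD__
);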
Jun 1 2015
May 27 2015
Actually, I just figured it out. It's the Recent Changes setting that controls it. In earlier versions (confirmed up to 1.22), the preferences menu text makes it sound like it only controls Recent Changes when, in fact, it controls several things, including logs, as the more modern preferences menu makes clear.
Apr 13 2015
Mar 23 2015
Maybe rather than trying to use the PHP-side docs as primary on the wiki, we should change it to a "see live docs" link and have that as an option that you can click on if the wiki page appears to be out of date. I think that'll give us the best of both worlds: the wiki can still document all the historical info and provide any custom formatting or topic-specific notes that might be required, while the live docs can be used, as Anomie says, as a quick reference.
Would splitting modules into functional and documentation submodules be a useful model here? The API itself could then remain light and fairly clean, only linking to the documentation if needed, while the documentation submodules would be off to the side, accessible by the documentation routines to allow documenting the older stuff. The one concern that jumps out at me is that, depending on how it's done, lazy programmers could fail to document current and new features at all. This is already a problem with the wiki documentation vs. live modules and, obviously, we don't want to simply move the problem to another venue.
Mar 2 2015
The filesize can never be bigger than $wgMaxUploadSize, so they can't keep increasing it indefinitely.
How does this differ from someone just stashing a lot of 1-byte files? Or uploading only the first chunk for many different filenames?
Mar 1 2015
I can confirm that this behaviour still occurs as of 1.24.