User Details
- User Since
- Oct 25 2014, 1:38 AM
- Availability
- Available
- LDAP User
- Unknown
- MediaWiki User
- RobinHood70
Nov 1 2019
Oct 23 2019
Good start, but there's a standard function to add title and namespace (the main concern being that 'ns' is the standard key for namespace). I'm not really a PHP programmer, but I believe the first couple of lines should look like this:
$res = [];
ApiQueryBase::addTitleInfo( $res, $title );
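For reference, the helper itself is tiny. This is roughly what it does, paraphrased from memory rather than copied from core (see includes/api/ApiQueryBase.php for the real thing):

  // Paraphrased: adds the standard 'ns' and 'title' keys to a result array.
  public static function addTitleInfo( &$arr, $title, $prefix = '' ) {
      $arr[$prefix . 'ns'] = (int)$title->getNamespace();
      $arr[$prefix . 'title'] = $title->getPrefixedText();
  }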
Oct 15 2019
Okay, fair enough on both counts. After reporting this, I found a bunch more modules that also allow empty prop values and produce no meaningful output as a result, so if anything is done about it at all, it should probably be a larger project. For the count option, you're right: that would have to be a new option of some kind if it weren't going to be a breaking change. I believe most database engines can count grouped records efficiently as long as the relevant fields are indexed, but MW supports such a wide array of database engines that it would need testing on all of them, so that's probably a bigger change than I was thinking.
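To illustrate the grouped-count idea, here's a minimal sketch using MediaWiki's database wrapper; the table and fields are just examples, not a proposal for any particular module:

  // Illustrative only: count rows per group through the DB abstraction layer,
  // letting each backend produce its own COUNT/GROUP BY plan.
  $dbr = wfGetDB( DB_REPLICA );
  $res = $dbr->select(
      'categorylinks',
      [ 'cl_to', 'cnt' => 'COUNT(*)' ],
      [],
      __METHOD__,
      [ 'GROUP BY' => 'cl_to' ]
  );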
Oct 13 2019
Just to add a bit more info: it occurred to me to try the equivalent query in the API, and it works fine there, producing the expected:
<root><tplarg><title>1</title><part><name index="1"/><value/></part></tplarg></root>
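If anyone wants to reproduce it, the query I ran was something along these lines (from memory, so treat the exact parameters as approximate):

  api.php?action=expandtemplates&text={{{1|}}}&prop=parsetree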
Oct 11 2019
I just checked, and I'm thinking of the old manually documented examples (Template:ApiEx). They all pointed at enwiki. I finished the API portion of my project quite some time ago, so I haven't had much call to look at the live docs since then.
Oct 10 2019
I hadn't actually thought to check that, Reedy. Oops! That's odd, though. I could swear most of the examples used to work. Did they maybe point to en-wiki or something? Or am I just remembering the old manual documentation? <shrug>
Sep 27 2019
Mar 5 2019
Feb 25 2019
That was fast! It looks like there's another report here that just came in recently. I hadn't noticed it before I posted.
Feb 24 2019
Jun 19 2018
Mar 15 2017
Feb 2 2017
Jan 4 2017
Good to know. Thanks for putting me onto that; at some point in the future, I'll likely strip out all the coding for older versions and add features like assertuser in their place. At the moment, it does me no good, though, since most of the sites I'm targeting are running anywhere from 1.19 to 1.26. Outside MediaWiki's own sites, I rarely come across any that are on 1.28+.
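(For anyone unfamiliar: that's the assert=user request parameter, which can be added to any API call, e.g. api.php?action=edit&assert=user&..., and which, as I understand it, fails the request with an assertuserfailed error instead of silently continuing when the session has been lost.)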
Dec 30 2016
Now that I understand what's going on and have adjusted my bot to compensate, no; but it was an unexpected point of failure, and the error message was uniquely unhelpful in figuring out the real problem.
Dec 20 2016
I should add that the API layer is nominally complete now, give or take a couple of new modules like clientlogin that I'll tackle later on, so I'm probably finished with the ream of bug reports now. :)
Unfortunately, it's nothing that would be useful to you for your testing procedures. I've been developing a C# bot framework that implements about 98% of the API. So, as I've been going through each module to determine what the inputs and results are for all of them, I've been noting the discrepancies.
Dec 19 2016
Anomie: That fix is filed under the wrong task.
Dec 17 2016
Dec 16 2016
Yeah, I'd thought of that. It's a kludge, but you could potentially just add the type and leave the existing information alone. A client could then read the type and, from that, know which key to read the value from.
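Sketched out, the kludge would look something like this; the key names are invented for illustration:

  // Hypothetical result shape: the existing key stays untouched, and a new
  // 'type' field tells the client which key holds the actual value.
  $result = [
      'type' => 'expiry',                  // client reads this first...
      'expiry' => '2016-12-16T00:00:00Z',  // ...then reads the matching key.
  ];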
Aug 7 2016
Jul 28 2016
Just a note on this: based on the tests in PreprocessorTest.php, it seems MW breaks convention and deliberately allows tags with spaces.
Jul 26 2016
Yes. I realize it's not going to be a priority, but if the code is going to check for stupidly named hooks at all, which it already does, I think it should at least cover basic correct syntax by excluding spaces, single quotes, double quotes, equals signs, and slashes. That would be the easy change, since it's just a matter of typing the extra characters into the regex (and maybe making it a static regex, or whatever PHP supports, rather than repeating the same one in every function). Using the XML spec would probably be even better from a purely technical standpoint, but it's overkill in this context.
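For illustration, the easy version might look something like this; the pattern and surrounding code are mine, not the parser's actual check:

  // Hypothetical stricter check: reject tag hook names containing characters
  // that can never appear in a well-formed XML-style tag name.
  if ( preg_match( '/[ \'"=\/]/', $tagName ) ) {
      throw new MWException( "Invalid tag hook name: $tagName" );
  }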
Jul 25 2016
Jul 13 2016
I agree with Nicolas_Raoul here. There's nothing wrong with the app; the server should not be storing useless, deleted categories that have neither pages on the wiki nor any category tags associated with them. As I said almost two years ago, this is a resource leak, and a malicious user could theoretically add to the table indefinitely, not that that's likely.
Nov 15 2015
Nov 12 2015
Oct 29 2015
Oct 27 2015
Sorry, it was only a little after commenting that I clued into how old this task was and that there was a newer one. That's certainly a novel idea.
Oct 26 2015
Actually, a List module would probably make even more sense. (D'oh!)
Behaviourally, this would make more sense as a Meta module. That still leaves it as a query module, for whatever internal reasons there are for that, but gets it out of the prop space where it really doesn't belong at all. I'm not 100% sure of this, but it might even still be able to inherit from ApiQueryImageInfo, with little or no change.
Oct 15 2015
Sep 11 2015
Looking around some more, it seems not to be a MediaWiki-specific issue at all. This Reddit thread gives some insight: https://www.reddit.com/r/chrome/comments/3j1oqk/beta_or_canary_users_have_versions_later_than_44/
I'm getting this error repeatedly on mediawiki.org. On most attempts, nothing loads at all and I get the above error. Sometimes, I'll get the requested page, but with no CSS at all. So far, I'm not getting the error on any other site, including the original Commons link. Exiting and restarting the browser *might* help, but that could just be a fluke.
Jul 13 2015
I'm not sure if normalizing on its own is sufficient. There might need to be a "normalized" block as well, like there is with a page query, so you can map the input value to the output value.
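For comparison, the page-query version of that block looks roughly like this in the JSON output (from memory):

  "normalized": [
      { "from": "project:api", "to": "Project:API" }
  ]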
Jun 18 2015
Jun 17 2015
@TTO: Yes, I can see where you're coming from. One of the pitfalls of working in a different language is that I see the JSON as data to be parsed, not as a fully realized data object, as it obviously would be when working in JavaScript. As Anomie says, though, even in JS, which way is easiest to deal with depends on usage. In the end, at least for me, the difference is minor, so if the decision is to go back to an object, and possibly remove the id field as redundant, I can deal with that.
@TTO: Not at all. I was assuming that people would map the JSON to a more useful collection. It never occurred to me that anyone would actually work with the JSON directly without parsing it. In the context of parsing, in most languages, a key-value pair is slightly more work to handle than having all the data in the same place (i.e., in one element of the array), so it seemed to me that the logical way to go was to convert it to an array.
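To make the two shapes concrete (the field names here are invented for illustration):

  Keyed object, where clients index directly by key:
      { "alpha": { "value": 1 }, "beta": { "value": 2 } }
  Array, where the key moves inside each element as an "id" field:
      [ { "id": "alpha", "value": 1 }, { "id": "beta", "value": 2 } ]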
Jun 16 2015
Jun 14 2015
I'm not convinced that this is fixed, from what I see in JobQueueDB.php in the latest nightly. I won't claim to understand all the code, but I think jobs that have been attempted but failed are still never being recycled. I can confirm this behaviour as of 1.22, but I don't currently have access to a newer wiki to check against. It seems to be related either to the claimTTL setting (as I originally speculated) or to the fact that the job_token/job_token_timestamp fields are still populated in a failed job.
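What I'd expect the recycling step to look like, very roughly; this is a paraphrase of the intended behaviour, not JobQueueDB's actual code:

  // Paraphrase: clear the claim token on any job whose claim is older than
  // $claimTTL, so another runner can pick the failed job up again.
  $cutoff = $dbw->timestamp( time() - $claimTTL );
  $dbw->update(
      'job',
      [ 'job_token' => '' ],
      [
          "job_token != ''",
          'job_token_timestamp < ' . $dbw->addQuotes( $cutoff ),
      ],
      __METHOD__
  );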
Jun 1 2015
May 27 2015
Actually, I just figured it out. It's the Recent Changes setting that controls it. In earlier versions (confirmed up to 1.22), the preferences menu text makes it sound like the setting only controls Recent Changes when, in fact, it controls several things, including logs, as the more modern preferences menu makes clear.
Apr 13 2015
Mar 23 2015
Maybe rather than trying to use the PHP-side docs as the primary source on the wiki, we should change it to a "see live docs" link that you can click if the wiki page appears to be out of date. I think that would give us the best of both worlds: the wiki can still document all the historical info and provide any custom formatting or topic-specific notes that might be required, while the live docs can be used, as Anomie says, as a quick reference.
Would splitting modules into functional and documentation submodules be a useful model here? The API itself could then remain light and fairly clean, only linking to the documentation if needed, while documentation submodules would sit off to the side, accessible by the documentation routines to allow documenting the older stuff. The one concern that jumps out at me is that, depending on how it's done, lazy programmers could fail to document current and new features at all. This is already a problem with the wiki documentation vs. the live modules, and obviously we don't want to simply move that problem to another venue.
Mar 2 2015
The filesize can never be bigger than $wgMaxUploadSize, so they can't keep increasing it indefinitely.
How does this differ from someone just stashing a lot of 1-byte files? Or uploading only the first chunk for many different filenames?
Mar 1 2015
I can confirm that this behaviour still occurs as of 1.24.