MediaWiki User: Madepossiblebyviewerslikeyou
User Since: Jun 14 2018
Dec 20 2018
Aug 30 2018
One last question about the new REST API (which may be slightly outside the scope of this task discussion):
- for 404 errors that result from the page not being found, how stable are the "type" field containing "https://mediawiki.org/wiki/HyperSwitch/errors/not_found" and the "title" field set to "Not found."? I could just check json['type'] === 'https://mediawiki.org/wiki/HyperSwitch/errors/not_found' or json['title'] === 'Not found.', but those feel like pretty fragile checks if the API ever changes.
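For what it's worth, the check I have in mind would lean on the HTTP status code first and only use those body fields as a secondary confirmation. A sketch, assuming the error body looks like the one above (the function name and structure are mine, not part of any API):

```javascript
// Hypothetical helper: treat the HTTP status as the stable part of the
// contract; the body's "type" and "title" strings may change between
// API versions, so they are only used as a secondary hint.
const NOT_FOUND_TYPE = 'https://mediawiki.org/wiki/HyperSwitch/errors/not_found';

function isPageNotFound(status, body) {
  if (status !== 404) return false;
  // If an error body is present, optionally confirm with the documented fields.
  return !body || body.type === NOT_FOUND_TYPE || body.title === 'Not found.';
}
```

That way, even if the "type" URL or "title" wording changes, a plain 404 is still treated as "page not found".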
Aug 29 2018
How well supported will the REST API be when we find bugs? This isn't the first time I've reported a bug with the API, and I imagine it won't be the last.
For now, I suppose I can just split each page into its own request, but I'm worried about my project's scalability if we do so.
In that case, do you have any plans to add batched support to the REST API for requesting multiple pages at once?
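To make the per-page splitting concrete: on my end it would look roughly like the sketch below, fanning one request out per title and capping how many are in flight. The endpoint path, helper names, and concurrency value are assumptions for illustration, not anything the API documents as a batching mechanism:

```javascript
// Sketch of the client-side workaround: one REST request per title,
// with at most `concurrency` requests in flight at a time.
// REST_BASE and the chunk size are assumptions for illustration.
const REST_BASE = 'https://en.wikipedia.org/api/rest_v1/page/summary/';

function chunk(items, size) {
  const out = [];
  for (let i = 0; i < items.length; i += size) out.push(items.slice(i, i + size));
  return out;
}

async function fetchSummaries(titles, concurrency = 5) {
  const results = [];
  for (const group of chunk(titles, concurrency)) {
    const pages = await Promise.all(
      group.map(t => fetch(REST_BASE + encodeURIComponent(t)).then(r => r.json()))
    );
    results.push(...pages);
  }
  return results;
}
```

This works, but it turns one batched call into N round trips, which is exactly the scalability concern above.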
Hi, I just ran into the same issue with this, and this is breaking one of my projects which used to work.
Jun 18 2018
Are there any plans to add batched-call support to the REST API in future versions? That's really the only blocker for me being able to switch, since we're likely going to need batching more and more as we scale up our product.
I guess my question is whether staying with the MediaWiki TextExtracts API is okay, given that we already have workarounds in place (and switching to the new API means losing the ability to batch calls)?
Oh, interesting. Can we add a note to the TextExtracts documentation that the REST API is preferred? I saw that disclaimer section, but it didn't say anything about this plaintext-extraction bug or about the REST API. Our current workaround is to strip out any text between square brackets.
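For reference, our square-bracket workaround is essentially the one-liner below (the function name is mine; note it only handles non-nested brackets):

```javascript
// Current workaround: drop anything between square brackets that
// TextExtracts leaves in the plaintext output.
// Caveat: this regex does not handle nested brackets.
function stripBracketed(text) {
  return text.replace(/\[[^\]]*\]/g, '');
}
```

It's crude, but it has been good enough for our use case so far.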