Sat, Jun 5
I have been semi-regularly migrating occasional additions of labels/descriptions/aliases in the language codes noted in my original comment since that comment, in addition to a "no" to "nb" migration with @jhsoby's approval. The sooner these codes can be disallowed, the less work this will be for everyone.
May 11 2021
Apr 28 2021
With Edge 90.0.818.49 on a 64-bit Windows system, I am able to get an item suggestion box to appear using the given steps.
Apr 19 2021
Apr 18 2021
Apr 7 2021
Apr 2 2021
Mar 20 2021
Mar 18 2021
I would like to assemble a JSON string with which to generate a lexeme using a single wbeditentity call.
Mar 17 2021
While I am not opposed to this language code request, @FrancisTyers, I do wonder what plans you have for this lexeme language code and the others you have requested, given that as yet you have not made any edits in the realm of lexicographical data on Wikidata in *any* language (as part of a general absence from Wikidata to speak of) and have not noted anyone who is planning to work with these languages.
Mar 15 2021
Feb 20 2021
Feb 16 2021
Feb 12 2021
(@Mbch331 in your next set of patches for language codes, you should fix the typo in the name of the interface messages for the Rohingya code--both "en.json" and "qqq.json" use "rhog" instead of the correct "rohg".)
Feb 11 2021
Some thoughts I had upon learning of this task (which may or may not be agreed with, but so it may go):
Feb 4 2021
(never mind; was unaware of developments in T271342 since later in the day on the 29th)
Jan 29 2021
I should note that this has been happening for a while with a number of lexemes as well (such as L401588).
Per https://www.wikidata.org/w/index.php?title=Topic:W2fqtsb5r9css6vf&topic_showPostId=w2ftxgg8qbtux7h4#flow-post-w2ftxgg8qbtux7h4 this ticket can be safely closed for now.
Jan 22 2021
This seems to be part of a general problem of language fallback not working when searching for grammatical features. When using Bengali as the interface language, for example, searching for "singu" does not return "singular" as one might expect; one has to search for "একবচন" to get the same item.
In the case of Chakma, there are cases of resources, such as those hosted by https://github.com/kalpataruboiChakma/, using both the Bengali script and the Chakma script, between which maintaining correspondences in lemmata/forms with different representations would be useful. Having ccp unqualified (in line with the script used on Incubator) and ccp-beng as language codes would thus not be controversial in my view.
Jan 21 2021
Jan 20 2021
In the interest of not having this stalled indefinitely, I've removed the request for syl-sylo from this ticket.
Jan 8 2021
I'd like to shamelessly plug the middle thirty minutes of https://youtu.be/mzqX5iTfzb4 and the entirety of https://youtu.be/DFG5yEZLfC8 as some prior introductions to lexeme editing. I would be more than happy to work on the third (hopefully much more refined) version of such a thing.
Jan 3 2021
Dec 26 2020
Dec 23 2020
I'd just like to note that the Suppress-Script value for Korean according to the official subtag registry is in fact Kore (meaning ko-Kore as a code is redundant in the eyes of a number of organizations).
Dec 22 2020
@ObyEzeilo It is already possible to add Igbo monolingual text values; see for example https://www.wikidata.org/wiki/Q33578#P1705 where two "native label"s (a monolingual text type property) are given.
Dec 14 2020
Dec 13 2020
Dec 11 2020
When opening this ePUB in Calibre's ebook editor, I notice that bold text appears to be rendered properly using fonts available on the system (in other words, the directive within the ebook's main.css to set all text in the body to FreeSerif does not seem to affect bold text), while all other text is rendered using FreeSerif as would otherwise be expected with such CSS. Not sure if this is an e-reader problem or a book problem.
Dec 10 2020
Dec 9 2020
The new fonts do resolve this issue, save for the autogenerated first page in which conjuncts are still malformed.
Nov 29 2020
Nov 22 2020
Nov 19 2020
I was referring to the former of these; the link to the dataset folder provided on that page used to have label/description/alias stats going back to March for various languages in CSV files as late as last week. Now that page doesn't load (which I can understand being the case right now), but the same folder now only has statistics in those CSV files going back to 28 September.
Are the statistics from before 28 September still available? (They were there when I visited the charts last week, but the current files in the folder containing these figures seem to have lost them.)
Nov 8 2020
Oct 12 2020
I must agree with Bodhi here that having a code for sat-olck is still necessary, as it is not guaranteed that Santali speakers outside of India will be able to read it. "Official" in India need not mean "official" in the other countries in which the language is spoken, as a closer read of the article on the language should indicate. Besides, we already have separate language codes for a particular language and the scripts in which it is written, including the "default" (such as kk and kk-arab, kk-cyrl, kk-latn, or iu and ike-cans, ike-latn, and similarly for ks, ku, tg, and ug), so I don't see a problem with continuing this trend in the interest of preventing ambiguity.
Sep 20 2020
I have recently migrated all uses of "bat-smg", "bh", "fiu-vro", "roa-rup", "zh-classical", "zh-min-nan", and "zh-yue" on labels, descriptions, and aliases to "sgs", "bho", "vro", "rup", "lzh", "nan", and "yue" respectively, so now would be a great time to at least disallow those language codes.
Sep 13 2020
Sep 10 2020
Sep 9 2020
@Amire80 To my knowledge it is not mandated anywhere that all variants of the representation of a lexeme lemma/form must use Q number private use subtags, but rather such uses are possible if other existing subtags within BCP47 cannot adequately indicate the necessary differences. The indication of Japanese written in different scripts can already be done with the BCP47 script subtag, so *within the scope of language codes* the items I mentioned which are currently being used for those indications are redundant. Also, as I noted above, the distinction between kyujitai and shinjitai does not lend itself to a non-private-use indicator within the set of possible "ja" language tags, so this task is not meant to discourage the use of those private use subtags in that case.
Sep 8 2020
Sep 7 2020
What time on September 1st was this? According to https://lists.wikimedia.org/pipermail/commons-l/2020-August/008161.html it appears the data is reloaded every Tuesday around 9am UTC; perhaps after two days your changes will then manifest in the query result.
Sep 6 2020
Aug 31 2020
This applies to the Mattermost mobile apps (for iOS and Android).
Aug 30 2020
Now that the only patch tied to this ticket has been abandoned, and now that the Commons Query Service beta uses the sdc: prefix, can this ticket be closed?
Aug 27 2020
Jul 30 2020
(As a note, a patch for this task was already made at https://gerrit.wikimedia.org/r/616609, although it is likely that it did not show up here due to the absence of the line "Bug: T258982" in the patch description.)
Jul 27 2020
Jul 12 2020
Jul 10 2020
@Jdlrobson I was under the impression, based on "Assuming you do not want to do a big redesign and just want to retain the existing main page design, you can follow this guide." in the link given under "Option 2" in the task description, that simply rewriting the main page so that it does not use tables and inline styles was sufficient for remedying the issue put forth by this task. If there is some other requirement for mobile main pages that I am missing, I'm happy to incorporate that.
Jul 9 2020
May 27 2020
May 24 2020
Have you tried adding a space between the '"12"' and the ']'?
May 15 2020
May 6 2020
Are you sure this isn't due to the presence of "[[User:Frettie/consistency check add.js]]" in your common.js?
Apr 25 2020
Apr 10 2020
Apr 9 2020
Mar 23 2020
This does not yield a number that I would expect; for a language like Khowar (khw), which is not updated frequently (or at all?) on Wikidata, I obtained a number close to 270k when searching "haslabel:khw" today, which compared to the numbers in this table for that language doesn't make much sense.
I have a query, https://quarry.wmflabs.org/query/18763, that returns label/description/alias statistics for a set of languages (also mentioned in T197161#4300061). This was derived from a query used by @Pasleim to update this table (which worked until the end of May) and its South Asian counterpart (which worked up until wb_terms updates got turned off). I tried to rewrite the Quarry query to use the new databases (https://quarry.wmflabs.org/query/41692), but running this has not yet succeeded, either using Quarry or directly on tools-login. I am not sure whether this rewritten query can be simplified beyond what's written at present, so I was hoping there might be a better way of obtaining these statistics via SQL that does not presently time out.
Mar 20 2020
The use of the API works up to a point. I am noticing that for GeoJSON files at or above 250KB I'm getting read timeouts when using Pywikibot. Any way to get past those errors?
Jan 14 2020
Jan 7 2020
Dec 30 2019
Dec 16 2019
Nov 5 2019
So somehow @Sic19 managed to circumvent the size limit check with "Data:Canada/Nunavut.map" (I suppose AWB is old enough that it's not sensitive to the workings of tabular data). Is there a way to make the size limit check apply to the compacted data?
Oct 26 2019
As a note, this is desperately needed for a number of Indian languages (including Bengali) in which the standardization of spellings according to one standard or another does not preclude other spellings from being considered acceptable (even in the absence of some characteristic unifying the alternative spellings).
Sep 14 2019
Sep 13 2019
Aug 25 2019
@StevenJ81 what are your thoughts on this request?
Aug 10 2019
32 months later, @Yurik, what's the status regarding implementing a new storage architecture for datasets (assuming that a stopgap measure such as uploading JSON in compact formats is somehow not tenable)? T200968 has officially opened up the floodgates for the upload of larger datasets, but there is still the issue, even when one does split up the data into discrete chunks, of overshooting this 2MB limit. Take, for example, the boundaries of https://www.wikidata.org/wiki/Q338425: how does one properly split the data up into small chunks when the borders of its constituent elements (which I'm sure people would upload separately if those constituent element borders formed a partition, in the set-theoretic sense, of the district) are not known?