User Details
- User Since
- Nov 12 2016, 7:19 AM (224 w, 1 d)
- Availability
- Available
- IRC Nick
- mahir256
- LDAP User
- Unknown
- MediaWiki User
- Mahir256 [ Global Accounts ]
Sat, Feb 20
In the lists in the linked JavaScript file for "bn" and "bpy", the entry for "system" is not present (unlike, say, in the lists for "af" and "ang"), so pages on bnwikisource are not rendered with the system font for non-logged-in users; they default to the Siyam Rupali font instead, since it is the first and only font in that list. This task was filed to rectify that situation, so that system fonts would still render by default for non-logged-in users in languages whose font.ini lists contain wildcards (the * symbol).
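A minimal sketch of the default-selection behaviour being described, with an illustrative (not actual) pair of font lists:

```python
# Illustrative only: simplified per-language font lists resembling the
# ones in the linked JavaScript file; the entries are placeholders.
FONT_LISTS = {
    "af": ["system", "OpenDyslexic"],   # "system" present, so it is the default
    "bn": ["Siyam Rupali"],             # no "system" entry
}

def default_font(lang):
    """The first entry in a language's list ends up as the default."""
    fonts = FONT_LISTS.get(lang, ["system"])
    return fonts[0]

print(default_font("af"))  # system
print(default_font("bn"))  # Siyam Rupali -- the behaviour reported above
```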
Tue, Feb 16
Fri, Feb 12
(@Mbch331 in your next set of patches for language codes, you should fix the typo in the names of the interface messages for the Rohingya code: both "en.json" and "qqq.json" use "rhog" instead of the correct "rohg".)
Thu, Feb 11
Some thoughts I had upon learning of this task (which may or may not meet with agreement, but so be it):
Thu, Feb 4
(never mind; I was unaware of the developments in T271342 from later in the day on the 29th)
Jan 29 2021
I should note that this has been happening for a while with a number of lexemes as well (such as L401588).
Per https://www.wikidata.org/w/index.php?title=Topic:W2fqtsb5r9css6vf&topic_showPostId=w2ftxgg8qbtux7h4#flow-post-w2ftxgg8qbtux7h4 this ticket can be safely closed for now.
Jan 22 2021
This seems to be part of a general problem of language fallback not working when searching for grammatical features. When using Bengali as the interface language, for example, searching for "singu" does not return "singular" as one might expect; one has to search for "একবচন" to get the same item.
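A rough sketch of how the comparison can be reproduced against the API, assuming the same entity-search endpoint (wbsearchentities) is what backs the selector in question:

```python
import requests

# Sketch: compare entity-search results for a grammatical-feature term
# across search languages via wbsearchentities.
API = "https://www.wikidata.org/w/api.php"

def search(term, language):
    params = {
        "action": "wbsearchentities",
        "search": term,
        "language": language,  # language whose labels/aliases are matched
        "uselang": language,
        "type": "item",
        "format": "json",
    }
    return requests.get(API, params=params).json().get("search", [])

# With English, "singu" matches "singular"; with Bengali, apparently only
# the Bengali label itself does.
for lang, term in [("en", "singu"), ("bn", "singu"), ("bn", "একবচন")]:
    print(lang, term, [hit.get("label") for hit in search(term, lang)[:3]])
```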
In the case of Chakma, there are resources, such as those hosted at https://github.com/kalpataruboiChakma/, that use both the Bengali script and the Chakma script; maintaining correspondences between lemmata/forms with different representations in the two scripts would be useful. Having ccp unqualified (in line with the script used on Incubator) and ccp-beng as language codes would thus not be controversial in my view.
Jan 21 2021
Jan 20 2021
In the interest of not having this stalled indefinitely, I've removed the request for syl-sylo from this ticket.
Jan 8 2021
I'd like to shamelessly plug the middle thirty minutes of https://youtu.be/mzqX5iTfzb4 and the entirety of https://youtu.be/DFG5yEZLfC8 as some prior introductions to lexeme editing. I would be more than happy to work on the third (hopefully much more refined) version of such a thing.
Jan 3 2021
Dec 26 2020
Dec 23 2020
I'd just like to note that the Suppress-Script value for Korean according to the official subtag registry is in fact Kore (meaning ko-Kore as a code is redundant in the eyes of a number of organizations).
Dec 22 2020
@ObyEzeilo It is already possible to add Igbo monolingual text values; see for example https://www.wikidata.org/wiki/Q33578#P1705, where two "native label" values (a monolingual-text property) are given.
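For instance, here is a minimal Pywikibot sketch of adding such a value; the item, property, and text below are only illustrative, and any other monolingual-text property works the same way:

```python
import pywikibot

# Sketch: add an Igbo ("ig") monolingual text value to a monolingual-text
# property. The item, property, and text are illustrative placeholders.
site = pywikibot.Site("wikidata", "wikidata")
repo = site.data_repository()

item = pywikibot.ItemPage(repo, "Q33578")   # Igbo (the language item)
claim = pywikibot.Claim(repo, "P1705")      # native label
claim.setTarget(pywikibot.WbMonolingualText(text="Asụsụ Igbo", language="ig"))
item.addClaim(claim, summary="Add a native label in Igbo")
```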
Dec 14 2020
Dec 13 2020
Dec 11 2020
When opening this ePub in Calibre's ebook editor, I notice that bold text appears to be rendered properly using fonts available on the system (in other words, the directive in the ebook's main.css setting all body text to FreeSerif does not seem to affect bold text), while all other text is rendered using FreeSerif, as would otherwise be expected with such CSS. I'm not sure whether this is an e-reader problem or a book problem.
Dec 10 2020
Dec 9 2020
The new fonts do resolve this issue, save for the autogenerated first page, on which conjuncts are still malformed.
Nov 29 2020
Nov 22 2020
@Loominade Any particular reason why P2125 (Revised Hepburn romanization) can't be added to forms as appropriate, in lieu of this language code?
Nov 19 2020
I was referring to the former of these; as late as last week, the dataset folder linked on that page contained label/description/alias statistics in CSV files going back to March for various languages. Now that page doesn't load (which I can understand being the case right now), but the same folder only has statistics in those CSV files going back to 28 September.
Are the statistics from before 28 September still available? (They were there when I visited the charts last week, but the current files in the folder containing these figures seem to have lost them.)
Nov 8 2020
Oct 12 2020
I must agree with Bodhi here that having a code for sat-olck is still necessary, as it is not guaranteed that Santali speakers outside of India will be able to read it. "Official" in India need not mean "official" in the other countries in which the language is spoken, as a closer read of the article on the language should indicate. Besides, we already have separate language codes for a particular language and the scripts in which it is written, including the "default" one (such as kk alongside kk-arab, kk-cyrl, and kk-latn, or iu alongside ike-cans and ike-latn, and similarly for ks, ku, tg, and ug), so I don't see a problem with continuing this trend in the interest of preventing ambiguity.
Sep 20 2020
I have recently migrated all uses of "bat-smg", "bh", "fiu-vro", "roa-rup", "zh-classical", "zh-min-nan", and "zh-yue" in labels, descriptions, and aliases to "sgs", "bho", "vro", "rup", "lzh", "nan", and "yue" respectively, so now would be a great time to at least disallow those language codes.
I have recently moved all uses of the code "zh-classical" for labels/aliases and descriptions to use "lzh" instead, if this helps anyone.
Sep 13 2020
Sep 10 2020
Sep 9 2020
@Amire80 To my knowledge it is not mandated anywhere that all variants of the representation of a lexeme lemma/form must use Q-number private use subtags; rather, such uses are possible if other existing subtags within BCP47 cannot adequately indicate the necessary differences. The indication of Japanese written in different scripts can already be done with the BCP47 script subtag, so, strictly within the scope of language codes, the items I mentioned which are currently being used for those indications are redundant. Also, as I noted above, the distinction between kyujitai and shinjitai does not lend itself to a non-private-use indicator within the set of possible "ja" language tags, so this task is not meant to discourage the use of those private use subtags in that case.
Sep 8 2020
Sep 7 2020
What time on September 1st was this? According to https://lists.wikimedia.org/pipermail/commons-l/2020-August/008161.html it appears the data is reloaded every Tuesday around 9am UTC; perhaps after two days your changes will then manifest in the query result.
Sep 6 2020
Aug 31 2020
This applies to the Mattermost mobile apps (for iOS and Android).
Aug 30 2020
Now that the only patch tied to this ticket has been abandoned, and now that the Commons Query Service beta uses the sdc: prefix, can this ticket be closed?
Aug 27 2020
Jul 30 2020
(As a note, a patch for this task was already made at https://gerrit.wikimedia.org/r/616609, although it is likely that it did not show up here due to the absence of the line "Bug: T258982" in the patch description.)
Jul 27 2020
Jul 12 2020
Jul 10 2020
@Jdlrobson I was under the impression, based on "Assuming you do not want to do a big redesign and just want to retain the existing main page design, you can follow this guide." in the link given under "Option 2" in the task description, that simply rewriting the main page so that it does not use tables and inline styles was sufficient for remedying the issue put forth by this task. If there is some other requirement for mobile main pages that I am missing, I'm happy to incorporate that.
Jul 9 2020
May 27 2020
May 24 2020
Have you tried adding a space between the '"12"' and the ']'?
May 15 2020
May 6 2020
Are you sure this isn't due to the presence of "[[User:Frettie/consistency check add.js]]" in your common.js?
Apr 25 2020
Apr 10 2020
Apr 9 2020
Mar 23 2020
This does not yield a number that I would expect: for a language like Khowar (khw), which is not updated frequently (or at all?) on Wikidata, searching "haslabel:khw" today gave me a figure close to 270k, which does not make much sense compared to the numbers in this table for that language.
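A minimal sketch of pulling that total from the search API (assuming the haslabel keyword behaves the same way there as it does in the on-wiki search box):

```python
import requests

# Sketch: ask the Wikidata search API for the total number of hits of a
# CirrusSearch query such as "haslabel:khw".
API = "https://www.wikidata.org/w/api.php"

def total_hits(query):
    params = {
        "action": "query",
        "list": "search",
        "srsearch": query,
        "srinfo": "totalhits",
        "srlimit": 1,
        "srprop": "",
        "format": "json",
    }
    data = requests.get(API, params=params).json()
    return data["query"]["searchinfo"]["totalhits"]

print(total_hits("haslabel:khw"))
```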
I have a query, https://quarry.wmflabs.org/query/18763, that returns label/description/alias statistics for a set of languages (also mentioned in T197161#4300061). This was derived from a query used by @Pasleim to update this table (which worked until the end of May) and its South Asian counterpart (which worked up until wb_terms updates got turned off). I tried to rewrite the Quarry query to use the new databases (https://quarry.wmflabs.org/query/41692), but running this has not yet succeeded, either using Quarry or directly on tools-login. I am not sure whether this rewritten query can be simplified beyond what's written at present, so I was hoping there might be a better way of obtaining these statistics via SQL that does not presently time out.
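In case it helps anyone looking at a rewrite, this is roughly the shape of join I have been attempting against the new term store (wbt_*) tables; the table and column names reflect my reading of that schema, and the replica host below is a placeholder for whatever the current Toolforge documentation lists:

```python
import os.path
import pymysql

# Rough sketch: count labels/descriptions/aliases for one language against
# the new term store (wbt_*) tables on the wikidatawiki replica. Schema and
# host names are assumptions noted above.
conn = pymysql.connect(
    host="wikidatawiki.analytics.db.svc.wikimedia.cloud",  # placeholder host
    database="wikidatawiki_p",
    read_default_file=os.path.expanduser("~/replica.my.cnf"),
    charset="utf8mb4",
)

SQL = """
SELECT wby_name AS term_type, COUNT(*) AS n
FROM wbt_item_terms
JOIN wbt_term_in_lang ON wbit_term_in_lang_id = wbtl_id
JOIN wbt_type         ON wbtl_type_id = wby_id
JOIN wbt_text_in_lang ON wbtl_text_in_lang_id = wbxl_id
WHERE wbxl_language = %s
GROUP BY wby_name
"""

with conn.cursor() as cur:
    cur.execute(SQL, ("bn",))
    for term_type, count in cur.fetchall():
        print(term_type, count)

conn.close()
```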
Mar 20 2020
Using the API works up to a point: I am noticing that for GeoJSON files at or above 250 KB I get read timeouts when using Pywikibot. Is there any way to get past those errors?
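For reference, a sketch of the kind of upload in question, with a raised client-side socket timeout as one possible (unverified) mitigation; the file and page names are placeholders:

```python
import pywikibot
from pywikibot import config

# Sketch of the kind of save that times out. Raising the client-side
# socket timeout is one possible mitigation, assuming the timeout is not
# imposed server-side. File and page names are placeholders.
config.socket_timeout = 180  # seconds; the default is considerably lower

site = pywikibot.Site("commons", "commons")
page = pywikibot.Page(site, "Data:Example/Some large boundary.map")

with open("some_large_boundary.map.json", encoding="utf-8") as f:
    page.text = f.read()

page.save(summary="Upload GeoJSON map data", minor=False)
```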
Jan 14 2020
Jan 7 2020
Dec 30 2019
Dec 16 2019
Nov 5 2019
So somehow @Sic19 managed to circumvent the size limit check with "Data:Canada/Nunavut.map" (I suppose AWB is old enough that it's not sensitive to the workings of tabular data). Is there a way to make the size limit check apply to the compacted data?
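As an illustration of why the point of measurement matters, the same JSON can differ considerably in byte size depending on whether it is serialized compactly or pretty-printed (the file name below is a placeholder):

```python
import json

# Illustration: byte sizes of the same JSON in compact vs pretty-printed
# form, i.e. the two representations a size limit check could look at.
with open("Nunavut.map.json", encoding="utf-8") as f:
    data = json.load(f)

compact = json.dumps(data, separators=(",", ":"), ensure_ascii=False)
pretty = json.dumps(data, indent=4, ensure_ascii=False)

print("compact:", len(compact.encode("utf-8")), "bytes")
print("pretty: ", len(pretty.encode("utf-8")), "bytes")
```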
Oct 26 2019
As a note, this is desperately needed for a number of Indian languages (including Bengali) in which the adoption of one spelling standard or another does not preclude other spellings from being considered acceptable (even in the absence of some characteristic unifying the alternative spellings).
Sep 14 2019
This error was the result of a title blacklist rule, which has been adjusted. (Once we get to L1000000, it will need to be adjusted again.)
Sep 13 2019
Aug 25 2019
@StevenJ81 what are your thoughts on this request?
Aug 10 2019
32 months later, @Yurik, what's the status regarding implementing a new storage architecture for datasets (assuming that a stopgap measure such as uploading JSON in compact form is somehow not tenable)? T200968 has officially opened the floodgates for the upload of larger datasets, but even when one splits the data into discrete chunks there is still the issue of overshooting this 2MB limit. Take, for example, the boundaries of https://www.wikidata.org/wiki/Q338425: how does one properly split the data into small chunks when the borders of its constituent elements (which I'm sure people would upload separately if those borders formed a partition, in the set-theoretic sense, of the district) are not known?
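For what it's worth, the mechanical side of splitting (one compact page per feature, checked against the limit) is easy to sketch; the real difficulty is the one above, namely that the constituent boundaries themselves are not known. A sketch of the mechanical side, with placeholder file names:

```python
import json

# Sketch of the mechanical side of splitting a FeatureCollection: write one
# compact file per feature and report its size against the 2 MB page limit.
# This does not address the harder problem of unknown constituent borders.
LIMIT = 2 * 1024 * 1024  # 2 MB

with open("district.geojson", encoding="utf-8") as f:
    collection = json.load(f)

for i, feature in enumerate(collection.get("features", [])):
    chunk = {"type": "FeatureCollection", "features": [feature]}
    blob = json.dumps(chunk, separators=(",", ":"), ensure_ascii=False)
    size = len(blob.encode("utf-8"))
    print(f"part {i}: {size} bytes ({'over' if size > LIMIT else 'under'} the limit)")
    with open(f"district_part_{i}.geojson", "w", encoding="utf-8") as out:
        out.write(blob)
```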
Jul 14 2019
Jul 2 2019
Jun 21 2019
The bug, if there is any, is in GeoNames or wherever else @Lsj's bot obtains its geographic information. Ultimately it must be corrected there; otherwise there is still a risk of reintroducing it to Wikidata.
To present a more concrete use case for such functionality, @debt: the infoboxes on articles about localities in India have slots listing the national- and state-level parliamentary constituencies in which they are located, along with the current representatives of those constituencies. These slots, which at one point were prefilled from Wikidata, no longer work, since the misuses of P585 ("point in time") on the Wikidata items for those constituencies that allowed such information to be present have been removed. In the absence of a property linking a Wikidata item about a constituency to an election involving that constituency (to use the example of a national-level constituency in Kolkata, from Q3348171 to Q63988950), it would be helpful to obtain a list of items which link back to the constituency item via something akin to haswbstatement (continuing the previous example, akin to haswbstatement:P1001=Q3348171), so that the most recent election information (from electorate size to successful candidates to numbers of spoilt votes, to name a few facts) could then be obtained with a few more steps.
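A sketch of the kind of lookup I have in mind, using Pywikibot's search generator and assuming a haswbstatement-style keyword is available on the wiki being searched (the property and item are the ones from the example above):

```python
import pywikibot

# Sketch: list items that link back to a constituency item via P1001,
# using a haswbstatement-style search keyword (assumed to be available
# on the wiki being searched). Q3348171 is the constituency item from
# the example above.
site = pywikibot.Site("wikidata", "wikidata")

for page in site.search("haswbstatement:P1001=Q3348171", namespaces=[0], total=50):
    print(page.title())
```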
May 22 2019
May 1 2019
Perhaps this issue of conciseness in the data model is something worth raising with @Smalyshev and @Gehel?
Apr 30 2019
Apr 21 2019
@ArielGlenn It appears that particle physics is a massively collaborative enterprise, so that the results presented in a single paper can have thousands of people behind them, all of whom are credited (hence the particularly large revision size).
Apr 16 2019
@Mrjohncummings @MSantos have any clearer directions for development come out of your February discussion?
Apr 15 2019
Apr 13 2019
Mar 21 2019
Mar 16 2019
https://quarry.wmflabs.org/query/28286 lists all pages in the Page: namespace below 500 bytes, in ascending order of page length (so that the shortest pages in the Page: namespace show up first). The commented lines, if uncommented, will list those pages that have already been proofread but not validated; this is presently based on links to the category "Proofread" rather than page properties, but I'm sure this can be fixed easily.
Mar 14 2019
While I believe it is possible to get a list of such pages via Quarry queries, being able to view these within the special pages themselves would be quite helpful (and not just for the two aforementioned namespaces, and not just for Wikisources either).
Feb 19 2019
I made the circular element for WikiProject India's regular logo (https://commons.wikimedia.org/wiki/File:WikiProject_India_bars.svg) as a substitute for the chakra in the center of the Indian flag, and not initially in the interest of having a logo fit perfectly inside a circular frame. (The logo which is present on the account to which David linked is meant to represent a datathon running from the 21st to the 24th and will be substituted with the regular logo at the datathon's conclusion.)
Feb 18 2019
That looks great!
Feb 17 2019
Is the language that odd? https://en.wikipedia.org/wiki/Okinawan_language