Sat, Jun 12
Clearly, importing *all* languages (lato sensu) from Wikidata doesn't seem to be a good idea. But on the other hand, it would make sense to import some "important" missing languages. Between 600 and 10,000, there is probably a sensible middle ground.
Fri, Jun 11
Thu, May 27
I changed the task a bit. This kind of tool is called a generator (see Help:Create a new generator on lingualibre).
@Theklan: on the third screen of the RecordWizard, there is a box at the bottom right called ExternalTools.
This tool allows you to add a SPARQL query, for instance a query on Lexemes.
It's not 100% direct, but I think it's good enough, isn't it?
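For illustration, a generator query of that kind might select the forms of all Lexemes in a given language. The sketch below only builds the query text; the prefixes (`dct:`, `ontolex:`, `wd:`) are the standard ones of the Wikidata Query Service, but the overall query is an assumption of mine, not the actual generator used on Lingua Libre.

```python
# Hypothetical sketch: build a SPARQL query listing the form
# representations of Lexemes in a given language, similar to what
# the ExternalTools box of the RecordWizard could consume.

def lexeme_forms_query(language_qid: str, limit: int = 100) -> str:
    """Return a SPARQL query for Lexeme forms in the given language
    (e.g. 'Q12107' for Breton). Illustrative only."""
    return f"""SELECT ?lexeme ?form ?representation WHERE {{
  ?lexeme dct:language wd:{language_qid} ;
          ontolex:lexicalForm ?form .
  ?form ontolex:representation ?representation .
}} LIMIT {limit}"""

print(lexeme_forms_query("Q12107"))
```

Pasting the generated text into the ExternalTools box (or the Wikidata Query Service, to test it) would then list the recordable forms.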
Wed, May 26
Yes, thanks a lot @Poslovitch !
May 17 2021
ping @Ladsgroup ?
May 11 2021
Yes, the challenge itself is only 2 hours (it's already long enough, I can attest to that :P).
I think it's important to prepare a page explaining the principle of the challenge, and a list of people who want to participate.
May 2 2021
It looks great:
- there was a small typo: I forgot a comma, it is "dezhañ, dezhi" and not "dezhañdezhi"
- some inflected forms are quite rare, but that is to be expected
Apr 29 2021
+1, I volunteer and I'll be very glad to help reproduce this fun activity.
Apr 27 2021
The history says it was added by @0x010C (https://lingualibre.org/index.php?title=MediaWiki:Common.css&diff=256393&oldid=256389), so I guess we won't find out any time soon...
Apr 26 2021
Apr 24 2021
Apr 23 2021
Yes, we noticed the path inconsistency when upgrading Lingua Libre to MediaWiki 1.35, as it broke some tools (including but not limited to Blazegraph).
Apr 21 2021
FYI, I just created a first short draft for stopwords: https://github.com/belett/Breton_lexicography/blob/main/Stopwords%20br
Apr 19 2021
https://www.wikidata.org/wiki/User:Teester/EntityShape.js is not exactly the tool described in this task, but it's an example of a « simple tool for checking Wikidata ».
Apr 6 2021
Mar 25 2021
Mar 17 2021
I don't think we really have a tool for that.
Mar 15 2021
Feb 28 2021
Feb 25 2021
The problem came from old corrupted files (the File pages were still there, but not the files themselves).
Once these files were deleted, the error message disappeared.
Maybe there are still some lost files somewhere, but I think we can close this task.
Feb 21 2021
Sadly I know the problem, not the solution...
@Yug creating lexemes would be very nice but quite difficult. "Language + form" is not enough; at the very least, the lexical category is mandatory to create a Lexeme.
Other data are needed to determine whether the lexeme already exists and is the same (for instance in cases like "fils" (threads, L10371) vs. "fils" (son, L15917), or "tour" L2330 vs. "tour" L2331).
How could we solve these problems?
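A minimal sketch of that matching problem: with only "language + lemma", the homographs above cannot be told apart, so an automatic tool can at best detect the ambiguity and hand it to a human. The data structure and helper below are hypothetical, not an existing Lingua Libre or Wikidata API.

```python
# Hypothetical matching step before lexeme creation: if several known
# lexemes share the same language and lemma, automatic creation or
# linking is unsafe and a human must disambiguate.

def find_candidates(lexemes, language, lemma):
    """Return all known lexemes sharing this language and lemma."""
    return [lx for lx in lexemes
            if lx["language"] == language and lx["lemma"] == lemma]

# Illustrative records, mirroring the fr "fils" example above.
known = [
    {"id": "L10371", "language": "fr", "lemma": "fils", "gloss": "threads"},
    {"id": "L15917", "language": "fr", "lemma": "fils", "gloss": "son"},
]

matches = find_candidates(known, "fr", "fils")
if len(matches) > 1:
    # Ambiguous: the lexical category and gloss alone may not even
    # suffice (both "fils" lexemes are nouns), so flag for review.
    print("ambiguous:", [m["id"] for m in matches])
```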
Jan 29 2021
I also think a community consultation is not needed: this is an old historic file that makes no sense for most Wikisources.
Jan 28 2021
Jan 24 2021
Yes, the problem is still here and open.
Jan 23 2021
This is a great idea.
Jan 13 2021
Jan 5 2021
I can confirm that it seems to be fixed.
Nov 20 2020
Sep 30 2020
Sep 10 2020
Just a quick update: we definitely need the distinction between "ñ" and "n".
It's quite rare, but there are some pairs of words where the tilde is the only distinction, for instance mañ ("this") and man ("he/she stays", but also "moss").
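To illustrate why the distinction must be preserved: a naive diacritic-stripping normalisation (NFD decomposition, then dropping combining marks) collapses exactly this minimal pair. This is only a sketch of the pitfall, not code from any tool discussed here.

```python
# Show that stripping diacritics wrongly merges the Breton minimal
# pair "mañ" ("this") and "man" ("he/she stays"; also "moss").
import unicodedata

def strip_diacritics(word: str) -> str:
    """Decompose to NFD and drop combining marks (a lossy operation)."""
    decomposed = unicodedata.normalize("NFD", word)
    return "".join(ch for ch in decomposed
                   if not unicodedata.combining(ch))

assert "mañ" != "man"                                       # distinct words...
assert strip_diacritics("mañ") == strip_diacritics("man")   # ...wrongly merged
```

Any matching or deduplication step therefore has to compare the words with the tilde intact (e.g. after NFC normalisation), never on a diacritic-stripped key.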
Sep 4 2020
Aug 26 2020
Aug 21 2020
Maybe we can store the list somewhere online (on a wiki page? on a pad? either is fine for me) and do a collective review (strike the words that are not really stopwords; for instance, the numbers are not really stopwords, and there are some adverbs too that I'm not sure we should keep).
Jun 12 2020
Jun 4 2020
I strongly agree.
May 7 2020
Hi @Aklapper, go to any page on any Wikimedia project and you'll see that some labels are not retrieved, and the Wikidata Q identifier (or P identifier) is displayed instead.
Apr 27 2020
Thanks for the merge @Charlotte
Mar 12 2020
Feb 19 2020
@Gehel very true. That said, it won't hurt performance either, and what about all the other problems of not being able to stop a query? (I hate it when I have to restart my browser and/or computer that froze just because I dumbly forgot to remove a wdt:P279*.)
Feb 5 2020
Sorry about the ping then.
Hello, is someone working on this? Thanks.
Jan 11 2020
Dec 10 2019
More exactly: the code fr-ca works fine (everywhere, AFAIK), but for some reason the name "français canadien" doesn't appear in the list (anywhere, AFAIK). I think this is a different and separate bug, and a new ticket would be more appropriate.
Nov 24 2019
Nov 12 2019
To be more precise: apparently (from what I've seen), it appears only in English and French, and only for the P195 property.
Nov 5 2019
Oct 7 2019
Sep 20 2019
Sep 18 2019
Aug 31 2019
Aug 15 2019
FYI, there is a table at the hackathon working on it right now, at least looking at the first possible obstacle.
Aug 14 2019
And now it seems that http://wikidata.rawgraphs.io/ doesn't work at all anymore :/
If it's not temporary then we should probably remove the link altogether :(
Aug 7 2019
Jul 31 2019
It seems to be fixed, doesn't it?
Jul 25 2019
Ok, I understand. I was just suggesting this because "incubated" wikis are the step just before "small" wikis.
Could I suggest that some people look at/work on T212881? (not "small wikis" per se, but pre-small wikis ;) )
May 13 2019
May 3 2019
Apr 16 2019
Apr 15 2019
Hi, I just tested this new dashboard. The visualisations are great, but I'm more of a number cruncher myself.
Feb 24 2019
For the record, last December, after https://www.wikidata.org/wiki/Wikidata:Property_proposal/Astronomical_coordinates, I added celestial coordinates on M31: https://www.wikidata.org/wiki/Q2469#P625 (2 months later, nobody seems to have complained)
Feb 10 2019
Did someone do something?
Jan 12 2019
Indeed, it's a bit strange. Without two sets of brackets, it could be:
- pizza (Italian: Italian dish)
- pizza (Italian / Italian dish)
- pizza (Italian dish)<sup>Italian</sup>
Jan 11 2019
Jan 6 2019
I think the most important element is the lemma, so I would put it first. But I'm not sure how not to mix the gloss and the language:
- Mutter (German, female parent)
Or maybe better:
- Mutter (female parent) (German)
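To compare the two orderings concretely, here is a hypothetical formatter; the function name, field names, and style labels are mine, not an existing interface:

```python
# Sketch of the two display formats discussed above for a lexeme
# lemma, its gloss, and its language.

def format_lexeme(lemma: str, language: str, gloss: str,
                  style: str = "gloss_first") -> str:
    if style == "gloss_first":
        # e.g. Mutter (female parent) (German)
        return f"{lemma} ({gloss}) ({language})"
    # "lang_first", e.g. Mutter (German, female parent)
    return f"{lemma} ({language}, {gloss})"

print(format_lexeme("Mutter", "German", "female parent"))
print(format_lexeme("Mutter", "German", "female parent",
                    style="lang_first"))
```

The gloss-first form keeps the disambiguating information next to the lemma, at the cost of two parenthesised groups in a row.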
Jan 4 2019
Adding the language would not entirely solve the problem, but I think it would be a good thing nonetheless.
Dec 24 2018
I'm guessing that when Lea says Lexemes (with an uppercase L), she means Lexemes in Wikidata (which has already been partially done).
Dec 11 2018
Oh, that's strange: it was apparently just a temporary bug. Now it works.
Nov 18 2018
Some of it happened for sure, but probably not all.
Aug 31 2018
Another idea: hover the cursor over a word and get translations (for people learning a language).
Aug 10 2018
Jun 21 2018
A small story to show why this is important (at least to me) and should be fixed quickly (in my opinion).
Jun 14 2018
On Firefox 60: broken
On Chromium: everything works as expected
Jun 4 2018
Yes, I hadn't thought about it, but commas should be i18n-ed: "،" for Arabic and Persian, "、" for Chinese and Japanese (and maybe others for other languages, but I don't know them; is there a list of i18n commas somewhere?).
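A minimal sketch of what such per-language separators could look like, assuming a hand-maintained mapping (CLDR list patterns would be the authoritative source; the fallback to "," and the spacing rules here are my assumptions):

```python
# Hypothetical per-language list separators; only the four languages
# mentioned above are covered, everything else falls back to ",".
SEPARATORS = {
    "ar": "،",   # Arabic comma
    "fa": "،",   # Persian comma
    "zh": "、",  # ideographic comma
    "ja": "、",
}

def join_items(items, lang):
    """Join items with the comma of the given language code.
    CJK commas take no trailing space; others get one."""
    sep = SEPARATORS.get(lang, ",")
    if lang not in ("zh", "ja"):
        sep += " "
    return sep.join(items)

print(join_items(["a", "b", "c"], "ja"))
print(join_items(["a", "b", "c"], "en"))
```

A real implementation would presumably pull this from the MediaWiki i18n messages (or CLDR) rather than hard-coding it.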
Jun 1 2018
As explained in my previous message, I agree we need to specify at least language and script. For the rest (country, orthography reform, ...), I think the best way to store this kind of information is to use properties on the lexeme itself. The advantage of properties is that they are really flexible, so we can decide a posteriori what kind of information we want to store in a given lexeme.
May 30 2018
See also the broader related ticket: T195740
May 28 2018
May 26 2018
Yes for a link (I forgot about it, shame on me), and yes too for multiple lexemes (especially "tour"@fr, which will probably have 3 lexemes with the same features, for the equivalents of "tower"@en, "round"@en and "pottery wheel"@en).
Here is a proposal of what the warning message could look like: