May 29 2020
I tried restoring ")" more than 10 times here. Every time I add ) from my smartphone it magically disappears.
Apr 29 2020
The bot recently got approvals on kn, vec, and te. I have applied for approval on some more wikis.
May 11 2019
I don't know how else I could get attention: https://meta.wikimedia.org/wiki/WikiAI
Apr 10 2019
In T121470#5100493, @TheDJ wrote: In T121470#5100476, @Capankajsmilyo wrote: at least bring such backlog to the attention of developers here?
I think it needed to be brought to YOUR attention. The rest of us here are pretty much aware of this ;) Remember, this is a non-profit, aided by a few volunteers. There is no resourcing as with some other major commercial websites.
Anything that isn't part of the annual planning of the Wikimedia Foundation is not scheduled and funded, and depends on volunteers to work on (even though some of those volunteers are WMF employees). That's almost all the tickets, which are thus not resourced and funded, and everything has a gazillion dependencies, short- and long-term ideas, and/or required skill sets. It is difficult to present a 'top 100' for that reason.
This is strange. Even the first task is still unresolved. Can someone please guide me on how backlog clearing is handled in Phabricator?
In T121470#5100480, @PerfektesChaos wrote: Internationalization issues, cultural aspects, script and language, and worldwide impact apply in a similar way to templates and modules.
As long as it is very low-level functionality, used by programmers only to build other templates on top, global basic software may be offered. In Lua this is possible today via libraries, some of which are in use now.
As soon as it is some kind of text, figures, or graphics visible to a reader, going global is not easy, since it needs to be adapted to many requests and formatting details driven by local culture and even community policy.
It sounds pretty: there is a big bag of global things, and a local project has no work to do and may use them without taking care of anything. But this is not how it works in reality. Just displaying a number is a challenge, even more so a number with a measuring unit, which needs a lot of adaptation per project.
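The point about numbers above can be made concrete with a small sketch (illustrative only, not MediaWiki code): even rendering the same value requires per-project conventions for digit grouping and the decimal mark. The separator choices below are examples, not actual wiki configuration.

```python
# Illustrative sketch: the same number rendered under different
# per-project formatting conventions.
def format_number(value: float, conventions: dict) -> str:
    """Format a number using a wiki's grouping/decimal separators."""
    integer, _, fraction = f"{value:,.2f}".partition(".")
    integer = integer.replace(",", conventions["group"])
    return integer + conventions["decimal"] + fraction

enwiki = {"group": ",", "decimal": "."}       # 1,234,567.89
dewiki = {"group": ".", "decimal": ","}       # 1.234.567,89
frwiki = {"group": "\u202f", "decimal": ","}  # narrow no-break space grouping

print(format_number(1234567.89, enwiki))  # 1,234,567.89
print(format_number(1234567.89, dewiki))  # 1.234.567,89
```

And this still ignores non-Latin digit sets, unit names, and plural rules, which is exactly why a "global" formatter needs local adaptation.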
Any progress?
In T121470#5100458, @TheDJ wrote: @Capankajsmilyo When there is no patch and no planning mentioned, you can assume that it is only being discussed at this time. There is no fixed plan to work on this (as there isn't for most of our 20,000 open tickets).
This task was authored in 2006; now it's 2019. This seems like a never-ending process. Any comments on status and roadmap, or should this task be closed as rejected?
In T121470#5100412, @daniel wrote: In T121470#5099782, @Capankajsmilyo wrote: What's taking this so long? Has any decision been made on how to implement this? There are a lot of wikis like sawiki which don't have enough tech contributors. They will be more than happy to adopt it as compared to frwiki which has a huge army of tech people. We can start with allowing sawiki to use modules of enwiki from mediawiki directly. Any comments?
Cross-wiki code sharing is difficult, for security reasons (immediate deployment on a thousand websites, code that gets run in the browsers of millions of people, with no code review), but also because it requires the code to use proper internationalization mechanisms, which are currently not in place, not easy to use, or simply not used.
There is so far no agreement on how exactly this should work. I mean what exactly it should do. Here is the summary from a discussion we had at the Wikimedia Technical Conference last year:
- Making it easier to localize gadgets. Descoped from "tools".
- As a volunteer, I want to reuse gadgets without rebuilding them, in my own project and my own language.
- Research with community needed, on how it would work for them, and how governance would work.
- Need to create a prototype in which we decide where gadgets would be stored and where their definitions will be
- Determine if that fits all use-cases, and then build prototype.
- Build gadget repository, perhaps searchable?
- Update most/all gadgets to use it.
- At the same time, deal with translation issues -- where to store them, how to allow as many people as possible to edit, try to standardize use by gadgets
- Resources: 1-1.5 years to work with community engagement. 6 months of engineering time.
- Concerns: perhaps not big enough for WMF approval. Resourcing perhaps from Platform Evolution or CommTech Wishlist.
See https://www.mediawiki.org/wiki/Wikimedia_Technical_Conference/2018/Session_notes/Working_together_to_develop_our_roadmap. @TheDJ was leading the discussion, perhaps he has some thoughts. @Tgr wrote a prototype for maintaining Gadgets on git a while ago. I really like that idea.
How can I help? I really need such functionality as soon as possible.
What's taking this so long? Has any decision been made on how to implement this? There are a lot of wikis like sawiki which don't have enough tech contributors. They will be more than happy to adopt it as compared to frwiki which has a huge army of tech people. We can start with allowing sawiki to use modules of enwiki from mediawiki directly. Any comments?
Mar 12 2019
What was the result?
@Yurik any progress on bot based one?
Nov 7 2018
In T208827#4726227, @Jdlrobson wrote: Hi @Capankajsmilyo, I'm going to need some more information here!
- What browser are you using (preferably a browser user agent!)
- What happens when you try to look on desktop?
- Do you see the following message when you click the edit icon at the bottom of the screen?
Nov 5 2018
In T121470#4721390, @Yurik wrote: @Capankajsmilyo AFAIK, WMF is not working on this. When I have some time, I will try to set up a bot to make this possible with the existing technology. A typical workflow:
- A template or a module is created on mediawiki.org (MW is better because its community is more dev-focused, whereas Commons tends to be content-focused)
- All localization strings are placed in tabular data on Commons to simplify translation
- Template parameters are also placed in tabular data on Commons
- All strings are used via the Module:TNT
- A well-known infobox is placed at the top of the template/module documentation to indicate that the module is shared between multiple wikis and should not be changed anywhere else.
- A bot looks for all modules/templates on MW.org that have that infobox, and copies them automatically to all other wikis using the sitelink list in Wikidata
- If someone wants to use that template/module in a wiki X, they just need to copy/paste it to the wiki X, and add the sitelink - the bot will automatically keep it up to date from there on.
- If a wiki decides to "fork" their version, they can simply remove the infobox from the docs.
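The core decision step of the bot workflow above can be sketched as pure logic (a hypothetical sketch, not Yurik's bot; the marker template name is invented for illustration): given the central wiki's module text and each subscribed wiki's local copy, the bot updates only copies that are stale and still carry the shared-module marker, i.e. have not been forked.

```python
# Hypothetical marker that documentation pages of shared modules carry.
SHARED_MARKER = "{{Shared template}}"

def wikis_to_update(central_text: str, local_copies: dict) -> list:
    """Return wikis whose local copy differs from central and is unforked.

    A wiki that removed the marker has forked its copy, so the bot
    must leave it alone; a wiki whose text equals central is current.
    """
    return sorted(
        wiki
        for wiki, text in local_copies.items()
        if SHARED_MARKER in text and text != central_text
    )

central = SHARED_MARKER + "\nv2 of the module"
copies = {
    "sawiki": SHARED_MARKER + "\nv1 of the module",  # stale -> update
    "tewiki": central,                               # already current
    "frwiki": "v1 of the module",                    # forked: marker removed
}
print(wikis_to_update(central, copies))  # ['sawiki']
```

In the real workflow the list of subscribed wikis would come from the Wikidata sitelinks rather than a dict, but the fork-detection rule is the same.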
This one seems to be in popular demand, but not much has changed since I first saw this ticket. Can anyone please give an update on the status and what direction we are moving in on this one?
In T208700#4719636, @Aklapper wrote: @Capankajsmilyo: How is this related to the API?
Nov 1 2018
In T208437#4713599, @Yurik wrote: I agree this is needed, but perhaps we can already set this up without waiting for the complicated change in the system. There is one big reason modules and templates differ - language. So if we move translations out of the templates and modules, we can simply copy/paste them without any changes between wikis. Moreover, I think we can even automate that - e.g. any template or module that has some sort of a flag (e.g. an embedded well-known template) will be automatically copied from the central wiki to all other ones that want to use it. I wrote Module:TNT, which allows the translations to be stored on Commons using shared data tables. This way, when you update a message on Commons, all existing templates will automatically start using the new translation.
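The idea behind Module:TNT (which is an on-wiki Lua module) can be sketched in a few lines: messages live in one shared table keyed by message id, with one value per language, and lookups fall back to English. The table contents and message key below are invented for illustration; on Commons the table would be a tabular-data page in the Data: namespace.

```python
# Stand-in for a shared Commons data table of translations.
TRANSLATIONS = {
    "size-label": {"en": "Size", "fr": "Taille", "sa": "परिमाणम्"},
}

def msg(key: str, lang: str, table: dict = TRANSLATIONS) -> str:
    """Fetch a message in `lang`, falling back to English."""
    row = table[key]
    return row.get(lang) or row["en"]

print(msg("size-label", "fr"))  # Taille
print(msg("size-label", "de"))  # no German entry, falls back to: Size
```

Because every template fetches its strings through one lookup like this, updating the shared table updates all wikis at once, which is what makes the copy/paste-then-sync workflow viable.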
In T206067#4712896, @Halfak wrote: When you say "training output", are you referring to the trained model (an algorithm) or the predictions made by the trained model (new theoretical data)?
We've been producing some datasets on my team, e.g. https://figshare.com/articles/Monthly_Wikipedia_article_quality_predictions/3859800. Is this the kind of thing that you have in mind?
In T208437#4711442, @PerfektesChaos wrote: Please keep in mind that there are not only Wikipedias in various language versions.
There is also Wikisource, Wiktionary, etc.
The implementation of one Infobox module serving all needs of all Wikipedias and Wiktionaries and Wikisources with a slight difference in human language only would surprise me.
In general, only basic functionality is a good candidate for a central module or library (libraries like mw.html or mw.ustring), but project-, culture-, language-, and community-dependent issues are not.
If the concept of Infobox is addressed to Wikipedias only, what is the advantage to host Infobox/fr on Commons rather than on French Wikipedia? Who will be permitted to modify and adapt which module? On behalf of which community, and with effect on all projects of the same human language?
BTW, libraries like mw.html have permitted global access to basic and native functions for a couple of years now. That path may be followed easily.
Oct 31 2018
In T206067#4709592, @Halfak wrote: I agree that we have a great opportunity in open access labeled data. @DarTar has been working to secure funding for development of our label gathering strategy and for hiring a program manager to manage outreach for labeling activities.
In T208421#4709428, @Aklapper wrote: Many ways exist, see https://www.mediawiki.org/wiki/Phabricator/Help
In T206067#4708754, @Capankajsmilyo wrote: Open-sourcing trained datasets can revolutionise the field of AI. Currently object recognition, landmark recognition, face recognition, animal recognition, plant recognition, etc. are the specialisation of a few big players because of their ownership of data. Wikipedia has a huge store of such data on Commons, and if it open-sourced the trained dataset, this could bring huge improvement and development in the AI landscape worldwide. This could be a huge driver of innovation and modernisation for humanity as a whole.
May 7 2018
In T63958#3332798, @thiemowmde wrote: Relevant Patch-For-Review that adds a simple TimeFormatter that can output ISO-like YMD-ordered dates in all relevant precisions: https://github.com/DataValues/Time/pull/49. We might use this basic YMD-formatter instead of the current (DMY-) MwTimeIsoFormatter for the non-DMY languages listed above.
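The behavior described for that formatter can be sketched roughly as follows (an assumed illustration of "ISO-like YMD output in all relevant precisions", not the actual DataValues/Time code, whose API differs): the output is year-first and is truncated to the stored precision.

```python
# Sketch: render a date year-month-day first, truncated to precision.
def format_ymd(year: int, month: int, day: int, precision: str) -> str:
    """Render a date YMD-first at 'year', 'month', or 'day' precision."""
    if precision == "year":
        return f"{year}"
    if precision == "month":
        return f"{year}-{month:02d}"
    return f"{year}-{month:02d}-{day:02d}"

print(format_ymd(2017, 6, 5, "day"))    # 2017-06-05
print(format_ymd(2017, 6, 5, "month"))  # 2017-06
print(format_ymd(2017, 6, 5, "year"))   # 2017
```

The appeal for non-DMY languages is that this ordering is language-neutral, so one formatter can serve them all without per-language month-name handling.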
Is this task complete? Can someone please update this 3-year-old task's status?