Change 649920 had a related patch set uploaded (by Zoranzoki21; owner: Zoranzoki21):
[mediawiki/extensions/PreferencesList@refs/meta/config] Add access for WikiTeq group
Dec 17 2020
@tstarling Thank you! mediawiki/extensions/PreferencesList should also be added. The list of members looks good.
Dec 16 2020
Done. Please review the access list to make sure I set it up correctly. For future reference, it would be more useful to have a list of Gerrit usernames rather than Phabricator usernames. Some people do not have their Phabricator account linked to LDAP.
Dec 15 2020
Hi, is there any progress here?
Dec 8 2020
This will be resolved when T268326 is adopted.
Dec 4 2020
FWIW, I've worked with @Pastakhov of WikiTeq during the porting of Parsoid from JS to PHP and found them competent and well-qualified.
Dec 2 2020
I don't know if it matters, but I just want to add that I, too, think this group of developers is very well qualified and deserves to have its own Gerrit group.
I just stumbled upon the fact that we still have this API-before-APIs in core. Let's please just kill it. This task has been open since the before-time (it was imported from bugzilla in 2014).
In T268328#6637187, @Krinkle wrote: Among the additional stuff is a secondary index of https://raw.githubusercontent.com/MWStake/nonwmf-extensions/master/.gitmodules (ref), which is what you propose, but behind one extra anti-abuse step in which MWStake has decided to list the repo. Whether we use this or some other mechanism, I think it's worth having something like that in place before it feeds straight into Codesearch cloning, tracking, and indexing the repo within the next 24 hours.
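The MWStake list referenced above is an ordinary .gitmodules file, i.e. git-config syntax. As a rough illustration (this is not the actual codesearch code), extracting the listed repositories from such a file could look like this:

```python
import configparser

def parse_gitmodules(text):
    """Map submodule name -> repository URL from .gitmodules content.

    .gitmodules uses git-config syntax; configparser can read it once the
    tab indentation is stripped (git config does not use line continuations).
    """
    flattened = '\n'.join(line.strip() for line in text.splitlines())
    parser = configparser.ConfigParser()
    parser.read_string(flattened)
    repos = {}
    for section in parser.sections():
        # sections look like: submodule "extensions/Foo"
        if section.startswith('submodule '):
            name = section.split('"')[1]
            repos[name] = parser.get(section, 'url')
    return repos
```

A consumer like a daily index rebuild could then diff this mapping against the previously indexed set of repos.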
Abandoning this per @Jdforrester-WMF.
Dec 1 2020
Thanks, @daniel! And I agree with everything in your comment. We aren't seeking +2 on MediaWiki. We're just asking for a Gerrit group so that we can do a better job of maintaining these and other extensions. I believe I personally have +2 on all of these extensions, but relying on me to review and +2 patches has turned out to be a bad idea. We're trying to improve our extension maintenance, and having this group would be helpful.
Nov 30 2020
@daniel since this is now without an owner, I propose you decide on the wording yourself and close it out.
Change 644250 had a related patch set uploaded (by Daniel Kinzler; owner: Daniel Kinzler):
[mediawiki/core@master] Do not ignore self-conflicts.
Nov 29 2020
In T239543#6653904, @DannyS712 wrote: In T239543#6653872, @Kizule wrote: Opinion from the TechCom would be great.
Why is TechCom opinion needed? This has support from developers and just needs action from an admin, like @Legoktm, to merge https://gerrit.wikimedia.org/r/c/All-Projects/+/583133
Nov 28 2020
Opinion from the TechCom would be great.
As far as I know, extensions maintained by us aren't deployed in WMF's production.
First off: I have worked with WikiTeq before, and they have been contracted by WMF in the past to do work on core and on extensions. I particularly know @Vedmaka to be a competent and diligent MediaWiki developer.
My mistake @Legoktm, no problem then.
Sorry, I wasn't clear, I meant comments from developers *outside* of WikiTeq.
@Legoktm Oh, I'm sorry. We didn't know that everyone should comment; I didn't see it written anywhere. But okay, I asked the others to comment.
In T267213#6635559, @Kizule wrote: Someone to do this?
Nov 26 2020
I'm abandoning this proposal. I think it's covered by the following:
- We now have a ParserCacheFactory (T263583), so we can cleanly store output generated in different ways or for different purposes (e.g. Parsoid or FlaggedRevs)
- Caching output for different slots separately would still be nice. It could be done by restructuring ParserCache, or by introducing a CompositeParserOutput class. We investigated this a while back, but it wasn't prioritized at the time; see T192817.
- We may still want to put other kinds of data besides ParserOutput into a ParserCache. With the serialization mechanism now using JSON, it should be trivial to modify the class hierarchy around ParserCache to support this, see T268848.
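To illustrate the last point, a JSON envelope with a type tag is enough to let one cache hold several value classes. All class and method names below are invented for illustration, not the actual MediaWiki ones:

```python
import json

class JsonCacheEntry:
    """Base class for anything the cache can hold; subclasses self-register."""
    registry = {}

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        JsonCacheEntry.registry[cls.__name__] = cls

    def to_json(self):
        raise NotImplementedError

    @classmethod
    def from_json(cls, data):
        raise NotImplementedError

class ParserOutput(JsonCacheEntry):
    """Stand-in for one cacheable value class among several."""
    def __init__(self, html):
        self.html = html

    def to_json(self):
        return {"html": self.html}

    @classmethod
    def from_json(cls, data):
        return cls(data["html"])

class ParserCache:
    """Stores any JsonCacheEntry, round-tripping through JSON with a type tag."""
    def __init__(self):
        self.store = {}

    def save(self, key, entry):
        self.store[key] = json.dumps(
            {"_type": type(entry).__name__, "data": entry.to_json()}
        )

    def get(self, key):
        blob = self.store.get(key)
        if blob is None:
            return None
        envelope = json.loads(blob)
        cls = JsonCacheEntry.registry[envelope["_type"]]
        return cls.from_json(envelope["data"])
```

The type tag in the envelope is what makes the cache agnostic to the value class: any new subclass is usable without touching the cache itself.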
Nov 25 2020
In T268328#6646276, @daniel wrote: In T268328#6645534, @Jdforrester-WMF wrote: Ok, so "extensions need to be either on gerrit or listed in the MWStake list" to be part of the "ecosystem" may be an option. That would provide a clear enough way to get one's extension indexed.
It's not "an option". It's the current reality.
An option for the definition of "ecosystem" in the policy. Which currently basically is "whatever is indexed in code search".
Nov 24 2020
In T268328#6645534, @Jdforrester-WMF wrote: Ok, so "extensions need to be either on gerrit or listed in the MWStake list" to be part of the "ecosystem" may be an option. That would provide a clear enough way to get one's extension indexed.
It's not "an option". It's the current reality.
In T268328#6644960, @daniel wrote: In T268328#6639842, @Nikerabbit wrote: Making a page on mediawiki.org feels like more effort than sending a pull request to be included on MWStake's extension list. I can imagine arguments for requiring such a page, but being lightweight imho isn't one. We don't even have a nice form to create such a page. What we have is a preload template, which does not provide a good user experience.
Ok, so "extensions need to be either on gerrit or listed in the MWStake list" to be part of the "ecosystem" may be an option. That would provide a clear enough way to get one's extension indexed.
Nov 23 2020
In T268328#6639104, @daniel wrote: if your extension has a page on mediawiki.org and is actively maintained, it gets indexed.
I'd say quite a few of those thousands of pages are outdated, and there are no good criteria or even an update process for extension release status.
In T268328#6639104, @daniel wrote: The main point however is that it should be clear how a new extension can get itself included in the index. My idea was to make this lightweight and automatic: if your extension has a page on mediawiki.org and is actively maintained, it gets indexed. If it appears that it's no longer actively maintained, it gets dropped from the category on the wiki, and thus from the index.
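The category-driven rule daniel describes could be sketched roughly like this (inputs and names are invented; the real mechanism would query mediawiki.org categories):

```python
def update_index(current_index, all_extensions, maintained_pages):
    """Recompute the set of extensions to index and report the changes.

    An extension is indexed exactly when its page is in the
    "actively maintained" category; dropping out of the category
    drops it from the index on the next rebuild.
    """
    wanted = {ext for ext in all_extensions if ext in maintained_pages}
    added = wanted - current_index
    dropped = current_index - wanted
    return wanted, added, dropped
```

Reporting `added`/`dropped` separately would also make it easy to log or review what each daily rebuild changed.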
Nov 22 2020
Nov 21 2020
Scrape the list of extensions, identify which are hosted in non-Gerrit repos, figure out which of those are actively maintained, and propose those for inclusion in the MWStake repository.
Which might benefit from T237470: Create and maintain a list of organization repos that are maintained on Gerrit, GitHub, and Diffusion first...
Nov 20 2020
In addition to what James and Krinkle said, I would also add that the goal of codesearch is not to index any MediaWiki code ever written that was dumped into a git repo. If the results are filled with unmaintained stuff, then it's not useful for developers, which is already beginning to be a problem: T241320: Allow certain or all GitHub repositories to be excluded from search results.
(Also we shouldn't embed a specific tool like CodeSearch in a long-term policy if it might be scrapped within a year or so – T268196: Figure out the future of codesearch in a GitLab world.)
What's the problem this is trying to solve?
In T268328#6636960, @daniel wrote: […] and then has a hard-coded list of additional stuff to index. I'm proposing to change this script so it scans the relevant categories on mediawiki.org instead. […]
Codesearch searches for extensions and rebuilds its index on a daily basis: https://gerrit.wikimedia.org/g/labs/codesearch/+/refs/changes/47/641847/1/write_config.py#35 Do you mean something larger?
Nov 19 2020
Thank you for the suggestion. I propose to add the following wording to the policy:
Nov 17 2020
@daniel just wondering if you have an update?
Nov 16 2020
Change 640927 merged by jenkins-bot:
[mediawiki/core@master] Clean up PoolWorkArticleView
Nov 13 2020
Change 640927 had a related patch set uploaded (by Daniel Kinzler; owner: Daniel Kinzler):
[mediawiki/core@master] Clean up PoolWorkArticleView
Nov 8 2020
[Resetting assignee due to inactive user account]
Nov 3 2020
Courtesy ping: @Krinkle @daniel @DannyS712 @Nikerabbit
Nov 2 2020
...I guess the overwrite is needed because the user's previous signature now resolves to a different timestamp?
When exactly bfcache gets used is browser-dependent, so I'm not sure it makes sense to declare the problem nonexistent just because Firefox (current market share: 4%) does not have it anymore.
Oct 28 2020
I second @Mholloway as well. The reasons are both what @Demian said and that this way we won't risk naming collisions in the future.
Oct 21 2020
In T239742#6569815, @Mholloway wrote: Personally I quite strongly favor org-namespacing
I concur. The simple information that a package is created by the WMF conveys a lot of helpful context: where to look for documentation, support, and source code (without visiting npm.org), in what domain the package will be applicable, what problems it helps to solve, etc. It makes managing packages easier (for example, related packages are grouped together, which makes reading package.json easier and makes searching for just the Wikimedia packages in node_modules easy), and it is generally a best practice.
Personally I quite strongly favor org-namespacing, but I was surprised to find that others disagreed, so opened this ticket for discussion. If FSG want to keep it open for discussion and eventual adoption of a convention, that's fine with me, otherwise we can just close it as declined and let the status quo be.
I don't think we need a strong policy on this per se; a coding convention, as we have for other naming, would suffice. If I look at other orgs, there is quite often a mixture of some things being namespaced and some things not.
Example of a package we have on npm that's perhaps oddly not prefixed or namespaced: https://www.npmjs.com/package/api-testing
Oct 15 2020
Thanks for bringing this up! I'm working on an update to the stable interface policy. I added a section on traits: https://www.mediawiki.org/wiki/User:DKinzler_(WMF)/Stable_interface_policy
Oct 6 2020
In a nutshell:
Oct 2 2020
In T264334#6510853, @Catrope wrote: Another purpose we use the manifest for is dependency resolution. We could move that server-side, which could result in some modules being transmitted twice, but that's probably not too bad (because most page views only make one or two load.php requests). At minimum, if we got rid of versions, we could drop from the manifest all modules that don't have dependencies (and aren't depended on).
Oct 1 2020
Hmm, that's an interesting idea. I'll mull it over and discuss it with @Krinkle. It's probably worth measuring how often such a global version string would change compared to any individual module's version (for example, some modules are backed by wiki pages that admins can edit, so those can change more often than just the weekly deployment). We may be able to tackle that by excluding wiki page-based modules from the global version (and versioning them individually instead), or by having a "default version" that applies to most modules except those changed after the initial deployment.
@Catrope Thanks for that explanation, that is really helpful.
The reason the client has to have the module registry is our caching strategy: it needs the version hash of each module in order to be able to compute the correct URL when requesting modules.
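A minimal sketch of that versioned-URL idea (the URL format and function name below are invented, not MediaWiki's actual scheme): the version parameter is derived from the per-module hashes in the registry, so the batch URL changes whenever any requested module changes, and stale cached responses are never served for the new URL.

```python
import hashlib

def make_load_url(base, modules, registry):
    """Build a batched module-load URL with a combined version hash.

    `registry` maps module name -> that module's version hash.  The
    combined hash covers exactly the requested modules, so changing any
    one of them produces a different URL (and thus a cache miss).
    """
    names = sorted(modules)  # canonical order keeps the URL deterministic
    combined = hashlib.sha1(
        '|'.join(registry[n] for n in names).encode()
    ).hexdigest()[:7]
    return f"{base}?modules={','.join(names)}&version={combined}"
```

This also shows why the client must hold the registry in this scheme: without the per-module hashes it cannot compute the version parameter at all.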
Sep 28 2020
Triaging as a question for the committee. It might not need its own RFC if it's only a clarifying question based on wording.
With T263583: Introduce a ParserCacheFactory coming up, perhaps we should use a special ParserCache instance for old revisions, configured with a low TTL and a cluster-local cache. That seems simple enough. The tricky bit is the "should we use the parser cache" logic, which needs to become "should we use the parser cache, and if yes, which one". We have this logic in at least two places, probably more. It would be nice to encapsulate this better.
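A rough sketch of what that encapsulation could look like. Class names, cache names, and TTL values are all invented for illustration, not the actual MediaWiki API:

```python
class ParserCacheFactory:
    """Hands out named cache instances, each with its own TTL."""
    def __init__(self):
        self._caches = {}

    def get_instance(self, name, ttl):
        if name not in self._caches:
            self._caches[name] = {'name': name, 'ttl': ttl, 'store': {}}
        return self._caches[name]

def select_parser_cache(factory, is_current_revision, options_are_canonical):
    """Answer 'should we cache, and if yes, in which cache' in one place.

    Returns a cache instance, or None to skip caching entirely.
    """
    if not options_are_canonical:
        return None  # non-canonical parser options are never cached
    if is_current_revision:
        # long-lived cache for the current revision (e.g. ~30 days)
        return factory.get_instance('default', ttl=30 * 24 * 3600)
    # short-lived, cluster-local cache for old revisions
    return factory.get_instance('old-revisions', ttl=3600)
```

With a single selector like this, every call site (page views, diffs, API renders) would apply the same policy instead of duplicating the check.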
Sep 24 2020
Looking into this some more, we came across a number of issues, namely:
- Diffs and permalinks don't share a code path for getting the ParserOutput for the old revision. DifferenceEngine::getParserOutput uses WikiPage::getParserOutput, but Article::view() does its own thing.
- Sharing code between page views and diff views would be nice. Do we want a PageOutputRender service?
- We would probably want to apply short term caching for old revisions pretty much in the same places we are applying long-term caching for the current revision, via ParserCache.
- We would need to split the short term cache in the same way we split the ParserCache. So should this logic be *in* ParserCache? Or do we pull the key generation logic out of ParserCache?
- The ParserCache gets written by PoolWorkArticleView, but it gets read by different bits of code. This asymmetry makes it hard to add to the logic.
- Should the cache be shared across data centers, like the diff cache?
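On the key-generation question in the list above, one option is a standalone key builder that the writer (PoolWorkArticleView) and all the different readers share, so the split between cache instances stays consistent regardless of who reads or writes. A sketch with hypothetical names:

```python
import hashlib

def parser_cache_key(page_id, rev_id, options_hash, prefix='pcache'):
    """Deterministically derive a cache key from the render inputs.

    Any code path (page view, diff view, API) that builds the key from
    the same inputs gets the same key, which removes the read/write
    asymmetry without moving the logic *into* ParserCache itself.
    """
    raw = f'{prefix}:{page_id}:{rev_id}:{options_hash}'
    return prefix + ':' + hashlib.sha1(raw.encode()).hexdigest()
```

Pulling the key out like this also makes the "which cache instance" decision independent of the key format, since both sides compute keys the same way.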