How am I supposed to test its impact on Wikibase before merging? How can I run Wikibase tests on a patch I'm submitting to core, if they aren't run automatically?
Sep 1 2019
- void return type declarations for functions, for the same reason we want to type-hint generally
- Convert list() to [] just like we converted array() to []
- Specify visibility of class constants (some are only used internally and should be made protected or private)
It looks like soon we're probably going to require PHP 7.2, so features from 7.1 and 7.2 should be considered as well. These pages are useful:
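For concreteness, a minimal sketch of the 7.1 features mentioned above (illustrative only, not real core code):

```php
<?php
// Illustrative only.

class Example {
	// PHP 7.1: class constants can have a declared visibility.
	private const INTERNAL_FLAG = 1;
	public const PUBLIC_FLAG = 2;

	private $state = [];

	// PHP 7.1: void return type declaration.
	public function reset(): void {
		$this->state = [];
	}
}

// PHP 7.1: short list syntax, the same conversion we did for array() -> [].
[ $first, $second ] = [ 'a', 'b' ]; // formerly list( $first, $second ) = ...
```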
It is not part of the contract of a cache to cache something, insofar as caches by definition are for things that can be regenerated, and therefore are not designed to guarantee persistence.
Is this actually causing problems? MovePage did have it because __destruct() would actually get called, but now it seems it doesn't. Why would __destruct() get called on the mock? It doesn't seem to really be called every time an object is destroyed.
Aug 30 2019
For the overloading issue, is there any option of just getting more resources? Maybe using EC2 or similar? We're trying to add lots more tests, and one way or another, we're going to need more resources to run them.
ChangesListSpecialPage is not validating the parameters. It's assuming that the namespace parameter in the query string is a semicolon-delimited list of numbers. The URL here is invalid, and has garbage after the namespace parameter:
This will be parsed to int(1), which is probably correct, but it issues a notice.
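A minimal sketch of the kind of validation I mean (names are illustrative, not the actual ChangesListSpecialPage code):

```php
// Hypothetical sketch: accept only integers in the semicolon-delimited
// list and drop garbage, so nothing downstream trips over malformed input.
$raw = $request->getVal( 'namespace', '' );
$namespaces = [];
foreach ( explode( ';', $raw ) as $part ) {
	if ( preg_match( '/^-?\d+$/', $part ) ) {
		$namespaces[] = (int)$part;
	}
	// else: invalid input; ignore it or report a user-visible error
}
```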
Again, there are two concerns: reducing latency, and reducing server load. The two are only related if the servers are actually overloaded and therefore individual jobs take longer. What exactly is our concern here? Latency, server load, or both?
Aug 29 2019
It seems that what you're suggesting is that instead of defining a wrapper class that you can use if you want to mock the file access, each class should define separate methods on an ad hoc basis to wrap the file access. Wouldn't it be preferable to define the wrappers only once?
It's been seven days, and seemingly no objections. It would be useful for me to have +2 rights now just to re-submit patches I've written that failed gate-and-submit due to flaky tests or non-substantive reasons ("HHVM doesn't like syntax X"). I seem to have a lot of those, and right now I have to wait for someone else to deal with it.
Since the point of a cache is to save disk reads, I think we would want to directly test that it doesn't do unnecessary disk reads. Otherwise the cache isn't doing its job. Just injecting the directory doesn't give you any way to directly test that the cache is saving disk reads, i.e., that it's behaving usefully.
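For instance, assuming the file access were injected behind some reader interface, a test could assert the read count directly (FileReader and SomeCache are hypothetical names):

```php
// Hypothetical sketch: the mock fails the test if the cache reads the
// file more than once.
$reader = $this->createMock( FileReader::class );
$reader->expects( $this->once() )
	->method( 'read' )
	->with( '/path/to/data' )
	->willReturn( 'contents' );

$cache = new SomeCache( $reader );
$cache->get( 'key' );
$cache->get( 'key' ); // must be served from the cache, not from disk
```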
If you want to split things up, split up PHPUnit tests themselves into multiple chunks. That will allow reducing latencies more or less arbitrarily, assuming we have adequate server resources and we don't spend too much time in setting up the test instance.
Why did phan not catch this?
Aug 28 2019
For that matter, testing anything in that method requires making assumptions about files in the filesystem and perhaps copy-pasting parts of their contents. This doesn't seem like the best approach. The correctness of Language::getGrammarTransformations() shouldn't have to depend on the data that we happen to ship.
So here's an example: Language::getGrammarTransformations() will throw an exception if it finds invalid JSON in a grammar transformations file. The file path is hardcoded to __DIR__ . "/data/grammarTransformations/$languageCode.json" and accessed with is_readable() and file_get_contents(). How are we supposed to test that the exception is thrown properly?
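This is the sort of thing I mean by a wrapper; a rough sketch (class and method names are hypothetical, not existing MediaWiki APIs):

```php
// Hypothetical wrapper around the file access. A test can stub load()
// to return invalid JSON and assert the exception, without depending
// on the shipped data files.
class GrammarDataLoader {
	public function load( string $languageCode ): ?string {
		$path = __DIR__ . "/data/grammarTransformations/$languageCode.json";
		return is_readable( $path ) ? file_get_contents( $path ) : null;
	}
}
```

Language::getGrammarTransformations() would then take the loader as a dependency, decode whatever it returns, and throw on invalid JSON, which becomes trivially testable with a stub.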
If we wanted to make this feature work properly, I think the right way would be to parse the HTML page content for links, find all links to pages on the current wiki, run a DB query to see how big they are, and add a suitable class to the parsed HTML. This would not affect caching; would have no performance impact on anyone who has the feature disabled; and for whoever has the feature enabled, will add one HTML parse/serialize and one (small, indexed) database query.
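Purely to illustrate the shape of that approach (nothing here is real code; the helpers are hypothetical):

```php
// Sketch: post-process the parsed HTML rather than hooking into parsing.
$doc = new DOMDocument();
$doc->loadHTML( $parsedHtml );
$links = [];
foreach ( $doc->getElementsByTagName( 'a' ) as $a ) {
	$title = titleFromLocalHref( $a->getAttribute( 'href' ) ); // hypothetical helper
	if ( $title !== null ) {
		$links[$title][] = $a;
	}
}
// One small, indexed query: page_len for every linked title at once,
// then a class on each link whose target is below the stub threshold.
foreach ( fetchPageLengths( array_keys( $links ) ) as $title => $len ) { // hypothetical
	if ( $len < $threshold ) {
		foreach ( $links[$title] as $a ) {
			$a->setAttribute( 'class', trim( $a->getAttribute( 'class' ) . ' stub' ) );
		}
	}
}
```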
Maybe it doesn't really need to be mocked, then. I'll reopen if I find a case where it really does.
Aug 27 2019
I think the best returns here are going to come not from pinpointing specific tests but from looking for systemic issues. Even if you find a test that takes ten seconds, optimizing it will save at most ten seconds, and there aren't enough such tests to add up to the target savings of 10+ minutes.
Aug 26 2019
So let's say for LBFactory, how would you suggest making it not depend at all on the current request? How should it get the request's IP address and User-Agent so that ChronologyProtector works?
By the way, @Ladsgroup, how did you actually measure the performance hit here? I'm going to need a way to test it to make sure I've fixed the problem before re-committing.
Few services will actually want direct access to the request info, so I'd think it doesn't make sense to make a service for this without specific use-cases that don't make more sense to deal with by other means. For instance, LBFactory needs only a few specific pieces of information from the request, so it doesn't seem necessary to make a whole service to pass in for it.
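For instance, the few values LBFactory actually needs could just be passed in at construction time. A sketch only, not the real constructor signature:

```php
// Sketch: pass the specific request-derived values that
// ChronologyProtector needs, instead of a whole request service.
// The 'requestInfo' option shown here is illustrative.
$lbFactory = new LBFactorySimple( [
	'requestInfo' => [
		'IPAddress' => $_SERVER['REMOTE_ADDR'] ?? '',
		'UserAgent' => $_SERVER['HTTP_USER_AGENT'] ?? '',
	],
	// ... other config ...
] );
```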
This should be fixed now, can you test?
So those two patches should fix the two issues that you ran into. I leave it up to the people with +2 access to decide whether it's wise to just commit these two fixes or to revert LocalisationCache and LanguageNameUtils for now.
The breakage of rebuildLocalisationCache.php is an entirely separate problem. Separate patch coming for that. See, this is why writing tests before refactoring is a good idea!
I think I see the problem:
You need to revert LanguageNameUtils too, then it works fine (other than RELEASE-NOTES conflicts, obviously): 2e52f48c2ed
Aug 25 2019
We definitely should have a test that checks for this. Any existing cases can be whitelisted for the time being, but it could at least stop new cases from being inadvertently added.
WikibaseLexeme tests were accessing a private property of Language via ReflectionClass in order to clear Language caches, instead of using the public method Language::clearCaches(). When Language got refactored, the private member was removed, which broke the tests. Tests that are not run by CI cannot rely on private things not changing if they want to not break -- they need to use only public interfaces.
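For illustration, the difference is between something like this (the property name is made up) and the public call:

```php
// Fragile: reaches into private state that any refactor may rename or
// remove. ('dataCache' is an illustrative name, not the real property.)
$prop = ( new ReflectionClass( Language::class ) )->getProperty( 'dataCache' );
$prop->setAccessible( true );
$prop->setValue( null, [] ); // null because the property is static

// Robust: the public interface is a stable contract.
Language::clearCaches();
```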
Aug 23 2019
For the record, @coversDefaultClass seems not to work properly on HHVM either, although on PHP I didn't get any errors. So maybe the sniff is a good idea, at least for now.
Aug 22 2019
The question is, did this work before, or crash with a stack overflow? Do we log all fatal errors in such a way that we know these page views actually succeeded before this change, or might the error just have changed?
I should add that lately I've been contracting for Wikimedia for about seven weeks a year, and have been working 30-40 hours a week during those periods. When not contracting, I'm completely inactive.
My older contributions are available via git log --author simetrical. I couldn't find a webpage that would display the results of the search that I could link to. Note that that includes things from the SVN era that I only committed and didn't author, so it's inflated (probably only slightly). But if we leave that aside, according to git shortlog -sne, I have 1052 commits (after updating .mailmap to merge the three different e-mail addresses I've used). Surprisingly, this still makes me the #19 all-time contributor by number of commits. I think I always tended to break up my commits more than a lot of people, though.
Aug 21 2019
In general, false in place of a string ID, or null in place of a SiteID object, is considered to represent the current wiki (home wiki).
Aug 19 2019
I don't see a way to mark as duplicate?
If we want a simpler change that solves the immediate problem, I suggest just adding new release-notes lines as separate files in a designated directory. Before a release, a simple script could aggregate them into a traditional RELEASE-NOTES file. E.g., instead of adding * MediaWikiServices is now deprecated to the 'Deprecations in 1.x' section of RELEASE-NOTES, I'd put a file containing my release-notes line in release-notes/deprecations, named whatever I feel like (the name would be ignored by the script). Accumulating all the deprecations before making a release could then be as simple as cat $(ls -tr release-notes/deprecations/*) or such.
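A sketch of what that aggregation script might look like (paths and details illustrative):

```php
<?php
// Hypothetical aggregation script: concatenate the per-change fragments
// in commit-time order, like `ls -tr` would.
$files = glob( 'release-notes/deprecations/*' );
usort( $files, function ( $a, $b ) {
	return filemtime( $a ) - filemtime( $b );
} );

echo "=== Deprecations in 1.x ===\n";
foreach ( $files as $file ) {
	echo rtrim( file_get_contents( $file ) ), "\n";
}
```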
Aug 18 2019
Actually, @covers might not work on trait-level comments (PHPUnit bug?).
Also @covers and presumably other things.
Aug 15 2019
Okay, this took way more time than I expected. I'm putting it on hold for now until I get feedback on the general approach.
Sorry about that. If we want every task to have at least one project tag, is there a way to make it mandatory, or at least warn the submitter if they don't provide one? Other issue trackers I've used all require picking a product/component/whatever, and I keep forgetting with this one.
The idea being, then, that we just don't want to waste resources fixing 7.0-specific problems when it's already been EOL'd? That makes perfect sense, I just got confused because of my own assumptions about why we'd want to drop support for old PHP. (There are plenty of 7.1/7.2 features I'd love to use!)
So I should probably have waited to get feedback from someone before writing up a test implementation. They would probably have pointed out to me that existing hooks are in some cases run before services are functional. This complicates things a bit, but I think the idea is still something we need to implement.
I don't see anywhere in Krinkle's text that explains the benefits of dropping support for older PHP versions. I assumed it was so we'd be able to use features of newer PHP versions in core code (because that's why I want to require a newer version :) ). If this is the case, I'm interested in knowing what new features we could use while still supporting HHVM. If there's some other reason entirely that we want to require a higher version, that explains my confusion.
Aug 14 2019
If it's not really useful to require 7.2 before dropping HHVM support, is it worth making third-party admins upgrade their PHP? I don't know how many there are who will need to, but it's presumably nonzero, so shouldn't we identify some clear benefit to us before inconveniencing them?
Aug 13 2019
How does this relate to dropping HHVM support? Is it useful to require PHP 7.2 but also support HHVM? Should T192166 block this, and does it have a timeline?
Aug 12 2019
This is fixed now, right? Yay! \o/
May 6 2019
May 5 2019
So this is an actual bug -- until now, we were presumably creating a bogus link (to https://en.wiktionary.org/wiki/User_talk:). What should we actually do here? Is the database actually supposed to ever have empty usernames in it? That doesn't seem reasonable. Surely conversion scripts should also have usernames?
May 2 2019
@daniel @tstarling Does it make sense to you that I should work on this? I have most of the work done already for my own productivity, it's just a question of putting it in MediaWiki and figuring out how to let people install it (and decide if we want to auto-install it).
May 1 2019
I think it's best to not allow constructing RevisionStores on foreign repos until we have a clear-cut use-case. Then we can add the functionality to support that use-case. It's not obvious to me that global user pages and such are best implemented using RevisionStore on the local wiki -- somebody needs to come up with an actual design that they plan to implement before that can be decided.
Apr 28 2019
That caveat should be removed by https://gerrit.wikimedia.org/r/504336 when that lands.
Apr 17 2019
If the pieces of information are completely unrelated, why not expose them separately, like MediaWikiServices::getInstance()->getUserIdentity() and so on? What do we gain by bundling them in one unit? There are certainly tons of places that use $wgUser or RequestContext::getMain()->getUser() without having any need for a WebRequest, for instance.
- Like what?
Apr 16 2019
Why not inject the WebRequest directly instead of putting it inside a wrapper that includes stuff about the user, language, and skin? I.e., what's wrong with just MediaWikiServices::getWebRequest()?
Apr 15 2019
There is no use of WebRequest in PermissionManager right now. The only use of RequestContext is to pass it straight along to Action, which exposes it to subclasses that could conceivably be in extensions. Could you provide more details on what you want to use it for?
So moveTo() does a couple of extra things that MovePage doesn't. It calls spreadAnyEditBlock() if there's an error, and honors the suppressredirect permission. Why are these not done in MovePage proper? As it stands, it considerably complicates porting the existing users, which are fairly numerous.
It doesn't look like Linker could make any use of a FileIdentity type. It seems to basically use File for methods that wouldn't be in such a type, like: exists(), getUrl(), allowInlineDisplay(), isVectorized(), getWidth(), mustRender(), transform().
So it's very easy to write the class, but I can't figure out where to use it. Most RequestContext users seem to use it for config and messages, among other things, so there aren't many places I can see where it makes a lot of sense to just substitute RequestInfo. It seems like you'd want to start from classes like LogFormatter and Message, and the first step there should probably be to make them be constructed by services.
What exactly should this new service contain?
Apr 10 2019
Apr 9 2019
It looks like this was fixed by 8e1342ed470a08d6658e2483976dfdc8f4388ea5.
Apr 8 2019
@Legoktm Why do we want variables like $IP to only be ignored if whitelisted on a per-project basis? Any extension could potentially want to use variables like this. Should every extension have to whitelist them separately?