Apr 22 2020
Oct 30 2019
Oct 29 2019
What type of errors? Keeping in mind, again, that it only affects type hints on functions, not anything else.
Also, as we start using scalar type hints, we should require declare( strict_types = 1 ) for all files. This will not affect backward compatibility, because it only affects calls made from files that declare it to functions with scalar type hints. If it's not enabled, PHP will silently cast mismatched scalar types when you call such a function, which is surprising. (I'm posting this because I was just bitten by it -- although in my case strict_types wouldn't have helped, because the call to the function was from within PHPUnit.)
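For illustration, a minimal example of the difference (the function here is hypothetical):

declare( strict_types = 1 );

function half( int $n ): int {
	return intdiv( $n, 2 );
}

half( 4 );   // fine either way
half( "4" ); // TypeError with strict_types; silently cast to int(4) without it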
See also the PHP bug: https://bugs.php.net/bug.php?id=65119
The problem of finals is still interfering with testing work. (This time it's unit tests for FileBackend, where I can't test that it's properly forwarding to LockManager because the LockManager methods are marked final.)
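For anyone who hasn't hit this, a minimal sketch (with hypothetical names) of why final blocks such a test:

class MyLockManager {
	final public function lock( array $paths ): bool {
		return true;
	}
}

// In a PHPUnit test, the mock keeps the real, final lock() body, and
// trying to configure it fails, so forwarding can never be asserted:
$mock = $this->createMock( MyLockManager::class );
$mock->expects( $this->once() )->method( 'lock' ); // error: final methods cannot be configured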
In addition to the advantages of type hints that have already been mentioned, I'll point out that the language is moving toward more and better type hints. PHP 7.4 has typed class properties, and the next version looks like it will have union types, probably followed by a "mixed" type.
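To illustrate (hypothetical class; the union syntax is from the current RFC and won't parse on any released version):

class Widget {
	public int $id;          // typed property, PHP 7.4
	public ?string $label;   // nullable typed property

	// Union return type, as proposed for the next major version:
	public function find( string $key ): Widget|false {
		return false;
	}
}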
Oct 28 2019
Remaining non-deprecated public static methods on Language:
Oct 27 2019
As far as I can tell, the issue is we need a way for IP address and cookies to be available for logging and security checks. We obviously are not going to pass these all the way down the call chain. This seems like exactly the use-case a service manager is intended for. Why not just make a service that contains the needed info? Generally we can just inject the info from the service, but if some code wants to perform certain actions that should be considered relevant to a different IP address/set of cookies, it can inject different info. If something is being performed without relevance to any specific IP address, like a maintenance script run via a cron job, it can use 127.0.0.1 as the IP address (which I imagine is the info PHP supplies anyway). And of course empty cookies.
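A minimal sketch of what I mean, with assumed names (nothing like this exists yet):

class RequestInfo {
	private $ip;
	private $cookies;

	public function __construct( string $ip = '127.0.0.1', array $cookies = [] ) {
		$this->ip = $ip;
		$this->cookies = $cookies;
	}

	public function getIP(): string {
		return $this->ip;
	}

	public function getCookies(): array {
		return $this->cookies;
	}
}

Code acting on behalf of a different IP address or cookie set would construct and inject a different instance; maintenance scripts get the 127.0.0.1 default for free.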
Oct 25 2019
Nope, config was fine, it was a filtering proxy. The proxy had been configured to allow SSH to the old IP, but needed to be updated to allow the new IP. I should always remember to check for that first before bothering anyone else about issues that could be network-related, but sometimes I forget. :/
Sorry about that, it was a problem on my end after all. All fixed now.
Oct 24 2019
Coverage for includes/libs/lockmanager: 16.64% of lines (116/697)
Oct 23 2019
Coverage for includes/filebackend/: 41.18% of lines (119/289)
It runs them a while after the patch is already merged, or what? It doesn't seem to run them every time a new changeset is submitted to Gerrit, nor in gate-and-submit, nor in the post-merge run that Jenkins reports. Those are the test runs I see in practice.
Pushing over HTTPS works for the time being as a workaround.
It occurs to me that we need to define rules for changes in stability similar to rules for deprecation and removal. If I want to mark a public interface as internal when it wasn't officially marked as such in the past, does that need a mention in release notes? If so, where -- the deprecation section or somewhere else? Are there any rules for how soon I can remove a method (or otherwise break callers) after declaring it unstable? Is there going to be any equivalent of hard deprecation for declaring things unstable?
Oct 22 2019
How am I supposed to test its impact on Wikibase before merging? How can I run Wikibase tests on a patch I'm submitting to core, if they aren't run automatically?
Sep 1 2019
- void return value for functions, for the same reason we want to type-hint generally
- Convert list() to [], just like we converted array()
- Specify visibility of class constants (some are only used internally and should be made protected or private); all three items are sketched below
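A sketch of all three (hypothetical class):

class Example {
	// Class constant with explicit visibility (PHP 7.1+)
	private const CACHE_VERSION = 2;

	// void return type, for the same reasons we type-hint parameters
	public function reset(): void {
		// Short destructuring syntax instead of list( $a, $b ) = ...
		[ $a, $b ] = [ 0, 1 ];
	}
}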
It looks like we're probably going to require PHP 7.2 soon, so features from 7.1 and 7.2 should be considered as well. These pages are useful:
It is not part of the contract of a cache to cache something, insofar as caches are by definition for things that can be regenerated, and therefore are not designed to guarantee persistence.
Is this actually causing problems? MovePage did have it because __destruct() would actually get called, but now it seems it doesn't. Why would __destruct() actually get called on the mock? It seems it isn't really called every time an object is destroyed.
Aug 30 2019
For the overloading issue, is there any option of just getting more resources? Maybe using EC2 or similar? We're trying to add lots more tests, and one way or another, we're going to need more resources to run them.
ChangesListSpecialPage is not validating the parameters. It's assuming that the namespace parameter in the query string is a semicolon-delimited list of numbers. The URL here is invalid, and has garbage after the namespace parameter:
This will be parsed to int(1), which is probably correct, but it issues a notice.
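A sketch of the kind of validation I'd expect, assuming WebRequest::getVal() and leaving the exact rejection policy open:

$raw = $request->getVal( 'namespace', '' );
$namespaces = [];
foreach ( explode( ';', $raw ) as $part ) {
	// Only accept plain integers; drop (or reject) anything else instead
	// of letting PHP coerce garbage with a notice.
	if ( preg_match( '/^-?\d+$/', $part ) ) {
		$namespaces[] = (int)$part;
	}
}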
Again, there are two concerns: reducing latency, and reducing server load. The two are only related if the servers are actually overloaded and therefore individual jobs take longer. What exactly is our concern here? Latency, server load, or both?
Aug 29 2019
It seems that what you're suggesting is that instead of defining a wrapper class that you can use if you want to mock the file access, each class should define separate methods on an ad hoc basis to wrap the file access. Wouldn't it be preferable to define the wrappers only once?
It's been seven days, and seemingly no objections. It would be useful for me to have +2 rights now just to re-submit patches I've written that failed gate-and-submit due to flaky tests or non-substantive reasons ("HHVM doesn't like syntax X"). I seem to have a lot of those, and right now I have to wait for someone else to deal with it.
Since the point of a cache is to save disk reads, I think we would want to directly test that it doesn't do unnecessary disk reads. Otherwise the cache isn't doing its job. Just injecting the directory doesn't give you any way to directly test that the cache is saving disk reads, i.e., that it's behaving usefully.
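Something like this is what I have in mind -- all names here are hypothetical:

class CountingFileReader {
	public $reads = 0;

	public function read( string $path ): string {
		$this->reads++;
		return file_get_contents( $path );
	}
}

// In the test: read the same key twice, assert only one disk hit.
$reader = new CountingFileReader();
$cache = new DirectoryCache( $dir, $reader ); // the cache under test
$cache->get( 'some-key' );
$cache->get( 'some-key' );
$this->assertSame( 1, $reader->reads );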
If you want to split things up, split up PHPUnit tests themselves into multiple chunks. That will allow reducing latencies more or less arbitrarily, assuming we have adequate server resources and we don't spend too much time in setting up the test instance.
Why did phan not catch this?
Aug 28 2019
For that matter, testing anything in that method requires making assumptions about the contents of files in the filesystem and perhaps copy-pasting something about their contents. This doesn't seem like the best approach. The correctness of Language::getGrammarTransformations() shouldn't have to depend on the data that we happen to ship.
So here's an example: Language::getGrammarTransformations() will throw an exception if it finds invalid JSON in a grammar transformations file. The file path is hardcoded to __DIR__ . "/data/grammarTransformations/$languageCode.json" and accessed with is_readable() and file_get_contents(). How are we supposed to test that the exception is thrown properly?
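If the directory were injectable, the test could look something like this (getLanguageWithGrammarDir() is a hypothetical helper; getNewTempDirectory() comes from MediaWikiTestCase):

public function testGetGrammarTransformationsInvalidJson() {
	$dir = $this->getNewTempDirectory();
	file_put_contents( "$dir/xx.json", '{ this is not JSON' );

	$lang = $this->getLanguageWithGrammarDir( 'xx', $dir );

	$this->expectException( MWException::class ); // or whatever it actually throws
	$lang->getGrammarTransformations();
}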
If we wanted to make this feature work properly, I think the right way would be to parse the HTML page content for links, find all links to pages on the current wiki, run a DB query to see how big they are, and add a suitable class to the parsed HTML. This would not affect caching; would have no performance impact on anyone who has the feature disabled; and for whoever has the feature enabled, would add one HTML parse/serialize and one (small, indexed) database query.
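Very roughly, with everything here hypothetical (helper name, CSS class, threshold handling):

function markLinksToShortPages( string $html, int $threshold ): string {
	$doc = new DOMDocument();
	@$doc->loadHTML( $html ); // suppress warnings about tags DOMDocument doesn't know

	// One pass to collect links to pages on this wiki.
	$anchors = [];
	foreach ( $doc->getElementsByTagName( 'a' ) as $a ) {
		$title = $a->getAttribute( 'title' ); // set on local page links
		if ( $title !== '' ) {
			$anchors[$title][] = $a;
		}
	}

	// Hypothetical helper wrapping one small, indexed query against
	// page.page_len for all collected titles at once.
	$lengths = fetchPageLengths( array_keys( $anchors ) );

	foreach ( $lengths as $title => $len ) {
		if ( $len < $threshold ) {
			foreach ( $anchors[$title] as $a ) {
				$a->setAttribute( 'class',
					trim( $a->getAttribute( 'class' ) . ' short-page-link' ) );
			}
		}
	}
	return $doc->saveHTML();
}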
Maybe it doesn't really need to be mocked, then. I'll reopen if I find a case where it really does.
Aug 27 2019
I think the best returns here are going to come not from pinpointing specific tests, but from looking for systemic issues. Even if you find a test that takes ten seconds, optimizing it will save at most ten seconds, and there aren't enough such tests to add up to the target savings of 10+ minutes.
Aug 26 2019
So let's say for LBFactory, how would you suggest making it not depend at all on the current request? How should it get the request's IP address and User-Agent so that ChronologyProtector works?
By the way, @Ladsgroup, how did you actually measure the performance hit here? I'm going to need a way to test it to make sure I've fixed the problem before re-committing.
Few services will actually want direct access to the request info, so I don't think it makes sense to create a service for this without specific use-cases that aren't better handled by other means. For instance, LBFactory needs only a few specific pieces of information from the request, so it doesn't seem necessary to make a whole service to pass in for it.
This should be fixed now, can you test?
So those two patches should fix the two issues that you ran into. I leave it up to the people with +2 access to decide whether it's wise to just commit these two fixes or to revert LocalisationCache and LanguageNameUtils for now.
The breakage of rebuildLocalisationCache.php is an entirely separate problem. Separate patch coming for that. See, this is why writing tests before refactoring is a good idea!
I think I see the problem:
You need to revert LanguageNameUtils too, then it works fine (other than RELEASE-NOTES conflicts, obviously): 2e52f48c2ed
Aug 25 2019
We definitely should have a test that checks for this. Any existing cases can be whitelisted for the time being, but it could at least stop new cases from being inadvertently added.
WikibaseLexeme tests were accessing a private property of Language via ReflectionClass in order to clear Language caches, instead of using the public method Language::clearCaches(). When Language got refactored, the private member was removed, which broke the tests. Tests that are not run by CI cannot rely on private things staying unchanged -- they need to use only public interfaces.
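To make the contrast concrete (the property name below is illustrative):

// Fragile: reaches into a private implementation detail.
$prop = ( new ReflectionClass( Language::class ) )
	->getProperty( 'mLangObjCache' );
$prop->setAccessible( true );
$prop->setValue( null, [] );

// Stable: the public method that exists for exactly this purpose.
Language::clearCaches();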
Aug 23 2019
For the record, @coversDefaultClass seems not to work properly on HHVM either, although on PHP I didn't get any errors. So maybe the sniff is a good idea, at least for now.
Aug 22 2019
The question is, did this work before, or crash with a stack overflow? Do we log all fatal errors in such a way that we know these page views actually succeeded before this change, or might the error just have changed?
I should add that lately I've been contracting for Wikimedia for about seven weeks a year, and have been working 30-40 hours a week during those periods. When not contracting, I'm completely inactive.
My older contributions are available via git log --author simetrical. I couldn't find a webpage that would display the results of the search that I could link to. Note that that includes things from the SVN era that I only committed and didn't author, so it's inflated (probably only slightly). But if we leave that aside, according to git shortlog -sne, I have 1052 commits (after updating .mailmap to merge the three different e-mail addresses I've used). Surprisingly, this still makes me the #19 all-time contributor by number of commits. I think I always tended to break up my commits more than a lot of people, though.
Aug 21 2019
In general, false instead of a string ID, or null instead of a SiteID object, is considered to represent the current wiki (home wiki).
Aug 19 2019
I don't see a way to mark as duplicate?
If we wanted a simpler change that would solve the immediate problem, I suggest just adding new release-notes lines in separate files in a designated directory. Before a release, a simple script could aggregate them into a traditional RELEASE-NOTES file. E.g., instead of adding * MediaWikiServices is now deprecated to the 'Deprecations in 1.x' section of RELEASE-NOTES, I'd put a file in release-notes/deprecations named whatever I feel like (the name will be ignored by the script) with my release-notes line. Then accumulating all the deprecations before making a release could be as simple as cat $(ls -tr release-notes/deprecations/*) or such.
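The aggregation script itself could be a few lines of PHP; a sketch, with assumed paths and section name:

$files = glob( 'release-notes/deprecations/*' );
// Oldest first, mirroring ls -tr.
usort( $files, function ( $a, $b ) {
	return filemtime( $a ) <=> filemtime( $b );
} );

$section = "=== Deprecations in 1.x ===\n";
foreach ( $files as $file ) {
	$section .= rtrim( file_get_contents( $file ) ) . "\n";
}
file_put_contents( 'RELEASE-NOTES-1.x', $section, FILE_APPEND );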
Aug 18 2019
Actually, @covers might not work on trait-level comments (PHPUnit bug?).
Also @covers and presumably other things.
Aug 15 2019
Okay, this took way more time than I expected. I'm putting it on hold for now until I get feedback on the general approach.
Sorry about that. If we want every task to have at least one project tag, is there a way to make it mandatory, or at least warn the submitter if they don't provide one? Other issue trackers I've used all require picking a product/component/whatever, and I keep forgetting with this one.
The idea being, then, that we just don't want to waste resources fixing 7.0-specific problems when it's already been EOL'd? That makes perfect sense, I just got confused because of my own assumptions about why we'd want to drop support for old PHP. (There are plenty of 7.1/7.2 features I'd love to use!)
So I should probably have waited to get feedback from someone before writing up a test implementation. They would probably have pointed out to me that existing hooks are in some cases run before services are functional. This complicates things a bit, but I think the idea is still something we need to implement.
I don't see anywhere in Krinkle's text that explains the benefits of dropping support for older PHP versions. I assumed it was so we'd be able to use features of newer PHP versions in core code (because that's why I want to require a newer version :) ). If this is the case, I'm interested in knowing what new features we could use while still supporting HHVM. If there's some other reason entirely that we want to require a higher version, that explains my confusion.
Aug 14 2019
If it's not really useful to require 7.2 before dropping HHVM support, is it worth making third-party admins upgrade their PHP? I don't know how many there are who will need to, but it's presumably nonzero, so shouldn't we identify some clear benefit to us before inconveniencing them?
Aug 13 2019
How does this relate to dropping HHVM support? Is it useful to require PHP 7.2 but also support HHVM? Should T192166 block this, and does it have a timeline?
Aug 12 2019
This is fixed now, right? Yay! \o/
May 6 2019
May 5 2019
So this is an actual bug -- until now, we were presumably creating a bogus link (to https://en.wiktionary.org/wiki/User_talk:). What should we actually do here? Is the database actually supposed to ever have empty usernames in it? That doesn't seem reasonable. Surely conversion scripts should also have usernames?