Mon, Jan 1
The thing that has been approved is in the task description section labelled "Current proposal". Some time before April 2018, we will migrate the remaining uses of PHP 5.x in WMF production to either PHP 7 or HHVM. Then we will update the version requirements in MW core master, before the release of MW 1.31.
Dec 13 2017
This is now moving to last call after a TC discussion.
Dec 12 2017
One alternative is to use the reference form of foreach:
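A generic sketch of what that looks like (the original snippet isn't reproduced here, so the array and values below are illustrative, not the actual code in question):

```php
// Illustrative only: iterating by reference lets the loop body modify
// the array elements in place, instead of writing back by key.
$settings = [ 'enwiki' => false, 'dewiki' => false ];
foreach ( $settings as $wiki => &$value ) {
	$value = true; // writes through to $settings[$wiki]
}
unset( $value ); // break the dangling reference left over after the loop
```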
Dec 8 2017
Incredible how a single line of rubbish code I wrote in 2004 can have so many people scratching their heads for so long. I hadn't seen this task before today.
The fix is merged, and searching logstash for SiteConfiguration shows no further errors of this type.
Nov 29 2017
ll_lang is actually the interwiki prefix; it's written by LinksUpdate based on what's currently in the wikitext of the page in question, so you can't just change it in the database. You'd have to change it on every page that has an explicit (non-Wikidata) language link to nowiki.
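To make that concrete, here is a rough sketch (not an existing maintenance script) of how one might list the affected pages using the standard langlinks schema; the output format is made up:

```php
// Hypothetical fragment: find pages with an explicit language link to nowiki.
// These rows come from LinksUpdate parsing the wikitext, so fixing them means
// editing the pages (or their templates), not the langlinks table itself.
$dbr = wfGetDB( DB_REPLICA );
$res = $dbr->select(
	'langlinks',
	[ 'll_from', 'll_title' ],
	[ 'll_lang' => 'nowiki' ],
	__METHOD__
);
foreach ( $res as $row ) {
	$title = Title::newFromID( $row->ll_from );
	$name = $title ? $title->getPrefixedText() : "#{$row->ll_from}";
	echo "$name -> nowiki:{$row->ll_title}\n";
}
```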
Nov 14 2017
In puppet, the following things require php5:
If the minimum is going to be "either PHP 7 or HHVM", then we need to stop using PHP 5.6 in production. Hence the subtask I just added. I'm changing the title back to MW 1.31 since that is what @Anomie is proposing. It will have to be done prior to the migration to stretch if we are going to keep to that timeline.
This was fixed by @Krenair by just not using getopt() anymore, which seems good enough to me. I confirmed that we're not using it, except in tests.
Nov 10 2017
It helps a little bit, but you still don't get multiversion. Maybe we should use auto_prepend_file.
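For reference, the directive would be set per SAPI (php.ini, Apache config, or FPM pool config), along these lines; the path is illustrative, not the actual WMF layout:

```ini
; Hypothetical sketch: run a prepend script that performs the multiversion
; setup before whichever MediaWiki entry point was actually requested.
auto_prepend_file = /srv/mediawiki/multiversion-prepend.php
```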
Sep 26 2017
There is research-client.cnf, accessible from the researchers group, and stats-research-client.cnf, which is identical but accessible from the stats group. There's no such file as analytics-research-client.cnf; I updated the docs in one place where I saw that filename. I see in puppet/modules/admin/data/data.yaml that researchers is the group usually used for this; I don't see any stats group. So, please give Cindy SSH access in the researchers group.
Sep 21 2017
I don't think it's a duplicate; we could theoretically do both. But like Max says, there's not really a rationale for it anymore.
Sep 20 2017
Declining due to T176370: Migrate to PHP 7 in WMF production
If you just want an approximately PCRE-like syntax, you could translate the regex to a Lua pattern. Scribunto has equivalent code going in the other direction, in Scribunto_LuaUstringLibrary::patternToRegex(), which you could look at for ideas. Obviously you would only be implementing a subset of PCRE features.
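As a rough illustration of the kind of translation meant here (this is not Scribunto code, and it only handles a handful of PCRE constructs):

```php
/**
 * Illustrative sketch only: translate a tiny subset of PCRE syntax into a
 * Lua pattern. Groups, alternation, character classes and braces are
 * rejected rather than translated.
 */
function pcreSubsetToLuaPattern( $regex ) {
	$classMap = [
		'd' => '%d', 'D' => '%D',
		'w' => '%w', 'W' => '%W',
		's' => '%s', 'S' => '%S',
	];
	$out = '';
	$len = strlen( $regex );
	for ( $i = 0; $i < $len; $i++ ) {
		$c = $regex[$i];
		if ( $c === '\\' && $i + 1 < $len ) {
			$next = $regex[++$i];
			if ( isset( $classMap[$next] ) ) {
				$out .= $classMap[$next]; // \d -> %d, \w -> %w, ...
			} elseif ( !ctype_alnum( $next ) ) {
				$out .= '%' . $next; // \. -> %. (a literal in Lua patterns)
			} else {
				throw new InvalidArgumentException( "Unsupported escape: \\$next" );
			}
		} elseif ( strpos( '.*+?^$', $c ) !== false ) {
			$out .= $c; // these happen to mean the same thing in Lua patterns
		} elseif ( $c === '%' || $c === '-' ) {
			$out .= '%' . $c; // literal in the regex, magic in a Lua pattern
		} elseif ( strpos( '()[]{}|', $c ) !== false ) {
			throw new InvalidArgumentException( "Unsupported metacharacter: $c" );
		} else {
			$out .= $c; // plain literal character
		}
	}
	return $out;
}
```

For example, pcreSubsetToLuaPattern( '^\d+\.\d+$' ) would give '^%d+%.%d+$'.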
Sep 19 2017
@Legoktm is taking this on next quarter.
I applied that patch in https://gerrit.wikimedia.org/r/#/c/378818/ and deployed it.
Sep 1 2017
I reverted that patch because it had a serious error, as described in T174639, and there was no response from the developer after a day.
Reverted, cherry-picked, and deployed.
Aug 28 2017
MariaDB [mw]> \s
--------------
mysql  Ver 15.1 Distrib 5.5.36-MariaDB, for debian-linux-gnu (x86_64) using readline 5.1
Aug 18 2017
I moved ChangesFeed from includes/changes to the Feed namespace because I think it was missorted. It doesn't call anything else in the changes directory, and nothing in the changes directory calls it. It's integrated with ChannelFeed from Feed.php.
The status is just what I wrote in the task description of T171267; I haven't done any more work on it since then, except for merging another change into master. I built a package for trusty, but not for jessie. Building a package for jessie will require at least updating the control file. I haven't tested it in deployment-prep.
Aug 15 2017
Looking at the number of classes per namespace, including the changes above, two namespaces stand out as being underpopulated:
Aug 11 2017
I've started to work on the core alias map. A few questions:
Aug 2 2017
I wrote up my concerns about the deduplication scheme at T153333#3491632, since that's where most of the discussion on that topic was. Apologies for the late review, but seeing the code has made the problems clearer to me.
I'm very skeptical about deduplication via the comment_text(100) index, for the following reasons:
Jul 27 2017
Note that legal review is complete now; there doesn't seem to be any blocker.
Jul 26 2017
Note that the whole Wikidata request rate spike only reduced free disk space from 11% to 9%, so by deleting the relevant rows we might expect a similar 2% increase in free space. It's not the main culprit for increased disk space usage in the long term; that award apparently goes to wrapclass and responsiveimages, which are cumulatively responsible for ~38% of rows.
The cache miss spike I mentioned earlier was apparently due to Wikidata: 75% of the cache entries written with the relevant expiry time were for wikidatawiki, whereas that wiki normally accounts for only a very small percentage. Of those cache entries, 99% had the options "!canonical!wb3", whereas Wikidata cache options are normally highly fragmented. $wgCacheEpoch was updated for wikidatawiki about 8 hours before the start of the spike (https://gerrit.wikimedia.org/r/#/c/367391/ for T170668). It's possible that someone ran a bot or crawler to fetch a lot of wikidatawiki pages, either coincidentally after the cache epoch bump, or in an attempt to fix a related problem.
I had a look at cache-fragmenting parser options on pc1004, in the parsercache.pc001 table.
I added disk free space, and the derivative of disk free space, to the parser cache dashboard in Grafana: https://grafana.wikimedia.org/dashboard/db/parser-cache?refresh=5m&orgId=1&from=1500783303871&to=1501027037629
Jul 21 2017
The key, and the error message in English, were done in February 2014. The pool type was added in April 2014. The URL is included in the monolog stream. So I guess it's just the PoolCounter server that is missing? That would be easy enough to add for errors that originate in the extension, like connect failures. For errors that come from core, like pool overflow, the PoolCounter server is more difficult to determine, and less likely to be relevant.
MW already provides a log of all PoolCounter errors, including queue overflow, in the poolcounter channel. So this is presumably just a matter of monitoring configuration, which my team is not very familiar with.
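For what it's worth, routing that channel somewhere watchable is a one-line configuration change on the MediaWiki side; a sketch (the path and level here are illustrative, not the WMF setup):

```php
// Hypothetical sketch: send the existing 'poolcounter' log channel to its
// own file at WARNING and above, so overflow and connect errors can be
// tailed or fed into whatever alerting the monitoring side prefers.
$wgDebugLogGroups['poolcounter'] = [
	'destination' => '/var/log/mediawiki/poolcounter.log', // illustrative path
	'level' => \Psr\Log\LogLevel::WARNING,
];
```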