I explained in T191182#4103647 why I think this is wrong, so I am reopening this task. If Differential has no future in Wikimedia development, then projects should be migrated and it should be turned off. Leaving yet another lingering system around just adds to technical debt, confusion, and fragmentation.
Would turning off Differential lose the content it already has and break links to it?
AFAIK this used to be a way to disable abusive accounts (by removing this right they could no longer log in, which normal blocking on wikitech didn't do, iirc). Did keystone roles take over that functionality? Even if rarely used, there should be an easy way to disable abusive accounts on WMCS.
Ping @Catrope - you created that instance two months ago.
At a quick look: GitInfo::getRemoteUrl is used to determine the remote URL; it just reads the git config file. The config file will have a different value for the remote (with or without .git) depending on whether mediawiki/core.git or mediawiki/core was checked out, although there's no functional difference between the two as far as I know.
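To illustrate (GitInfo itself is PHP; this is just a minimal Python sketch of the behavior, and the helper names are my own):

```python
import re
from pathlib import Path

def get_remote_url(git_dir):
    # Rough stand-in for GitInfo::getRemoteUrl: read the fetch URL of the
    # "origin" remote straight out of the git config file instead of asking git.
    in_origin = False
    for line in Path(git_dir, "config").read_text().splitlines():
        line = line.strip()
        if line.startswith("["):
            in_origin = (line == '[remote "origin"]')
        elif in_origin:
            m = re.match(r"url\s*=\s*(.+)", line)
            if m:
                return m.group(1)
    return None

def normalized(url):
    # mediawiki/core and mediawiki/core.git name the same repository,
    # so strip a trailing ".git" before comparing remote URLs.
    return url[:-4] if url and url.endswith(".git") else url
```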
Woohoo! Thanks Daniel and Tyler! :)
Per T188367, crons are preferred over jenkins.
See also T188367, which is about going the other way round (jenkins->cron) for mediawiki, and my comment there.
I wonder how this relates to T73305, which is about migrating away from a cron to jenkins for the puppet repo. Although these repositories are independent, it seems to me that the upsides/downsides on that other task still apply here. As I understand it, the gist of these two tasks is that they pull in opposite directions.
According to the OpenStack browser this uses ::standalone as of now. It seems the only thing left to do here is a more general change in how we handle puppet on the beta cluster (for example hiera). That's tracked at T161675.
Please provide a bit more information on what you were doing, which wiki in beta you specifically used, what part of the software you're referring to, or what the 'edit layer' option is. See also https://www.mediawiki.org/wiki/How_to_report_a_bug
en_rtl is in the table of projects. Under "other projects" there's en-rtl. Probably all that's needed is to swap the underscore for a dash in the langlist-labs file at the top of the mediawiki/config repo, as in the sketch below. Thus tagging as Easy.
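If that's right, the fix would be a one-line change along these lines (hypothetical; the exact context in langlist-labs may differ):

```
-en_rtl
+en-rtl
```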
It seems you used the same flavor for deploy1001 that tin had. This would've been a great time to switch to a different flavor (with a bigger disk) and resolve T166492. This comment is probably too late for tin->deploy1001, but maybe not for mira->deploy2001 (or whatever it will be called).
Apparently the reason was just "unused": https://tools.wmflabs.org/sal/log/AWDgixgYwg13V6286cnS
Per the link in the task description: "labs-puppetmaster/Labs Puppetmaster HTTPS is OK since 19m 45s".
These are two of the four remaining trusty instances in deployment-prep, and they have been failing puppet continuously for months. I wonder whether they still serve any purpose or should just be deleted - and if they are meant to stay, who would be responsible for upgrading them/resolving the puppet errors.
All the hosts that are failing puppet in deployment-prep (as seen on shinken.wmflabs.org) look familiar and seem to have their own tasks. So I think we can consider this done.
DNS entries instance-deployment-secureredirexperiment.deployment-prep.wmflabs.org. and *.secureredirtest.wmflabs.org. can probably go away as well?
Will WMF be able to run (full) PHP 7.1 any time soon? (Note: I am not asking about bumping MW to 7.1, just when WMF will be able to run it.)
(Also, it was supposed to be used for things like SSH host key gathering; what happened to that?)
Why on earth is that host named 1001? It doesn't make sense to use that convention in labs, which is eqiad-only.
Where/how does the ConfirmEdit extension currently set how "strong" a captcha should be?
20:29 <shinken-wm> RECOVERY - Puppet errors on deployment-db03 is OK: OK: Less than 1.00% above the threshold [0.0]
20:47 <shinken-wm> RECOVERY - Puppet errors on deployment-elastic07 is OK: OK: Less than 1.00% above the threshold [0.0]
20:50 <shinken-wm> RECOVERY - Puppet errors on deployment-ircd is OK: OK: Less than 1.00% above the threshold [0.0]
20:50 <shinken-wm> RECOVERY - Puppet errors on deployment-logstash2 is OK: OK: Less than 1.00% above the threshold [0.0]
20:51 <shinken-wm> RECOVERY - Puppet errors on deployment-mathoid is OK: OK: Less than 1.00% above the threshold [0.0]
20:52 <shinken-wm> RECOVERY - Puppet errors on deployment-tin is OK: OK: Less than 1.00% above the threshold [0.0]
20:53 <shinken-wm> RECOVERY - Puppet errors on deployment-prometheus01 is OK: OK: Less than 1.00% above the threshold [0.0]
20:53 <shinken-wm> RECOVERY - Puppet errors on deployment-ms-fe02 is OK: OK: Less than 1.00% above the threshold [0.0]
Each column in the "user" table is a row in the "describe user" view. The column "Null" in the "describe user" view above describes whether the respective column in the "user" table is allowed to have NULL as a value. It says nothing about whether we're nulling it on the cloud replicas for privacy reasons. You cannot see that from the "describe user" view at all.
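To make the distinction concrete, here's a hedged sketch of how you'd check both (pymysql is an assumed driver; the host and the column choices are placeholders):

```python
import pymysql  # assumed MySQL driver; host/credentials below are placeholders

conn = pymysql.connect(host="REPLICA_HOST", db="metawiki_p",
                       read_default_file="~/.my.cnf")
with conn.cursor() as cur:
    # Schema metadata: the "Null" column only says whether the table
    # definition allows NULL values in that column.
    cur.execute("DESCRIBE user")
    for field, col_type, nullable, *rest in cur.fetchall():
        print(field, col_type, "may be NULL" if nullable == "YES" else "NOT NULL")

    # Whether a column is actually redacted on the replicas only shows
    # up in the data itself, e.g. user_editcount carrying real values:
    cur.execute("SELECT user_id, user_editcount FROM user LIMIT 5")
    print(cur.fetchall())
```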
I'm not sure what that sentence means ("the Null field" is a bit ambiguous). If you're referring to the integer "0" for user_id, yes, it works the same way (if no user id is specified, 0 will be used).
20:18 <Hauskatze> jynus: when I use "describe user" on metawiki_p, the user_editcount field says it's NULLed, but in fact it's not, and should not
20:19 <Hauskatze> https://phabricator.wikimedia.org/P7068
Right, I already wondered whether we need them or whether they can be removed. I pushed that idea back because I didn't want to mix it into the commit getting rid of the wildcard vhost (to have smaller steps: one removing the wildcard vhost and replacing it with redirects, the other removing the unwanted redirects). I'll upload another patch with the correct relationship when I find some time.
What's the correct closed status for "Whoever fixed this did so inadvertently or did not know about this task."?
All cleanup done :-)
Currently unable to commit time for this.
Assigning to joe - it seems you're the one most comfortable (or the only one comfortable?) with apache changes. Also, per the previous -2 on the patch, it's blocked on you anyway.
Both Debian versions, though the stretch server isn't there anymore, afaik.
Quite the opposite: all app servers in deployment-prep have been replaced with stretch via T192071 (although you're right that the specific instance that caused problems is gone). jfyi
Just judging from the task titles, this and T183245: Ensure replica DB in labs is read-only look like duplicates?
In T192473#4163154, aaron wrote:
I'm not so familiar with the Kafka system (only the basic concept).
Once upon a time, appservers would insert jobs into a database table and jobrunner servers would read 'em from there.
Per the answer on the discourse discussion, see https://secure.phabricator.com/T10448#186240 for why upstream probably won't move $whatever (here: task creation) into its own notification setting, and https://secure.phabricator.com/T13069 for the preferred approach. I agree with their interpretation that this is just part of a more general problem, and the proposed modular solution seems better suited to solve that general problem, allowing very fine-grained notification control using the new system of mail stamps.
The downside is that this is not a trivial fix; it might take a while until we see the current system replaced by the new one.
Indeed, the jobqueue on beta is still broken, although it's unrelated to the logspam.
Thanks to Antoine, I now know that jobs are queued in Kafka. Fiddling a bit on deployment-kafka04 gives me the impression that all the jobs made it there and are sitting happily in the queue; somehow they just don't get consumed. From the description of profile::cpjobqueue I think deployment-cpjobqueue consumes the messages from Kafka and triggers the jobs on deployment-jobrunner03. There are some errors from deployment-cpjobqueue in logstash; I might take a look at those tomorrow - or rather, later today in my TZ.
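For reference, this is roughly how I'd peek at the queue in Python (kafka-python assumed available; the topic name is a placeholder, only deployment-kafka04 comes from this thread):

```python
from kafka import KafkaConsumer, TopicPartition  # kafka-python, assumed installed

# Check whether messages actually reached a job topic on the beta broker;
# a non-empty retained range with nothing being consumed matches what I saw.
consumer = KafkaConsumer(bootstrap_servers="deployment-kafka04:9092")
tp = TopicPartition("mediawiki.job.SOME_JOB_TOPIC", 0)  # placeholder topic name
consumer.assign([tp])
start = consumer.beginning_offsets([tp])[tp]
end = consumer.end_offsets([tp])[tp]
print(f"{end - start} messages retained in the partition")
# Comparing against the cpjobqueue consumer group's committed offset would
# show the actual consumer lag, but I don't know its group id offhand.
```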