- User-brennen is my personal workboard
- brennen-scratchpad is an Etherpad for miscellaneous notes
- I work on the Wikimedia Release Engineering Team
- Release-Engineering-Team is our team workboard
- RelEngTeam-Weekly (Etherpad)
- Projects I work on:
User Details
- User Since: Feb 3 2019, 8:29 PM
- Roles: Administrator
- Availability: Available
- IRC Nick: brennen
- LDAP User: Brennen Bearnes
- MediaWiki User: BBearnes (WMF)
Fri, Nov 14
Done:
Thu, Nov 13
See T385529 ("Automatically export and publish a list of WMF deployed code repositories in Bitergia's JSON format") for some related background.
Wed, Nov 12
AFAICT, just the 2 minor ones in the phabricator repo. It's up for testing in devtools.
Fri, Oct 31
PHPUNIT_PARALLEL_GROUP_COUNT = cpus / (executors / 2) (gives us 8 for most instances, 16 for our bigger instance)
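For illustration, a minimal sketch of that arithmetic; the instance sizes below are hypothetical examples, not our actual CI agents:

```python
# Minimal sketch of the group-count formula above; the instance specs here
# are hypothetical examples, not the real CI agent sizes.
def phpunit_parallel_group_count(cpus: int, executors: int) -> int:
    """PHPUNIT_PARALLEL_GROUP_COUNT = cpus / (executors / 2)."""
    return cpus // (executors // 2)

# Assumed example specs: a 16-CPU agent with 4 executors -> 8 groups,
# a 32-CPU agent with 4 executors -> 16 groups.
print(phpunit_parallel_group_count(16, 4))  # 8
print(phpunit_parallel_group_count(32, 4))  # 16
```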
Oct 14 2025
As I understand it: gitlab.api_key in the local.json created by each scap deployment is derived from phabricator.local.gitlab_api_key, which is set by Puppet (from a lookup of profile::phabricator::main::gitlab_api_key) and written to /etc/phabricator/config.yaml.
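As a rough sketch of that chain (illustrative only, not the actual Puppet or scap code; the YAML nesting and the local.json output path below are assumptions):

```python
# Illustrative only: a rough sketch of the config flow described above,
# not the actual Puppet or scap implementation.
import json
import yaml  # assumes PyYAML is available

# Puppet looks up profile::phabricator::main::gitlab_api_key and writes it
# into /etc/phabricator/config.yaml as phabricator.local.gitlab_api_key.
with open("/etc/phabricator/config.yaml") as f:
    cfg = yaml.safe_load(f)

# The nesting below is an assumption about how that key is laid out.
api_key = cfg["phabricator"]["local"]["gitlab_api_key"]

# Each scap deployment then produces a local.json whose gitlab.api_key is
# derived from that value (output path here is hypothetical).
with open("/srv/phab/local.json", "w") as f:
    json.dump({"gitlab.api_key": api_key}, f, indent=2)
```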
Oct 1 2025
Online and can take over train ops from here.
Going ahead and resolving this, please reopen if anything breaks with those jobs.
Sep 30 2025
After updating analytics-refinery-maven-release, this one succeeded: https://integration.wikimedia.org/ci/view/All/job/analytics-refinery-maven-release/48/console
@brennen: I assume you don't know or remember anything about this, just like me. :)
Sep 26 2025
If editing /etc/phabricator/config.yaml and restarting doesn't change what shows up under "Local Config", what else needs to happen for the change to sync there? (I assume a proper deploy.)
Sep 25 2025
Currently stable on all wikis.
Haven't seen this in prod since rolling out backport.
Blocker resolved, logs clean-ish, rolling to all wikis.
Planning to deploy backports prior to rolling train; currently waiting on all-clear for deployment server DC switchover (T399891).
Raising to UBN as train blocker. Can handle backport shortly.
Sep 23 2025
Planning a deploy in the upcoming window.
Planning a deploy in the upcoming window.
Sep 15 2025
Marked this private. Assuming it's for high-level data and nothing remotely at the level of PII, I think it fits the rubric. Let us know if you run into any issues.
Sep 10 2025
Will need some investigation. May be Phorge upstream (Phabricator (Upstream)).
Aug 28 2025
There's followup work to be done in improving the setup checks upstream and digging into repo caching, but the proximate problem seems resolved after tuning the APCu cache.
Aug 25 2025
It still shows 128MB there. Let me do some restarts to change that.
Aug 7 2025
Thanks for investigating! usage_ping change LGTM.
Aug 5 2025
With the above mitigation in place, this seems less urgent in terms of web requests, but I'd still like to get to the bottom of it.
Question: Do we still have data showing whether the number of duck-sound=quack requests was at roughly the same level before and after our 2025-07-15 15:30 UTC deployment in T370266? (If it was the same, that would rule out f2a01dca392.)
Aug 4 2025
BUT, actually running a phpinfo() shows:
Puppet 7 servers never fail to surprise me with how much RAM they want. I'd definitely start by doubling the RAM on that VM before investigating anything else.
After reboot:
Unable to ssh to puppetserver. In the Horizon log tab for integration-puppetserver-01:
Jul 31 2025
Stable on all wikis.