Private account of @Lucas_Werkmeister_WMDE (he/him, Berlin timezone). Anything I do here is on volunteer time, even if it looks work-related :)
User Details
- User Since
- Jun 5 2016, 4:36 PM (514 w, 5 d)
- Availability
- Available
- IRC Nick
- lucaswerkmeister
- LDAP User
- Lucas Werkmeister
- MediaWiki User
- Lucas Werkmeister
Wed, Apr 15
This really sounds like a request for deployment-tool functionality (something like scap3) in a tool that is meant for service control instead.
Tue, Apr 14
First pass (untyped non-recursive, typed non-recursive, untyped recursive, typed recursive):
/tmp/perf.php:90:
array(2) {
[0] =>
int(67124648264680)
[1] =>
double(0.3472859859466553)
}
/tmp/perf.php:91:
array(2) {
[0] =>
int(67124648264680)
[1] =>
double(0.3502838611602783)
}
/tmp/perf.php:92:
array(2) {
[0] =>
int(67124648264680)
[1] =>
double(1.1621921062469482)
}
/tmp/perf.php:93:
array(2) {
[0] =>
int(67124648264680)
[1] =>
double(1.1699590682983398)
}
Second pass (untyped non-recursive, typed non-recursive, untyped recursive, typed recursive):
/tmp/perf.php:96:
array(2) {
[0] =>
int(67124648264680)
[1] =>
double(0.3465700149536133)
}
/tmp/perf.php:97:
array(2) {
[0] =>
int(67124648264680)
[1] =>
double(0.3486061096191406)
}
/tmp/perf.php:98:
array(2) {
[0] =>
int(67124648264680)
[1] =>
double(1.151196002960205)
}
/tmp/perf.php:99:
array(2) {
[0] =>
int(67124648264680)
[1] =>
double(1.1774170398712158)
}
Wed, Apr 8
Seems to be fixed now, thanks!
Tue, Apr 7
Could be, yeah. I guess we could add a first version that only supports application/x-www-form-urlencoded; that should cover the majority of use cases (all of them?), as long as users don’t accidentally send multipart/form-data instead.
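A first version along those lines could be quite small. As a minimal sketch (hypothetical, not the actual tool code), the parser would accept only application/x-www-form-urlencoded and reject everything else explicitly, so that an accidental multipart/form-data body fails loudly instead of being misparsed:

```javascript
// Hypothetical sketch: accept only application/x-www-form-urlencoded bodies
// and parse them with the standard URLSearchParams class; anything else
// (e.g. multipart/form-data) is rejected explicitly instead of misparsed.
function parseFormBody(contentType, rawBody) {
    // strip parameters like "; charset=UTF-8" before comparing the media type
    const baseType = (contentType || '').split(';')[0].trim().toLowerCase();
    if (baseType !== 'application/x-www-form-urlencoded') {
        throw new Error(`Unsupported Content-Type: ${baseType}`);
    }
    return Object.fromEntries(new URLSearchParams(rawBody));
}
```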
Mon, Apr 6
Sounds reasonable to me, thanks. (If I hear any other complaints from ACDC users I’ll report it here and/or there as seems appropriate.)
Sounds good, thanks. (FWIW, I haven’t heard any new complaints similar to T421161, but I have very limited visibility here – the badtoken API errors in Grafana are probably the best data we have.)
Now it’s working without Scribunto, thanks @Fomafix!
I don’t see how that’s a reason to close the task? I don’t really care whether you think it’s related to the DC switchover or not, I care about the issue getting fixed. And to me the Grafana graph linked in the task description, looking at badtoken errors in the last 30 days, doesn’t really look like a return to normal yet (though the logarithmic scale makes it hard to interpret, and AFAICT my volunteer account doesn’t have sufficient privileges to preview an edited version with a linear scale):
Now (following T419034#11788469) it works! This edit was made with just the client ID (no client secret) and successfully refreshed the access token before making the edit. (The 401 response even had CORS headers, including access-control-expose-headers: Retry-After,WWW-Authenticate, so that this worked fully in the browser.)
Sun, Apr 5
I’ve now updated m3api-oauth2 to handle non-MediaWiki HTTP-level errors with WWW-Authenticate response headers, in m3api v1.1.0 + m3api-oauth2 v1.0.4 (also v1.0.5). As far as I can tell, this was successful – this edit automatically refreshed the access token (I added some console.log()s in node_modules/ to make it visible).
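The behaviour described above — refresh once when an HTTP-level 401 carries a WWW-Authenticate header, then retry — can be sketched roughly like this. This is a simplified stand-in, not the actual m3api-oauth2 code; `doFetch` and `refreshToken` are hypothetical callbacks:

```javascript
// Hedged sketch of the retry-on-401 pattern: a 401 response that carries a
// WWW-Authenticate header is treated as "access token expired", so we
// refresh the token once and retry the original request with the new one.
async function fetchWithRefresh(doFetch, refreshToken, token) {
    let response = await doFetch(token);
    if (response.status === 401 && response.headers.has('www-authenticate')) {
        token = await refreshToken(); // obtain a fresh access token
        response = await doFetch(token); // single retry with the new token
    }
    return response;
}
```

Note that this only works in the browser if the 401 response actually exposes its headers via CORS, as observed above.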
Fri, Apr 3
Still happening, still causing accidental edits: https://meta.wikimedia.org/w/index.php?title=Talk:QuickCategories&diff=prev&oldid=30348897
This change also broke Tool-quickcategories:
Tue, Mar 31
I don’t know how to answer that question beyond the IRC log link that’s already in the task description… there was a suspicious increase in errors right around the time of the DC switch, and the switchover announcement said to file a task for any issues, so I did.
(This task isn’t related to confidential vs. non-confidential or browser-side vs. non-CORS-limited clients though, it affects everyone as far as I’m aware.)
Sun, Mar 29
Seems to work (on Beta and individual consumers) \o/
Sat, Mar 28
Fri, Mar 27
Hm, apparently this is… intentional? “Add handler for /?url=... testing”, from 2019, does what it says on the tin.
Wed, Mar 25
ACDC is a Commons gadget. (It doesn’t have a Phabricator tag yet, though maybe I should apply one. Not sure yet.)
Tue, Mar 24
This might be caused by a general issue due to the datacenter switchover: T421168: Session store issues causing badtoken errors, session failures, logouts (late March–April 2026)
I don’t know what’s going on here but it seems worth investigating, so filing the task for visibility :)
ACDC uses mw.Api.postWithEditToken:
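(The actual ACDC call site isn’t reproduced here, but for context, the general contract of postWithEditToken — retry exactly once with a fresh CSRF token after a badtoken error — can be sketched like this. `getToken` and `post` are hypothetical stand-ins, not the real mw.Api internals:)

```javascript
// Hedged sketch of the retry behaviour mw.Api's postWith(Edit)Token
// provides: on a `badtoken` API error, discard the cached CSRF token,
// fetch a fresh one, and retry the POST exactly once.
async function postWithEditToken(getToken, post, params) {
    let result = await post({ ...params, token: await getToken({ cached: true }) });
    if (result.error && result.error.code === 'badtoken') {
        // stale cached token: bypass the cache and retry once
        result = await post({ ...params, token: await getToken({ cached: false }) });
    }
    return result;
}
```

If even the freshly fetched token is rejected (as the session store issue above seems to cause), this single retry isn’t enough and the caller sees the badtoken error.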
Mon, Mar 23
Mar 17 2026
Should be fixed now, thanks for the report! The perils of having tools translated into more than 50 languages :)
Mar 14 2026
The Moroccan Arabic templates were merged and deployed ca. ten minutes ago :)
Mar 9 2026
I would probably go for the history rewrite – mainly for those cache directories which otherwise bloat the history forever. (Although the toolforge build service clones with --depth=1 if I’m not mistaken, so I guess it wouldn’t affect that anymore. But it would still affect any developer who clones the repo.) And at that point you might as well drop the credentials too. But if you want to keep the history, I think it would probably be fine to do so at this point, now that the database credentials have been refreshed and the kubernetes certificates will expire soon; you should just wait a few more days, if I’m not mistaken:
Great, thanks! I guess this means the k8s certs were already rotated automatically?
(Tagging Toolforge in the hope of finding someone who can do the credential rotation. Last I checked, I think I didn’t have enough access to do it myself – I have root on the Toolforge servers, but I don’t have cloudcontrolXXX access to regenerate replica.my.cnf. I might be able to regenerate kubernetes credentials for tools, but I’m not sure I want to risk that without being familiar with the process, tbh.)
It might be okay to make the task public, but I think it needs to stay open – IIUC, a Toolforge admin (or Cloud VPS admin?) needs to rotate the credentials (k8s certificate, ToolsDB password), since we don’t know who else might have accessed them.
Mar 7 2026
Mar 4 2026
Side note: I tried to see if there’s a standard way to signal expired access tokens in a response, but it doesn’t sound like it; RFC 6749 intentionally omits this:
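In the absence of a standard in-response signal, one common workaround is for the client to track the expires_in value from the token response itself and refresh proactively, rather than waiting for an error. A minimal sketch (hypothetical helper; the 60-second safety margin is an arbitrary choice):

```javascript
// Hypothetical sketch: since OAuth 2.0 (RFC 6749) does not standardize how
// a resource server signals "access token expired", the client can track
// the `expires_in` value from the token response itself and refresh a bit
// before the deadline instead of relying on the error response.
function makeExpiryTracker(now = () => Date.now()) {
    let expiresAt = 0;
    return {
        // call this whenever a token response arrives
        record(expiresInSeconds) {
            expiresAt = now() + expiresInSeconds * 1000;
        },
        // refresh early (60 s margin) to avoid racing the actual expiry
        needsRefresh() {
            return now() >= expiresAt - 60 * 1000;
        },
    };
}
```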
Probably not much, though we should at least remove this sentence once we’ve confirmed T323855 is fixed.
The error is indeed unrelated and affects both examples: ⇒ T419034: Custom OAuth 2 error from Wikimedia infrastructure breaks automatic retry of requests
Mar 3 2026
Well, the refresh doesn’t quite work properly, though I’m not sure it’s due to this issue. My web app makes a meta=tokens request, and the API responds with {"httpCode":401,"httpReason":"Jwt is expired"}, but the response is missing Access-Control-Allow-Origin, so my code crashes before it can even try a refresh.
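For context: when a 401 response lacks Access-Control-Allow-Origin, the browser hides it from fetch() entirely, which rejects with a TypeError instead of resolving with status 401 — so the client never gets to inspect the response. A defensive sketch (`doFetch` and `refresh` are hypothetical callbacks, not my actual web app code) that treats such an error as a cue to attempt a refresh anyway:

```javascript
// Hedged sketch of a workaround for the failure mode described above:
// a CORS-hidden 401 surfaces as a rejected fetch() with a TypeError,
// so we treat that error as a possible token expiry, refresh once,
// and retry. A genuine network outage will simply fail again.
async function fetchMaybeRefresh(doFetch, refresh) {
    try {
        return await doFetch();
    } catch (e) {
        if (e instanceof TypeError) {
            // could be a CORS-hidden 401 (or a real network error) –
            // refreshing and retrying once is a reasonable gamble
            await refresh();
            return await doFetch();
        }
        throw e;
    }
}
```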
Seems to work! I made this edit with the following local diff to the webapp-clientside-vite-guestbook m3api example:
Mar 2 2026
I guess T418580: Deploy 2FA requirement using $wgRestrictedGroups to Wikimedia production, instead of OATHAuth's custom config (tech news) will effectively resolve this task, removing the confusion by tying actual group membership to 2FA?
Feb 25 2026
I’m guessing the Node buildpack is itself outdated?
Feb 23 2026
Well, I just tried sending an email to myself via Special:EmailUser and it worked fine. So from my side this seems to have worked :)
Feb 19 2026
Feb 18 2026
Boldly making this a train blocker (⇒ UBN!) for now; I don’t know how widely OAuth 2 is used compared to OAuth 1.0a (which doesn’t seem to be affected), but given that several users noticed the issue already (#wikimedia-cloud) I think it’s reasonable to guess that this is causing some breakage.
Feb 16 2026
Feb 15 2026
Feb 12 2026
haven’t compared their histories yet, in part because Git’s “dubious ownership” security mitigation is annoying
Feb 9 2026
There are a bunch of separate Git repositories in FNBot/ (along with more directories that aren’t Git repos and which will just go into the main request-legacycode repo):
I don’t remember anything ^^ but T389540#10774116 is probably most of what I found at the time (sadly with less “methodology” than I’d like now).
Feb 6 2026
Thanks! I can try to help out later, but not sure I’ll be able to get to it this weekend (depends on how smoothly a certain MediaWiki upgrade will go ^^).
Jan 30 2026
I’ve encountered the same restriction in this job (though admittedly I was just trying to confirm the Cloud VPS IP space, so it’s not like this is blocking any work from me at the moment).
Jan 29 2026
(Acknowledged, but I just want to point out how preposterous it is that letting just one cloud provider access a notionally public website can be considered “unreasonable” in this day and age. Fuck scrapers :( )
Jan 27 2026
Yup, thank you! Committed to the main repo here: https://gitlab.wikimedia.org/repos/m3api/m3api-oauth2/-/commit/31c2577983
Jan 24 2026
I guess the way to test this would be to deploy the new build, test it in the matrix-telegram-test gateway, and be ready to quickly restore the previous image if it doesn’t work out? (Or use #wikimedia-cloud and let the people in there just deal with the test messages.)
Jan 23 2026
Jan 22 2026
Jan 21 2026
(Ideally this would’ve been reported as a non-public Security task, but now that there’s a public merge request detailing the issue I’m not sure if it’s useful to still security-protect the issue.)
Previously: T286415: XSS in ISA tool
Jan 18 2026
FWIW, I’ve tried to get m3api-oauth2 CI running on the WMCS runners instead (wmcs tag), but so far haven’t managed to get Chrome/Chromium running there yet (latest job).
Jan 17 2026
According to this screenshot, the IP address of that particular job runner was 159.203.90.138, which is apparently in 159.203.0.0/16, somewhere in DigitalOcean, LLC.
Jan 14 2026
Just to mention it here – the group mechanism seems to work like a charm, I just noticed that the new m3api-rest repo got picked up without any issue \o/
Dec 23 2025
Dec 17 2025
AFAICT this is still happening. @Ladsgroup updated wdvd to PHP 8.4, but webservice status is reporting PHP 7.3, probably because that’s what’s in the public_html/service.template file.
