Things my team is working on: MediaWiki-Platform-Team
Side projects I am working on (or planning to, eventually): User-Tgr
You can find more info about me on my user page.
User Details
- User Since
- Sep 19 2014, 4:55 PM (603 w, 1 d)
- Availability
- Busy until May 10.
- IRC Nick
- tgr
- LDAP User
- Gergő Tisza
- MediaWiki User
- Tgr (WMF) [ Global Accounts ]
Wed, Mar 25
We use toolforge/extjsonuploader. The UA policy isn't very clear about it but it requires a contact email or URL.
Wed, Mar 18
That and maybe type=createaccount.
Sun, Mar 15
With T348388: SUL3: Use a dedicated domain for login and account creation we have (at least for Wikimedia wikis) given up the ability to embed identity checks right into the forms which perform dangerous actions. This was probably a net security benefit, since the auth domain has no on-wiki scripts and much reduced functionality, and so authentication (including reauthentication) is much safer against XSS attacks (and consequently also against phishing attacks once WebAuthn becomes widespread and ensures authentication only works on the right domain). Redirect-based identity verification workflows are a good fit for a separate auth domain; and we probably don't want to evolve security in two different directions for Wikimedia and non-Wikimedia wikis.
See also T420150: Remove SUL2 B/C API behavior.
At a minimum, we should require verification when editing the user JS of a user with high privileges. (Related: T197087: Remove or limit ability to edit the user JS of another user who has higher privileges)
Or maybe it could be worked into cost-based rate limiting somehow? Envoy could let requests with meta=tokens in them through, and then MediaWiki would somehow indicate whether it was a "pure" token request, and Envoy could block it on the way back if not.
Yeah dealing with action API URLs will be a huge pain. You'd need to disallow all other meta/prop/list/generator parameter values, plus the export parameter at least.
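The check described above could look something like the following minimal Python sketch. The function name and the exact set of disallowed parameters are illustrative only; a real implementation would need the complete list of output-producing action API parameters, which is longer than this.

```python
# Sketch: decide whether an action API query is a "pure" token request,
# i.e. it asks for meta=tokens and nothing else that returns content.
# The helper name and CONTENT_PARAMS set are illustrative, not MediaWiki code.

CONTENT_PARAMS = {"prop", "list", "generator", "export"}  # plus any others that emit data

def is_pure_token_request(params: dict) -> bool:
    if params.get("action") != "query":
        return False
    # meta must be exactly "tokens" (meta accepts a |-separated list)
    if params.get("meta") != "tokens":
        return False
    # disallow every other output-producing parameter
    return not (CONTENT_PARAMS & params.keys())

print(is_pure_token_request({"action": "query", "meta": "tokens", "format": "json"}))  # True
print(is_pure_token_request({"action": "query", "meta": "tokens", "list": "allpages"}))  # False
```

Note the meta comparison also rejects combined values like meta=tokens|userinfo, which is the conservative choice here.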
I wonder if we could auto-add people to Trusted-Contributors based on some modest contribution criteria (like >1000 global wiki edits)?
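A rough sketch of how such a criterion could be evaluated against the action API's meta=globaluserinfo. The API parameters are real; the helper names, the promotion logic, and the exact threshold are just illustrations of the idea above.

```python
# Sketch: check whether a user clears a global-edit-count threshold via
# meta=globaluserinfo (guiprop=editcount). Error handling omitted; the
# function names and threshold are hypothetical.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

THRESHOLD = 1000  # ">1000 global wiki edits", per the comment above

def qualifies(guinfo: dict, threshold: int = THRESHOLD) -> bool:
    """guinfo is the 'globaluserinfo' object from the API response."""
    return guinfo.get("editcount", 0) > threshold

def fetch_global_info(username: str) -> dict:
    query = urlencode({
        "action": "query", "meta": "globaluserinfo",
        "guiuser": username, "guiprop": "editcount", "format": "json",
    })
    with urlopen(f"https://meta.wikimedia.org/w/api.php?{query}") as resp:
        return json.load(resp)["query"]["globaluserinfo"]

# Offline check against a canned response shape:
print(qualifies({"home": "enwiki", "editcount": 4321}))  # True
print(qualifies({"home": "enwiki", "editcount": 12}))    # False
```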
Fri, Mar 13
This is the snippet that sets ResourceServer::$user:
$userId = $request->getAttribute( 'oauth_user_id', 0 );
if ( !$userId ) {
	// Set anon user when no user id is present in the AT (machine grant)
	$this->user = User::newFromId( 0 );
	return;
}
Logstash: https://beta-logs.wmcloud.org/goto/fe9ab61fa8f33f764110446f85a1bcf3
Has a bunch of "Couldn't connect to server" errors on the objectcache channel, so I'm pretty sure it's an infra issue.
I don't think api.wikimedia.org has any authenticated endpoints?
(Also that broke getting an access token with client credentials, and this probably broke using an access token that was obtained with client credentials, so maybe one causes bots to retry more aggressively than the other.)
Works now on closed wikis. @sbassett I think the task can be made public.
Yeah, probably caused by rEOAUa750632f6b0e: Set 'sub' JWT field in client credentials access tokens.
Mar 12 2026
Thanks!
Is the fix easy to backport? It's nice not to have CI breaks in live production branches.
IIUIC this is supposed to be fixed by rMWc47f1b756d46: Generate local PHPUnit config before preparing parallel runs, but the error is still happening:
15:01:26 Error in bootstrap script: RuntimeException:
15:01:26 The PHPUnit config override does not appear to be auto-generated. Generate it manually by running `composer phpunit:config`, or automatically by running tests via `composer phpunit`.
15:01:26 #0 /workspace/src/vendor/phpunit/phpunit/src/Util/FileLoader.php(66): include_once()
15:01:26 #1 /workspace/src/vendor/phpunit/phpunit/src/Util/FileLoader.php(49): PHPUnit\Util\FileLoader::load()
15:01:26 #2 /workspace/src/vendor/phpunit/phpunit/src/TextUI/Command.php(567): PHPUnit\Util\FileLoader::checkAndLoad()
15:01:26 #3 /workspace/src/vendor/phpunit/phpunit/src/TextUI/Command.php(347): PHPUnit\TextUI\Command->handleBootstrap()
15:01:26 #4 /workspace/src/vendor/phpunit/phpunit/src/TextUI/Command.php(114): PHPUnit\TextUI\Command->handleArguments()
15:01:26 #5 /workspace/src/vendor/phpunit/phpunit/src/TextUI/Command.php(99): PHPUnit\TextUI\Command->run()
15:01:26 #6 /workspace/src/vendor/phpunit/phpunit/phpunit(107): PHPUnit\TextUI\Command::main()
15:01:26 #7 /workspace/src/vendor/bin/phpunit(122): include('...')
15:01:26 #8 {main}
15:01:26 Script phpunit handling the phpunit event returned with error code 1
15:01:26 Script @phpunit was called via phpunit:entrypoint
15:01:26 Worker exited with status 1

I believe this is caused by rECAU41c5a166ccbc: SUL3: Allow viewing Special:CreateAccount?returnto=… while logged in, which uses sul3-prefixed URL parameters for returnto etc. when locally redirecting from Special:CreateAccount to Special:Userlogin (since the local domain sees SUL3 account creations as logins). The getPreservedParams() call in AuthManagerSpecialPage::performAuthenticationStep() is not picking up these parameters anymore, so they don't go into AuthManager's returnUrl parameter, and so they won't be in the local-domain URL the auth domain redirects back to after successful signup. About 10% of the time the token store fails and returnUrl is not passed successfully to the central domain; in that case CentralAuth uses a different mechanism to generate the return URL, which is why this bug only happens about 90% of the time. (Although it happened 0 out of 3 tries for me in production, so maybe I was just really unlucky?)
We shouldn't sample error messages; they are infrequent enough, and not really an auth event in the first place. We should just use the authentication channel for that. But yes, the debug log file should include sampled events. The log seems normal otherwise: central autologin attempt -> sending the user to the signup page -> on the local domain, treating the successful signup as a login -> doing edge login on all the various top-level Wikimedia domains is the normal flow of things.
Mar 11 2026
...you probably shouldn't; we send that to authevents, which is sampled. We probably should not be doing that.
Never mind, I'm misremembering how this works. On the way back, these parameters should be in the URL, not in the token store. On the way forward they are in the token store (in returnUrl), though, and that's less reliable. If you are doing this with WikimediaDebug / keeping track of your request ID, you should see a Retrying local authentication message for the auth.wikimedia.org POST request if it's indeed a token store issue.
Thanks. That looks correct, aside from the loss of return parameters. Maybe the token store lost the data on the way back? That's generally the more reliable direction though, as the data is read back within a few hundred milliseconds of it being written.
One thing that could be improved: currently the script checks Title::isSiteJsConfigPage() but not $wgRawHtmlMessages (compare with PermissionManager::checkSiteConfigPermissions()).
T419747: Possible hardware issues on wikikube-worker2332.codfw.wmnet matches the timing.
T418507: Move wmfGetPrivilegedGroups(), $wmgPrivilegedGroups, $wmgPrivilegedGlobalGroups, GetSecurityLogContext and PasswordPoliciesForUser hook handlers to WikimediaCustomizations would create a PrivilegedGroups component in WikimediaCustomizations, that might be a good (temporary) place for the code.
I couldn't reproduce - returntoquery handling seems to work for me as intended. (I haven't experimented much, but it doesn't seem like something that's happening 90% of the time.)
Do you have an example redirect chain?
Note that in the future what we'd like to recommend for bots is OAuth 2 client credentials (so rather than storing an access token in the configuration, you store the client ID and secret, and then the bot can use some standard OAuth library to fetch a new access token every few hours). It's not actually supported yet, though.
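A minimal sketch of what the recommended setup would look like for a bot, assuming the OAuth extension's standard token endpoint on Meta. As the comment says, this grant isn't actually supported for this use yet, so treat this as a shape of the flow rather than working instructions.

```python
# Sketch of the client-credentials flow described above: instead of a stored
# access token, the bot keeps a client ID/secret and fetches short-lived
# tokens. TOKEN_URL is the OAuth extension's REST token route.
import json
from urllib.parse import urlencode
from urllib.request import Request, urlopen

TOKEN_URL = "https://meta.wikimedia.org/w/rest.php/oauth2/access_token"

def build_token_request(client_id: str, client_secret: str) -> Request:
    body = urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    }).encode()
    return Request(TOKEN_URL, data=body, method="POST")

# Usage (would perform a real network call):
# req = build_token_request("my-client-id", "my-client-secret")
# token = json.load(urlopen(req))["access_token"]  # refetch every few hours

print(build_token_request("id", "secret").data.decode())
```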
Mar 10 2026
(ClosedWikiProvider is in mediawiki-config, we don't really have a Phab tag for that. Should be moved to WikimediaCustomizations one day.)
Mar 9 2026
Thanks for investigating!
T416637: quibble-apitests failing on unrelated patches is probably also a case of this (only with user ID 1 rather than page ID 1).
Removed the private code (commit ID: e61fc28efe7a5cd5ca3ed9c52c17fd8a947f62f4), tested in production, works as expected.
(Also, why is it only being checked during the gate pipeline, not during normal tests?)
Not sure I'd bother with the README which we never changed from upstream's version, but I updated the "About" section at https://github.com/wikimedia/oauth2-server.
Seems fixed.
The task as stated in the description is done, but I want to 1) refactor the code so it can be reused for T418608: Add label for session type to API metrics, and 2) maybe add some information about the consumer to webrequest.
Do we want to do the same thing for the authentication dashboard?
Worked:
See also T419273: Limit the forwarding actions for Special:Random although you'd need a wiki to be extremely tiny for that to be a useful attack vector.
Yeah the xxh family are usually described as non-cryptographic hashes.
While we are making reauthentication more explicitly understood by AuthManager, we should also improve how it is logged.
Mar 8 2026
Rough plan:
- Split the production repo
- Create two new directories, private/PrivateSettings and private/PrivateLogic (names subject to bikeshed; also, maybe they should be one level higher, although then I think scap would have to be adjusted).
- Clone the git repo into those two directories.
- Edit so the PrivateSettings one only retains PrivateSettings.php without the hooks in it, and PrivateLogic retains everything but that. Update the readme files, etc.
- Drop the original git repo and its contents; only retain PrivateSettings.php, which now contains just two requires, for PrivateSettings/PrivateSettings.php and for whatever the new entry point is in PrivateLogic. Also keep the readme file, and replace its contents to explain what's going on.
- Update https://gerrit.wikimedia.org/r/plugins/gitiles/operations/mediawiki-config/+/master/private/readme.php in mediawiki-config, document the changes in the production version
- Update callers (probably just this one in mediawiki-config?) to require the two new entry points instead of the old one
- Update deployment-charts, not really sure what needs to be done here (here it seems to claim it's not used anymore)
- Update the remaining code references (which are just informational) and public documentation (here and probably a handful of other places)
Let's look at this again. With reauthentication required for JS edits, this might be more pressing (because it would let us exempt OAuth from reauth).
Somewhat related: T210909: Introduce secure mode to MediaWiki (which proposes disallowing CORS entirely while in secure mode)
Probably don't want this on-by-default for security reasons, and it doesn't make sense at all to show it during reauthentication (but there's a separate task for that).
I think we should do this. Cross-site JS edits are the obvious means of escalating an XSS attack from one wiki to another, and I doubt there's any legitimate use for them.
These days you need editsitejs to edit raw HTML messages, so this is just a matter of permission management (editsitejs/edituserjs/editmyuserjs).
I was pretty sure we had an older task about importScript revision pinning, but can't find it.
Anything truly dangerous (colliding with a specific, non-attacker-controlled account) would require a preimage attack, which is not feasible against md5. So +1 to doing this via gerrit.
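To illustrate the distinction being made above: verifying content against a fixed, pinned digest only requires preimage resistance, which md5 still has in practice (md5 is broken for attacker-chosen collision pairs, not for hitting a specific target digest). A small sketch:

```python
# Sketch: an attacker who wants to collide with a *specific* existing value
# must find an input matching a fixed digest (a preimage), which remains
# infeasible for md5. The matches_pinned helper is illustrative.
import hashlib

pinned = hashlib.md5(b"hello").hexdigest()
print(pinned)  # 5d41402abc4b2a76b9719d911017c592

def matches_pinned(candidate: bytes, digest: str) -> bool:
    # finding *any* candidate matching a fixed digest is a preimage attack
    return hashlib.md5(candidate).hexdigest() == digest

print(matches_pinned(b"hello", pinned))   # True
print(matches_pinned(b"hellp", pinned))   # False
```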
Mar 7 2026
Probably unrelated unless you saw the specific error message about cookies? Someone must have made a bunch of failed logins from your IP.
Yeah that's {T207557}. We should definitely fix that one; other than having to be cautious about SUL3, it seems straightforward.
Mar 6 2026
Github, Packagist, not sure if there's anything else.
On reflection, I think this task is not useful as it is: editing your own JS and editing another user's JS are very different things with very different risks, and should be discussed separately.
Another use case that came up is adding a query parameter to returntoquery (but otherwise not doing changes that could be incompatible with other handlers' intentions) and expecting that to show up on the next non-redirect response.
Filed T419229: Periodic job alerts could use some more information on what to do about making this clearer.
As an aside, it would be nice if the @phaultfinder user's profile description contained instructions on how to file tasks about it.
...but the alert cannot, the job needs to be deleted manually.
tgr@deploy2002:~$ kube-env mw-cron codfw
tgr@deploy2002:~$ KUBECONFIG=/etc/kubernetes/mw-cron-deploy-codfw.config
tgr@deploy2002:~$ kubectl get jobs -l team=mediawiki-platform,cronjob=purge-temporary-accounts --field-selector status.successful=0
NAME                                STATUS   COMPLETIONS   DURATION   AGE
purge-temporary-accounts-29545347   Failed   0/1           21h        21h
tgr@deploy2002:~$ kubectl delete job purge-temporary-accounts-29545347
job.batch "purge-temporary-accounts-29545347" deleted

