
Editing user JS/CSS pages of another user should require elevated security
Open, Needs Triage, Public

Description

Like T197137: Editing sitewide JS/CSS pages should require elevated security but for editing other users' JS/CSS (possible for interface admins).

Event Timeline

Restricted Application added a subscriber: Aklapper.

I came here to open a similar ticket, but since this already exists, I'll just put my comments here.

I totally agree with the concept here, but it really should be generalized to any advanced permissions. I am an admin, and a checkuser on enwiki, and an ombud. It would not be hard for somebody to trick me into running some rogue script as happened in today's incident. If that were to happen, my account could do a lot of damage before I realized what was going on.

The question asked here is completely on target. When I exercise my advanced permissions, it really should require that I re-authenticate, just like I have to do with sudo on a server command line. I do have an alternate non-privileged account which I use on my phone and in public spaces, but switching back and forth is enough of a hassle that it is unreasonable to expect people to do that all the time. There should be a quick and easy way for an advanced permission holder to re-authenticate on any action which requires those permissions. It's not complete protection against what happened earlier today, but it would sure reduce the attack surface.

For this to work, re-authenticating to edit one's own user space should not count as re-authenticating to do other dangerous actions.

Otherwise, suppose I add a malicious script to my own common.js. Well, in order to do that, I'd have to re-authenticate, giving the script just what it wants: full access to ''everything'' (including password changes!) for the next N minutes until the authentication times out.

it really should be generalized to any advanced permissions.

See parent task.

For this to work, re-authenticating to edit one's own user space should not count as re-authenticating to do other dangerous actions.

T208667: Tie reauthentication (login with elevated security) to a specific security level

I'm using my Wikiploy a lot for deploying scripts. This uses a BotPassword, which is basically a token with limited capabilities. I think that should still be allowed, or there should be some other mechanism for CI/CD-like deployments.
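
For context, a BotPassword deployment like this boils down to an ordinary Action API edit, authenticated as "user@appname". A rough sketch (the helper name and return shape are made up for illustration; only the API parameter names are real):

```javascript
// Hypothetical helper showing the shape of a CI/CD deploy step: a POST to
// api.php with standard Action API edit parameters.
function buildEditRequest(apiUrl, title, text, csrfToken) {
  const params = new URLSearchParams({
    action: 'edit',
    title: title,     // e.g. 'MediaWiki:Gadget-foo.js'
    text: text,       // the new page content
    token: csrfToken, // CSRF token fetched via action=query&meta=tokens
    format: 'json',
    assert: 'user',   // fail instead of silently editing logged-out
  });
  return { url: apiUrl, method: 'POST', body: params.toString() };
}
```

Any elevated-security requirement placed in front of this request path is what would break such pipelines.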

Perhaps 2FA should only be required for editing scripts of other users.

Slippery slope that would be too annoying to deal with if not implemented properly. At most, something like this could require 2FA once a day; otherwise, most users will just curse whoever came up with this change. In an ideal scenario, this should probably be even simpler than 2FA (something like bawolff's suggestion of having to confirm the action on a separate website). Editing other users' JS should require the same as editing site-wide JS, though.

Perhaps 2FA should only be required for editing scripts of other users.

The concern in this task is that if someone adds something malicious to the site common.js, it cannot then edit any of your .js pages. This means attackers cannot spread the attack to users' common.js pages (and thereby stay malicious even after the attack in the site common.js is removed).
In fact, requiring elevated security largely (but not 100%) mitigates a further issue: if a malicious browser extension is installed (which can vandalize on your behalf and even steal your session cookie, something device-bound sessions can mitigate), or if your session is otherwise stolen, the attacker cannot modify common.js without knowing your password (and 2FA token, if required).

Perhaps 2FA should only be required for editing scripts of other users.

The concern in this task is that if someone adds something malicious to the site common.js, it cannot then edit any of your .js pages. This means attackers cannot spread the attack to users' common.js pages (and thereby stay malicious even after the attack in the site common.js is removed).
In fact, requiring elevated security largely (but not 100%) mitigates a further issue: if a malicious browser extension is installed (which can vandalize on your behalf and even steal your session cookie, something device-bound sessions can mitigate), or if your session is otherwise stolen, the attacker cannot modify common.js without knowing your password (and 2FA token, if required).

OK, I'm going to say it... What you are saying is not really what happened yesterday, is it? The user willingly added a script to their own JS, and then all hell broke loose. That was the main problem: adding a script from a random user. He would probably have added the script even if there had been a 2FA step to complete, because he did want to add that script. I mean, sh't happens, and I don't want to be hard on the guy, but he did what he did with a superuser account, and that should not have happened. That might be a problem with missing procedures or with training. (edit: I mean this is not really a software issue)

And if someone changes MediaWiki:Common.js with a malicious script? Well, WebSockets are blocked by CSP, so it's not that bad. Still, you can do anything AutoWikiBrowser can do and more. You can spread the load of the attack across all users currently connected (attacking from all their IPs). I don't think editing some user script is the most important threat here if MediaWiki:Common.js is compromised. So to me, T197137 is a much more important task to do.

Yes, all in all it is unfortunate that a mistake on the part of the WMF is being used to push half-baked solutions that would not have prevented the issue of sticking a fork into various electrical sockets until one backfires. The handling of the incident was subpar to say the least. I can only hope that the re-login 'solution' would be amended swiftly to be less intrusive.

I second @Nux's messages: requiring 2FA to make any edit to JS/CSS pages would instantly kill any external deployment tools for gadgets. I maintain complex tools on frwiktionary from external git repos I control, with custom CI/CD pipelines and scripts. Requiring 2FA would make it too difficult/annoying (if not impossible) to maintain these tools properly. And as @Nux said, it would not prevent a similar accident from happening in the future.

I second @Nux's messages: requiring 2FA to make any edit to JS/CSS pages would instantly kill any external deployment tools for gadgets. I maintain complex tools on frwiktionary from external git repos I control, with custom CI/CD pipelines and scripts. Requiring 2FA would make it too difficult/annoying (if not impossible) to maintain these tools properly. And as @Nux said, it would not prevent a similar accident from happening in the future.

Anyway, the future way to manage Gadgets would be Produnto.

Perhaps 2FA should only be required for editing scripts of other users.

The concern in this task is that if someone adds something malicious to the site common.js, it cannot then edit any of your .js pages. This means attackers cannot spread the attack to users' common.js pages (and thereby stay malicious even after the attack in the site common.js is removed).
In fact, requiring elevated security largely (but not 100%) mitigates a further issue: if a malicious browser extension is installed (which can vandalize on your behalf and even steal your session cookie, something device-bound sessions can mitigate), or if your session is otherwise stolen, the attacker cannot modify common.js without knowing your password (and 2FA token, if required).

OK, I'm going to say it... What you are saying is not really what happened yesterday, is it?

Well, it happened. While the first addition of the malicious script to common.js was an actual human decision, the script then replicated itself into other people's common.js, and those people then unknowingly edited MediaWiki:Common.js to put it back there in case it was reverted. So, yes: additional reauth for editing JS wouldn't have prevented the original bad things from happening, but it would have made it much easier to remove the malicious code from the wiki without the need to make it read-only.

So, yes: additional reauth for editing JS wouldn't have prevented the original bad things from happening, but it would have made it much easier to remove the malicious code from the wiki without the need to make it read-only.

There was never a need to make the wiki, and especially all wikis, read-only. That only happened because there is no protocol for these types of issues despite them already being very known. What should've happened was some account going into safe mode and dealing with the consequences. That's it. Doing site-wide read-only for hours was just another mistake by the WMF yesterday.

So, yes: additional reauth for editing JS wouldn't have prevented the original bad things from happening, but it would have made it much easier to remove the malicious code from the wiki without the need to make it read-only.

There was never a need to make the wiki, and especially all wikis, read-only. That only happened because there is no protocol for these types of issues despite them already being very known. What should've happened was some account going into safe mode and dealing with the consequences. That's it. Doing site-wide read-only for hours was just another mistake by the WMF yesterday.

Safemode only affects pages that are loaded after enabling it. People who loaded a page earlier could still have the malicious script running in an open tab, which wouldn't be affected by safe mode.

That script was running synchronous code once per page load, which anyone with basic JS knowledge should be able to tell. Having it in an open tab would've done nothing after it had already run once on that tab.

On reflection, I think this task is not useful as it is, editing your own JS and another user's JS are very different things with very different risks, and should be discussed separately.

Wrt bots, we can probably find a way to only apply restrictions to session mechanisms which are vulnerable to takeover via XSS.

There were up to 35 edits per minute, which is usually the max I get using a bot account. Should the action rate be limited somehow, or an increased edit rate requested from the software before any mass edits, and granted for a limited time?

There were up to 35 edits per minute, which is usually the max I get using a bot account. Should the action rate be limited somehow, or an increased edit rate requested from the software before any mass edits, and granted for a limited time?

Normal users are limited to 90 edits per minute. However, sysops, bots (except bots on Wikidata), account creators, global rollbackers, global sysops, stewards and staff have no rate limit.
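
For illustration, a per-user limit like this can be thought of as a sliding-window counter. This is a hypothetical sketch, not MediaWiki's actual rate-limit implementation (which is configured via $wgRateLimits):

```javascript
// Sliding-window edit rate limiter: at most `maxEdits` edits per `windowMs`.
// All names here are illustrative.
class EditRateLimiter {
  constructor(maxEdits, windowMs) {
    this.maxEdits = maxEdits; // e.g. 90 edits...
    this.windowMs = windowMs; // ...per 60000 ms for normal users
    this.timestamps = [];     // times (ms) of recent allowed edits
  }

  // Returns true and records the edit if an edit at time `now` is allowed.
  tryEdit(now) {
    // Drop edits that have fallen out of the sliding window.
    this.timestamps = this.timestamps.filter(t => now - t < this.windowMs);
    if (this.timestamps.length >= this.maxEdits) return false;
    this.timestamps.push(now);
    return true;
  }
}
```

A "request an increased rate for a limited time" mechanism, as suggested above, would amount to temporarily raising `maxEdits` for one account.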

editing your own JS and another user's JS are very different things with very different risks

Note also that many user scripts are hosted in the user namespace, so "own JS" may be used by other users.

Anyway, the reason my proposal includes one's own JS is that the attack spread malicious code to users' common.js pages, which takes a lot of effort to clean up (and users who haven't noticed will continue vandalizing the wiki even after the site common.js is fixed).

I'm a global interface editor and regularly fix security issues across hundreds of wikis. Sometimes through automation (tourbot), but most often through the on-wiki editor in order to properly review, test, and verify things in-context because similar snippets are often adapted or used in subtly different ways.

I often encounter copies of snippets that we fixed years ago but that are still in forks across wikis. Whenever I come across one, I'd usually do a global search and work my way through the list. I encountered such an issue today but I've given up trying to mitigate it, because on each wiki I'm asked again for my password and 2FA, and then again on the second pass through (as more than an hour will have passed by then).

When I do this, I'm often in safemode via the forcesafemode preference (globally, or locally on that wiki). Or, when I use tourbot, it doesn't matter as I'm not visiting the wiki or executing any of its code.

I considered three ideas to improve this:

forcesafemode preference

At first glance, allowing editing of site scripts without re-auth when the forcesafemode preference is enabled makes sense. It is also easy to implement, and would help with web editing. It would also offer relief for semi-automatic bots, because I can program my bot to toggle this preference around sessions (even though the bot isn't affected anyway, and changing the preference will also impact my personal web sessions; this is a workaround I can opt in to and live with).

Note that the safemode=1 query string is not a good substitute, because it isn't preserved across navigations or form submissions.

But: a malicious user script may programmatically set this from client-side JavaScript prior to making an edit to a sitewide script, from within that same browser tab, thus defeating the purpose.

For this to be an effective relief without being self-defeating, you'd want some kind of time-delay lock (whereby the preference stores a timestamp instead of a boolean, only takes effect after a delay, and notifies the user of this on subsequent pageviews during that time), or you'd bind it to a given login session (i.e. it will only apply the next time you log in, not allowing a mid-session change), or you'd simply require a re-auth... which is what we did already, so let's focus on the other two ideas instead.
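
The time-delay lock variant could be sketched like this (the function name, storage shape, and the 24-hour delay are all illustrative, not an existing MediaWiki interface):

```javascript
// The preference stores *when* it was set rather than a boolean, and only
// takes effect after a fixed delay, so a malicious script that flips it
// cannot benefit within the same session.
const SAFEMODE_DELAY_MS = 24 * 60 * 60 * 1000; // hypothetical 24-hour delay

function isSafemodeEffective(prefSetAtMs, nowMs) {
  if (prefSetAtMs === null) return false;       // preference not set at all
  return nowMs - prefSetAtMs >= SAFEMODE_DELAY_MS;
}
```

During the waiting period the UI would warn the user that safemode is pending, which also makes a surreptitious change by a script visible.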

Exclude BotPasswords

Can we forego this restriction for sessions authenticated with a bot password?

  • The account must have the right to edit site scripts (i.e. interface admin, staff, etc.)
  • The account created a bot password in the past that explicitly grants that particular right to that BotPassword (not by default).
  • Bot passwords are only valid via the API, where client-side scripts don't run. Logins via the web UI with a bot password are denied.

As best as I can tell, this is safe. Thoughts?
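
The three conditions above could be combined into a single check, sketched here with made-up session fields (MediaWiki's real session API looks different; 'editsiteconfig' stands in for whatever grant covers site JS/CSS):

```javascript
// Illustrative gate for the proposed BotPassword exemption: a session may
// skip the reauth requirement only if all three conditions hold.
function maySkipReauth(session) {
  return session.userCanEditSiteJs === true     // account holds the right itself
    && session.isBotPassword === true           // authenticated via BotPassword
    && session.grants.includes('editsiteconfig') // grant given explicitly, not by default
    && session.isApiRequest === true;           // bot passwords never work in the web UI
}
```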

Native wgReauthenticateTime in CentralAuth

As of yesterday, to fix an issue on 200 wikis I have to login and present 2FA 200 times. As described above, this means that effective immediately no staff or volunteers are able to mitigate security issues in site scripts or gadgets effectively, because doing so simply takes too much time.

  • Open edit page.
  • Redirected to auth.wm.o.
  • Unlock password manager.
  • Enter password.
  • Unlock phone.
  • Open 2FA app and copy 2FA token.
  • Repeat, the above will have timed out, reset, or gone to sleep/lock by then.

I see there is already a task for skipping the password step (T208668: Do not ask for password on reauthentication when 2FA is enabled).

I suggest we also centralize it such that this need not be repeated hundreds of times.

We may want to tie reauth mode to a given purpose first (T208667). It is bad enough that one can escalate from a mild action to a sensitive action on the same wiki; we wouldn't want to, e.g., unlock editing sitewide scripts (editsitejs) on en.wikipedia after a reauth for changing your email (ChangeEmail) on nl.wiktionary.
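
The idea behind T208667 can be sketched as reauth state keyed by (wiki, purpose) rather than a single global "elevated" flag. Everything here (class name, storage, TTL) is illustrative:

```javascript
// Each reauth unlocks exactly one (wiki, purpose) pair for a limited time.
class ReauthState {
  constructor(ttlMs) {
    this.ttlMs = ttlMs;        // how long one reauth stays valid
    this.grants = new Map();   // "wiki\npurpose" -> expiry timestamp (ms)
  }

  grant(wiki, purpose, nowMs) {
    this.grants.set(`${wiki}\n${purpose}`, nowMs + this.ttlMs);
  }

  isElevated(wiki, purpose, nowMs) {
    const expiry = this.grants.get(`${wiki}\n${purpose}`);
    return expiry !== undefined && nowMs < expiry;
  }
}
```

With this shape, a reauth for ChangeEmail on nl.wiktionary simply never answers the editsitejs question on en.wikipedia.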

Exclude BotPasswords

Can we forego this restriction for sessions authenticated with a bot password?

  • The account must have the right to edit site scripts (i.e. interface admin, staff, etc.)
  • The account created a bot password in the past that explicitly grants that particular right to that BotPassword (not by default).
  • Bot passwords are only valid via the API, where client-side scripts don't run. Logins via the web UI with a bot password are denied.

As best as I can tell, this is safe. Thoughts?

@Tgr also proposed this yesterday, and I think this makes sense and should be safe.

  • Open edit page.
  • Redirected to auth.wm.o.
  • Unlock password manager.
  • Enter password.
  • Unlock phone.
  • Open 2FA app and copy 2FA token.
  • Repeat, the above will have timed out, reset, or gone to sleep/lock by then.

If you set up a passkey (which you can already do today), this will go faster, because you won't need to unlock and use your phone, only your password manager.

I see there is already a task for skipping the password step (T208668: Do not ask for password on reauthentication when 2FA is enabled).

Also/alternatively, we also have passwordless login coming soon (T419198) which will allow one-step login (and therefore one-step reauth) if you have a passkey.

I suggest we also centralize it such that this need not be repeated hundreds of times.

I think it could make sense to have a cross-wiki reauth state, but we should think about the pros and cons some more. It might make sense not to do it if this kind of multi-wiki site script editing is done infrequently and/or if it could relatively easily be done through a BotPassword instead, because then the infrequent annoyance might be a reasonable price to pay for making it harder for malicious scripts to spread across wikis.

We may want to tie reauth mode to a given purpose first (T208667). It is bad enough that one can escalate from a mild action to a sensitive action on the same wiki; we wouldn't want to, e.g., unlock editing sitewide scripts (editsitejs) on en.wikipedia after a reauth for changing your email (ChangeEmail) on nl.wiktionary.

Yes, we should definitely do that regardless.

I really don't like that Wikipedia now forces me to re-login completely. I just tried to edit a gadget and had to type in my password twice and enter a token twice. Perhaps the token was wrong, I don't know (it didn't say!). Entering just a token XOR just a password should be enough. Even in LastPass I'm only asked for my token once a month. I mean, I'm grateful for making the case for my Wikiploy being the only sane way to edit gadgets, but still, normal edits should not be that hard ;)

Normal users are limited to 90 edits per minute. However, sysops, bots (except bots on Wikidata), account creators, global rollbackers, global sysops, stewards and staff have no rate limit.

Rarely should a regular user need to go above 10 edits per minute, and even that should be limited to some max number of edits. Same for admins, most of the time.

The thing is, as an interface administrator, any script I'm using has the interface admin rights 100% of the time. But I need that right less than 1% of the time. I wouldn't mind activating the right only when I need it (just like we do for pseudobots).
Same with some sysop rights. I don't block, nuke or massdelete that often, yet a bad actor whose script I'm using can change their script and immediately start blocking and deleting.

There are wikis where every sysop got interface admin rights but has no understanding of how it all works. It's surprising how rarely we see these kinds of attacks, TBH.

Rarely should a regular user need to go above 10 edits per minute

This is wrong for rollbacks. Rolling back mass non-consensus edits or bot errors requires making hundreds of edits at a rate of more than 1 edit per second.

Rarely should a regular user need to go above 10 edits per minute

This is wrong for rollbacks. Rolling back mass non-consensus edits or bot errors requires making hundreds of edits at a rate of more than 1 edit per second.

Yes, but you don't do it every day, all the time. So you could go to your sysop console (yet to be built) and enable that superpower for a limited time. It makes things way more secure and takes only 10 seconds of your time.
I mean, this "edit war" https://meta.wikimedia.org/w/index.php?title=MediaWiki%3ACommon.js&date-range-to=2026-03-05&tagfilter=&action=history was really hard to watch. And it only happened because people who were not there to use their interface admin right still had that right.

95% of rollbackers in such cases are not sysops. (Five years ago on ruwiki we needed to roll back hundreds of non-consensus edits made by a local bureaucrat.) So this console should be available to any experienced user.

Just wanted to link to my comment in https://phabricator.wikimedia.org/T197160#11685043 for this thread as well, about how we've been thinking about re-auth -- which is definitely more streamlined than what people are currently experiencing with the quick patch we rolled out last week.

(tagging for visibility, as FWICS this proposal was made as a follow-up to the user javascript incident)

The account created a bot password in the past that explicitly grants that particular right to that BotPassword (not by default).

I suspect plenty of people just check all the boxes without reading them.
Could be manageable with better UI + asking current / newly elected interface editors to review their bot password permissions, though. And even without the grants mechanism, I think bot passwords would be fairly safe.
(We should probably do T208008: Consumer owner-only oauth proposals should require reauth and then extend this to OAuth as well.)

Bot passwords are only valid via the API, where client-side scripts don't run. Logins via the web UI with a bot password are denied.

You can log in with a bot password using Special:ApiSandbox. (I do this a lot for testing, although with very restrictive grants.) You'll then have a bot password session and a normal session in parallel (they use different cookie names), and the bot password will take precedence for API requests but be ignored for web requests. Hypothetically, if you encounter some malicious site/user JS in this situation, it would automatically use your bot password for making JS edits via the API.

Seems too fringe a situation to be worth worrying about, though.

I suggest we also centralize [reauthentication] such that this need not be repeated hundreds of times.

On one hand, I imagine this would be a big help for some workflows (e.g. cross-wiki regex replace for gadget maintenance, using some browser-based assisted editing tool). On the other hand, it would make privilege escalation across wikis easier. I think that's a significant practical concern: it's a lot easier to engineer an XSS on the Karekare Wikipedia, which has one admin and two active users, than on the English Wikipedia, which has a thousand admins and three hundred thousand active users.

I think if we do this, it should be done in a way that needs to be triggered intentionally and is clearly differentiated from single-wiki reauthentication.

Tgr renamed this task from Editing user JS/CSS pages should require elevated security to Editing user JS/CSS pages of another user should require elevated security. · Mar 8 2026, 5:41 PM
Tgr updated the task description.

On reflection, I think this task is not useful as it is, editing your own JS and another user's JS are very different things with very different risks, and should be discussed separately.

I've been bold and narrowed the task to be about the edituserjs permission (and not editmyuserjs), and filed T419347: Editing your own JS/CSS pages should maybe require elevated security.

(tagging for visibility, as FWICS this proposal was made as a follow-up to the user javascript incident)

I don't think this would have stopped or affected the user javascript incident though. Wasn't the incident a MediaWiki sitewide JS file, not another user's JS file? And wasn't it via the API, not the web browser?

For that specific incident, the first mitigation that comes to mind is some way to disallow the API edit to MediaWiki:Common.js until it is approved in a prompt, or a Special:Preferences option along the lines of "allow your user scripts to edit MediaWiki:Common.js and similar files".

(tagging for visibility, as FWICS this proposal was made as a follow-up to the user javascript incident)

I don't think this would have stopped or affected the user javascript incident though. Wasn't the incident a MediaWiki sitewide JS file, not another user's JS file?

Yes, but it could have edited the global.js of Wikipedia admins, e.g. my global.js, and that would have been much worse as it would spread to all projects. It would be good to avoid it spreading too much. Within reason... as was said before, in many cases staff and IAs must edit user JS (e.g. 1), sometimes a lot of it (e.g. 2).

And wasn't it via the API, not the web browser?

Technically everything is an API call, just with various formats. You can also submit forms via JS. The only problem might be that you would need a token that is only served by a specific request, but you could probably redirect fast enough that most users won't notice or won't know what happened (you could also use frames, but those seem to be blocked now).

So I would agree that a reasonable block is indeed needed. I'm not a fan of the current (hopefully very temporary) implementation for gadgets (which now doesn't even allow copying code). I think it might be enough if there were a temporary session state which a user acquires just by confirming it on a page that does not load any scripts/gadgets. At the moment https://en.wikipedia.org/wiki/Special:Preferences doesn't load any scripts. It shouldn't be too hard to create https://en.wikipedia.org/wiki/Special:Confirm_High_Risk, which could be as simple as a form that sets a secure cookie (HTTPS-only and not accessible to JS). That cookie would be set for an hour or so and would be required to do what an IA does, as long as the session is browser-based (so bots using a BotPassword session would still be able to do CI/CD work).
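
Such a confirmation page would essentially just emit a Set-Cookie header along these lines (the cookie name and helper are made up for the sketch; the attributes are standard HTTP cookie attributes):

```javascript
// Builds a Set-Cookie header value for the hypothetical
// Special:Confirm_High_Risk page: one-hour lifetime, HTTPS-only, and
// invisible to client-side JavaScript so a malicious script cannot read
// or replay it.
function highRiskConfirmCookie(token) {
  return [
    `confirmHighRisk=${encodeURIComponent(token)}`,
    'Max-Age=3600',    // valid for one hour
    'Secure',          // sent over HTTPS only
    'HttpOnly',        // not readable from client-side JS
    'SameSite=Strict', // not sent on cross-site requests
    'Path=/',
  ].join('; ');
}
```

Because the cookie is HttpOnly and the confirmation page loads no scripts, on-wiki JS never gets a chance to acquire or exfiltrate the token.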

At a minimum, we should require verification when editing the user JS of a user with high privileges. (Related: T197087: Remove or limit ability to edit the user JS of another user who has higher privileges)