
Third-party resources policy
Open, In Progress, MediumPublic

Description

Instructions

  1. Define the problem or opportunity (WHAT).
  2. Outline the importance of addressing the problem or opportunity (WHY).

WHAT?

In one sentence, what is the problem or opportunity?
There is no clear Wikimedia policy on the use of third-party resources, especially executable javascript loaded into Wikimedia websites. The absence of such a policy creates security and privacy risks for Wikimedia users, while exposing the Foundation to financial and reputational damage.

Note: The generic term “third-party resources” is purposely used here so as to be able to cover a scope larger than only javascript resources, if needed in the future.

What does the future look like if this is achieved?

  • Gadget makers do not send user information to third parties
  • There is a clear policy, cautioning against loading executable javascript
  • Exceptionally, gadgets that interact with third parties carry a clear privacy notice
  • Gadget makers educate their peers regarding third parties and reference the policy
  • WMF avoids reputational damage (example), and financial loss due to privacy violations

What happens if we do nothing?

  • There is continued confusion about the handling of third-party resources in Wikimedia projects (eg: T230124).
  • Users face real-life safety consequences because ill-intended third parties stand between their data and the Wikimedia platform
  • Unmitigated security and privacy risks related to third-party resources are exploited, leading to violations of users’ privacy and of platform integrity
  • The Foundation’s reputation is damaged if a user’s privacy or security is compromised as a result of its platform not policing the use of third-party resources.

WHY?

Identify the value(s) this problem/opportunity provides. Add links to relevant OKRs.
Rank values in order of importance and be explicit about who this benefits and where the value is.

User Value/Organization Value AND Objective it supports and How

User Value/Organization Value | Objective it supports and How

  • Users’ privacy is shielded from external parties, rather than their data being shared without their knowledge (eg: T275754) | Thriving Movement, especially regarding Safe and Secure Spaces (T-O13-D1)
  • Gadget makers and developers are educated and empowered to mitigate privacy risks using the policy | Platform Evolution, especially allowing for the mitigation of risks for both development teams and operational stakeholders, building trust in our development processes (KR3)
  • Legal and Security staff do fewer reviews of gadgets loading third-party resources, since the community enforces the policy upstream | Thriving Foundation - Technical Infrastructure, in particular around a decrease in consumption of operational services (Resilience’s KR3).

Why are you bringing this decision to the Technical Forum?
What about the scope of this problem led you and your team to seek input across departments/organizations?

  • The use of third-party resources impacts thousands of users across Wikimedia projects (Cf. T275754, T65598).
  • Any change to it will involve collaborating with various stakeholders, both within the Foundation and outside.
  • This issue needs broader visibility so as to gather valuable feedback

Event Timeline

Hi @sguebo_WMF I had/have exactly the same problem/opportunity going through the technical decision forum (T262493) with a slightly broader scope (as well as security, privacy, I'm also concerned about code quality, and establishing norms between editors and WMF staff).

Following T262493#7584789 I've begun drafting a policy and collating feedback on the talk page for it:
https://www.mediawiki.org/wiki/User:Jdlrobson/Extension:Gadget/Policy

Perhaps we could combine efforts here?

sguebo_WMF triaged this task as Medium priority.Feb 16 2022, 1:04 PM
sguebo_WMF updated the task description. (Show Details)
sguebo_WMF moved this task from Backlog to In Progress on the Privacy Engineering board.

Following T262493#7584789 I've begun drafting a policy and collating feedback on the talk page:
https://www.mediawiki.org/wiki/User:Jdlrobson/Extension:Gadget/Policy

Perhaps we could combine efforts here?

Hello @Jdlrobson,
Thanks for looking at the problem statement and commenting here. It is my understanding that your aim is to reduce the amount of gadget-generated errors/noise in logs. And one way to achieve that is by providing gadget makers with guidelines, hence the policy you have drafted.

If my understanding is correct, I think your approach intersects, at least partly, with the one I am taking here, in particular regarding the need to provide gadget makers with a policy. I’m concerned with your scope being broader, but I always like the idea of combining efforts :).

Are there specific areas of your proposal where you think I could help?

If my understanding is correct, I think your approach intersects, at least partly, with the one I am taking here, in particular regarding the need to provide gadget makers with a policy. I’m concerned with your scope being broader, but I always like the idea of combining efforts :).

I think that's a fair concern. If we wanted to avoid this we could have a separate policy for security and I could modify mine to simply say "All gadgets must adhere to the security gadget policy" and link to it. However I think the two would eventually need to be tied together at some point, so the shared efforts around collecting and responding to feedback would likely be mutually beneficial.

Are there specific areas of your proposal where you think I could help?

I think a good starting point would be to make sure User:Jdlrobson/Extension:Gadget/Policy is updated so it respects the ideal world the security team would like to see.

  • One thing for example that I am interested in is whether we should be allowing code from untrusted domains, and what a trusted domain is. Right now a lot of gadget code will be loaded from other wikis, some of which we maintain and some of which we don't.

  • Should WMF developers/wiki script editors be checking this code for security issues? If so, how?
  • How should security issues be reported? Publicly/some other mechanism?

My current plan for rollout is as follows:

  • Informal feedback round (in progress)
  • Update policy based on feedback.
  • New more formal feedback round from WMF staff e.g. please respond by X date
  • Update policy based on feedback.
  • A formal round of feedback from the community.
  • We'll update the interface to provide a notice on pages where JS can be added that links to the policy:
<div class="mw-message-box mw-message-box-notice">All code written here is expected to <a href="#">adhere to the gadget policy</a>.</div>

Screen Shot 2022-02-16 at 11.07.18 AM.png (1×2 px, 461 KB)

We had an RFC open about this for a couple of years, which has some analysis and discussion of legitimate use cases and UX for opt-in: T208188: RFC: Partial opt-out method for Content security policy

I'm confused by this problem statement. The Privacy Policy already forbids anything on Wikimedia projects that causes the UA to contact any third-party website, including Toolforge and WMCS, for any purpose (any HTTP header and the client IP are covered by the definition of "Personal information"). So regardless of whether it's executable code, an image, a webfont, JSON/JSONP data, etc. it is currently bright-line forbidden. What, then, would this "clear Wikimedia policy on the use of third-party resources" cover?

My current plan for rollout is as follows:

  • Informal feedback round (in progress)
  • Update policy based on feedback.
  • New more formal feedback round from WMF staff e.g. please respond by X date
  • Update policy based on feedback.
  • A formal round of feedback from the community.
  • We'll update the interface to provide a notice on pages where JS can be added that links to the policy:
<div class="mw-message-box mw-message-box-notice">All code written here is expected to <a href="#">adhere to the gadget policy</a>.</div>


Hey @Jdlrobson, thanks for sharing those ideas. I like the plan you laid out, in particular the suggested update to the interface of .js pages, as it seems to be a good way to educate gadget developers about best practices, and warn them eventually. It is my understanding that the kind of support you'd want from the Security-Team will mostly include:

  • Looking into the security aspects of the Gadget policy draft (eg: should gadgets allow code from untrusted domains? btw, do you have an example?)
  • Provide ideas and guidelines for security best practices for gadgets
  • Share ideas about how security issues related to gadget could be reported in a safe way

Is my understanding correct?

Generally speaking, reviewing gadgets' security does not fall within the Security-Team's purview, but I'd be glad to speak with my team to explore if and how we may provide some support with respect to the areas I listed above. Meanwhile, let me know if I missed any points.

We had an RFC open about this for a couple of years, which has some analysis and discussion of legitimate use cases and UX for opt-in: T208188: RFC: Partial opt-out method for Content security policy

Hey @daniel, thanks for referencing the RFC. Indeed it has interesting points. It is my understanding that the conversation centered around CSP, but it is good to note that the need for having exemptions for accessing certain third-party resources has been repeatedly voiced for many years (other than T208188, T239077 also contains legitimate cases for exemptions).

I'm confused by this problem statement. The Privacy Policy already forbids anything on Wikimedia projects that causes the UA to contact any third-party website, including Toolforge and WMCS, for any purpose (any HTTP header and the client IP are covered by the definition of "Personal information"). So regardless of whether it's executable code, an image, a webfont, JSON/JSONP data, etc. it is currently bright-line forbidden. What, then, would this "clear Wikimedia policy on the use of third-party resources" cover?

Hi @Xover, thanks for commenting here.

When it comes to gadgets, I am afraid it is not as simple or clear as you say :).
First, it is my understanding that personal information can be shared with third parties for particular purposes, with the user’s permission (Cf. “With Your Permission” subsection of the Privacy policy). Whether a gadget informing users that it will load external resources can be defined as “having permission” is up for Legal to determine. But the practice has been, at least in some cases (T65598), to grant exemptions to some gadgets loading third-party resources, especially when they benefit tens of thousands of users.

Secondly, looking at the various conversations around Wikimedia projects loading third-party resources, there seem to be legitimate cases for gadgets loading third-party resources (see description in T208188). Furthermore, there seems to be agreement around the fact that denying access to third-party resources without providing users with alternatives is not desirable (T208188#6030446, T208188#6030030). As you probably already know, there are even ongoing efforts to govern access to third parties and relevant exemptions with technical means (CSP), but even CSP does not currently have a policy specifically scoping its enforcement/exemptions, as noted in T239077.

In line with all the above, the problem statement suggests that there be a policy formalizing/giving guidelines for what is acceptable, and standardizing the exemptions with respect to gadgets loading 3rd parties — because in practice there seems to be a need for such exemptions and guidelines.

I hope the explanation above clarified things a bit. Let me know if you think there are some points I'm missing.

Hmm. Then I think the problem statement is a little bit the wrong way around: it reads as if the aim is to lock down a currently reigning "Wild West" state of affairs, but in light of your clarification it sounds like the focus is more to enable a use of third-party resources that are difficult or impossible to (legally) do today but which would be of benefit to the Movement. And, obviously, when enabling such use it is desirable to do so in a controlled way that prioritises privacy, is sustainable, does not negatively impact performance, and so forth; but this then is not the focus so much as the consequence.

It also seems like the scope of this problem statement is narrowed down to opt-in uses. That is, how should consent be obtained, and what are the limitations / guidance / best practice that apply even after consent is present.

Both of which are, I think, crucial clarifications. For one thing, it means there needs to be a very early conversation with Legal to clarify things like whether documenting use of third-party resources is sufficient or whether there needs to be an individual and active opt-in (pop-up dialog, cookie consent type thingy), and what would be the minimum principles from the Privacy Policy that would still apply also after opt-in. "What are the hard limits of the solution space?" Does each Gadget need to have a Privacy Policy, and display or link it in its opt-in interaction; or must it link the Privacy Policy of the third-party service, which presumably must have one; or do we mostly care about shielding the Foundation from liability (for which purpose the opt-in itself is sufficient: it shifts the liability to the user)?

That framing also means something for what kinds of technical facilities will need to be available in order to enable this. e.g. the various opt-in mechanisms discussed previously in the context of CSP (but CSP is now no longer the framing context, merely one tool in the toolbox). Or, if there is a need to isolate Gadgets that use third-party resources (that is, none of their code can execute before consent is obtained), it means they can't ask for opt-in themselves, which in turn means Mediawiki must provide the facility to do so. It also means we'll have a need for a concept for Gadgets that are enabled by default but still needs opt-in before executing, which will touch fundamental assumptions in ResourceLoader.

And I think that means we should collect use-cases in a structured way in the context of this task, rather than in the narrower and more technical CSP task. For example, my own immediate concern is the ability to provide a few specialised webfonts to all visitors to Wikisource (cf. T166138). Active individual opt-in will be impracticable for this use case, but a local (per-project) privacy policy addendum combined with a Google Fonts proxy might be one workable approach. This is very similar to all uses of third-party resources from Common.js/Common.css (must that be bright-line forbidden?). Or, put another way, not all third-party resources are the same, and not all uses of third-party resources have the same needs; but the problem statement should address that head on so that "third-party resource" isn't just an obfuscated way to say "javascript".

PS. T65598 is security-limited so most people here cannot see its content.

I've shared my thoughts on this before, but I'll try to summarise in one place here and in simple terms, as requested by @sguebo_WMF.

I think what we need is a hard technical limitation ("CSP") that prevents without exemption the loading of executable CSS and JavaScript from third-party origins. As a long-time gadget author and community member myself, I do not believe there are valid use cases that would become hampered by this. Any non-malicious script should be trivial to import as-needed. And besides, even if in some edge case this were inconvenient, I think the risk and cost is simply too great. It would be very hard to explain to a user what impact this has, not to mention the inherent need to want to lock this down for certain user groups or wikis, or approve it by hand for certain gadgets/origins, and thus add more complexity to the whole system.

In addition, I think we also need the same hard technical limitation on all other third-party connections that can leak PII and thus violate the WMF privacy policy. Such as API requests for data, and image/font requests. This would affect a number of prominent use cases where gadgets communicate with services in Toolforge and/or other third parties. I think it makes sense for us to offer a way for users to consent to specific domains to be connected to, for the purposes of data fetching only. The CSP system has specific rules for what kind of connections and requests are blocked vs allowed. This means we can allow users to consent to sharing specific information, without compromising the security or integrity of our website (e.g. no executable JS/CSS code, no implicit sharing of cookies, etc.).
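Concretely, the split described here could be expressed as a CSP header along these lines (shown wrapped for readability; the values are purely illustrative and not Wikimedia's actual configuration): executable code and styles are restricted to the site's own origin, while data fetches are allowed only to origins the user has consented to.

```
Content-Security-Policy:
    script-src 'self';
    style-src 'self';
    connect-src 'self' https://consented-tool.toolforge.org
```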

The way I envision this would work, is that scripts and gadgets specify in their metadata which (if any) origins they require a connection to. Upon enabling such a gadget, we can provide a standard consent prompt that thus blocks enabling of the gadget via the user interface, unless they approve those domains. This standard prompt is essential as otherwise we create both a poor experience where doing the wrong thing is easy (enabling a gadget that can't work), as well as likely subject ourselves to social engineering and competition in how convincing ad-hoc consent workflows become (e.g. "Go to this page and enter my domain in your allowed list"). That would also have the downside of leaving no public trail on-site in terms of which gadget needed it and who associated the domain with that gadget (gadget metadata has a public edit history, which our community can monitor). We wouldn't want third-party websites to start explaining how to add certain domains directly here.

The other benefit of such a standard prompt is that we can internally associate the permission with a "reason", e.g. "for gadget X". This trail is informative to users ("why is domain X allowed?") as well as for Security engineers investigating a compromise. We could also potentially revoke allowances if their associated resources no longer exist, no longer need it, or are no longer enabled.

Lastly, the standard prompt means that gadget developers have one less thing to take care of. They simply declare the metadata and the system takes care of the rest.
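As a sketch of what that metadata declaration could look like (the requiresOrigins option is hypothetical; nothing like it exists in the Gadgets extension today), a line in MediaWiki:Gadgets-definition might read:

```
* translateHelper[ResourceLoader|requiresOrigins=https://translate.example.org]|translateHelper.js
```

The system would then show the standard consent prompt for https://translate.example.org before the gadget can be enabled.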

The above is distilled from T208188: RFC: Partial opt-out method for Content security policy.

Hello, some quick updates.

The feedback gathered here on Phabricator and through the TDF has surfaced the following points:

  • Third-party resources being loaded into Gadgets and UserScript can create security and privacy issues
  • There is a need to have a policy focusing on how gadget and user script developers utilize third-party resources, but administrative control alone does not address the issue: there needs to be some technical enforcement.
  • There is currently no consensus on exemptions. Some users believe there are legitimate use cases warranting exemptions while others simply do not.

In line with the TDF process, the proposal moved on to the Research and Prototype phase. Through that phase, the Security-Team collaborated with WMF-Legal to craft a draft policy, reflecting the points mentioned above. The goal was to create a baseline text for conversation, while making sure the draft is consistent with other Wikimedia policies.

Over the coming weeks, I will be liaising with a few trusted community members to have initial community insights and adjust the policy text accordingly. Because the TDF and Phabricator have mostly been collecting insights from staff, the intent with the initial community insights gathering is to bring a bit more community lens to that work before engaging in a larger public discussion on the policy.

I’ll share more updates here if and when they become available.

Reminder that T262493 is also in progress (although currently stalled), which also establishes a policy but, rather than focusing on security and privacy issues, focuses on best practices and interactions between gadget developers and engineering staff. It would be great for these documents to eventually converge in some way.

Realizing now I never replied to this. Answers inline.

My current plan for rollout is as follows:

  • Informal feedback round (in progress)
  • Update policy based on feedback.
  • New more formal feedback round from WMF staff e.g. please respond by X date
  • Update policy based on feedback.
  • A formal round of feedback from the community.
  • We'll update the interface to provide a notice on pages where JS can be added that links to the policy:
<div class="mw-message-box mw-message-box-notice">All code written here is expected to <a href="#">adhere to the gadget policy</a>.</div>


Hey @Jdlrobson, thanks for sharing those ideas. I like the plan you laid out, in particular the suggested update to the interface of .js pages, as it seems to be a good way to educate gadget developers about best practices, and warn them eventually.

This is T311891 - essentially linking to whatever "policy/guidelines" we have.

It is my understanding that the kind of support you'd want from the Security-Team will mostly include:

  • Looking into the security aspects of the Gadget policy draft (eg: should gadgets allow code from untrusted domains? btw, do you have an example?)

Right now we have scripts loading from trusted sources such as redwarn.toolforge.org (https://en.wikipedia.org/wiki/User:RedWarn/.js) and untrusted (https://ce.wikipedia.org/wiki/MediaWiki:Gadget-addThisArticles.js loads code from https://s7.addthis.com for example which also leads to us logging their client side errors).
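For illustration, the distinction being drawn is between resources that resolve to a first-party origin and URLs that point anywhere else. A minimal sketch of how such a check could work (the helper name and allow-list are assumptions for illustration, not an existing MediaWiki API):

```javascript
// Hypothetical helper: flag resource URLs that resolve outside a set of
// trusted first-party origins. Uses the standard WHATWG URL API.
function isThirdPartyOrigin(url, allowedOrigins) {
  // Relative URLs resolve against the wiki's own origin, so they count as first-party.
  const origin = new URL(url, 'https://en.wikipedia.org').origin;
  return !allowedOrigins.includes(origin);
}

const allowed = ['https://en.wikipedia.org', 'https://redwarn.toolforge.org'];
console.log(isThirdPartyOrigin('https://s7.addthis.com/js/addthis_widget.js', allowed)); // true
console.log(isThirdPartyOrigin('/w/load.php?modules=ext.gadget.Twinkle', allowed));      // false
```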

  • Provide ideas and guidelines for security best practices for gadgets
  • Share ideas about how security issues related to gadget could be reported in a safe way

Is my understanding correct?

I believe they intersect in that T262493 is trying to establish guidelines/best practices for gadgets to follow. Part of that should be to set expectations around security, so I'd imagine at some point the policy I'm working towards in T262493 would point to whatever policy is created here.

Generally speaking, reviewing gadgets' security does not fall within the Security-Team's purview, but I'd be glad to speak with my team to explore if and how we may provide some support with respect to the areas I listed above. Meanwhile, let me know if I missed any points.

My understanding was the third-party resources policy exists precisely because gadgets / site scripts can load code from any domain. We don't load 3rd party resources in any of Wikimedia's deployed code. Am I misunderstanding something? I'm not sure why gadget security would not fall into security team's purview, at least on the high level of where code can be loaded from..?

… I'm not sure why gadget security would not fall into security team's purview, at least on the high level of where code can be loaded from..?

Speaking as an on-wiki contributor that would love to be able to ask for security review of my own code, I think that's the crucial distinction: the security team cannot possibly handle actual code-level engagement with all the myriad community-developed code out there, nor even higher level for every individual Gadget. The Gadget functionality (and related plumbing) in MW and the security policy would presumably need to be a clear demarcation for the responsibility / scope.

Personally i am a bit doubtful of the policy approach - we already have implicit norms around this, and most of the failures that i am aware of are accidental not intentional (but i could be mistaken, jdlrobson adds some good counterexamples). I fear a policy would mostly be akin to telling people "don't screw up", which i don't really believe is all that helpful.

… I'm not sure why gadget security would not fall into security team's purview, at least on the high level of where code can be loaded from..?

Speaking as an on-wiki contributor that would love to be able to ask for security review of my own code, I think that's the crucial distinction: the security team cannot possibly handle actual code-level engagement with all the myriad community-developed code out there, nor even higher level for every individual Gadget. The Gadget functionality (and related plumbing) in MW and the security policy would presumably need to be a clear demarcation for the responsibility / scope.

So yeah, it does seem infeasible for the security team to code review every gadget (or pay pentesters to do so, although bug bounties could be cheaper but probably still pretty expensive [bug bounties have a lot of hidden non-obvious costs]). Although a certain amount of distinguishing should be made between privacy and security, as the former is a bit easier to do basic tests for.

I think the best thing we could do is some sort of sandboxing (everything krinkle said, 1000%. CSP isn't the only option here but it seems like the easiest and best). Sandboxing is the ultimate shift left.

But the second best thing we could do, which i think is under-discussed, is educational - there is no reason why we should have to wait on the foundation to do security reviews themselves. We should train prominent community members who write gadgets about security (who in turn would hopefully train others in their communities). Or to translate into corporate speak - we should create a culture of security champions embedded in the gadget community to scale our ability to provide security for the website.

Just my 2 cents.

One thing we've done on enwiki is attempt to declare some information on all non-default gadgets that helps anyone opting in decide if they want to accept certain risks associated with the gadget (see header of https://en.wikipedia.org/wiki/Special:Preferences#mw-prefsection-gadgets). This sort of labeling is something that I could see becoming part of a policy.

On the other hand there were some prior suggestions about requiring interstitials when using gadgets that go to third parties - that is something users really don't want and will just lead them to fork the gadget, bypass that, and end up with less security as their forked versions become unmaintained.

Hey everyone, I agree that having the Security-Team review every single Gadget and User script would not be scalable or even realistic.

But the second best thing we could do, which i think is under-discussed is educational - there is no reason why we should have to wait on the foundation to do security reviews themselves.

Having educational initiatives led by the community in privacy and security would definitely be a good thing for the Wikimedia ecosystem as a whole. Also, the idea of educating users is something the Third-Party Resources (TPR) policy supports, as it aims at providing gadget and user script developers with best practices. Although its scope is much narrower, there would be room within the TPR policy for referencing or including more specific guidelines, as mentioned earlier by @Jdlrobson.

Personally i am a bit doubtful of the policy approach - we already have implicit norms around this, and most of the failures that i am aware of are accidental not intentional (but i could be mistaken, jdlrobson adds some good counterexamples) I fear a policy would mostly be akin to telling people "don't screw up", which i don't really believe is all that helpful.

@Bawolff sure, the Foundation’s current terms of use forbid violating the privacy of others, which implicitly covers gadgets and user scripts calling third-party resources. The added value of the TPR policy would be to explain to people why they should not “screw up”, the risks in terms of security and privacy, as well as best practices.

Additionally, previous comments from @Krinkle and others have surfaced the need to enforce that policy using technical measures such as CSP, which the current draft has taken into account. Another key goal of the policy is to formalize the technical enforcement, and determine if and what exemptions should accompany that enforcement. As far as I understand, there were previous efforts to determine what form of CSP exemption “would be acceptable or desirable from the user's perspective” (T208188#6030030). The policy conversation could be an avenue to get community’s opinion on the exemption question as well, be it CSP-partial opt-out or interstitials, as noted by @Xaosflux.

As mentioned earlier, we're trying to tread cautiously and are still collecting feedback from a number of trusted contributors before releasing the policy draft for public discussion. If some of you would like to take a look at the current TPR policy draft and share your two cents, kindly let me know.

@sguebo_WMF given that the TPR policy is being proposed to be incorporated by reference into the terms of use, i think there is a desire for there to at least be a public draft, if not the final policy, prior to the comment period for the terms of use amendments closing.

@sguebo_WMF given that the TPR policy is being proposed to be incorporated by reference into the terms of use, i think there is a desire for there to at least be a public draft, if not the final policy, prior to the comment period for the terms of use amendments closing.

Hello @Bawolff, I understand your point but I am afraid this is not the intent at the moment. With respect to the Terms of Use update, I'll echo the comment from WMF-Legal on meta-wiki and note that the reference to the TPR policy in the ToU is a placeholder. For now, there is no plan to have the TPR draft released publicly prior to the ToU discussion.

There is currently no consensus on exemptions.

Eh.. the long-standing consensus has been that there are no exemptions. Is there an intention to change this?

Eh.. the long-standing consensus has been that there are no exemptions. Is there an intention to change this?

There's some ambiguity about under what circumstances users can opt in to stuff, if they have given informed(-ish) consent. For example, the map layers on wikivoyage, or [not default enabled] gadgets that embed google translate. I assume that that sort of thing is what is being referred to.

I think what we need is a hard technical limitation ("CSP") that prevents without exemption the loading of executable CSS and JavaScript from third-party origins.

I don't think that should be tied together with the policy discussion at all. It's obviously necessary to prevent intentionally malicious gadgets (which a policy wouldn't), attacks utilizing external executable code have a much much higher impact than attacks utilizing other third-party resources, and there isn't any current use case that couldn't be easily substituted; and it's a fairly simple technical change, unlike the envisioned opt-in tracking system. It should just be done.

Eh.. the long-standing consensus has been that there are no exemptions. Is there an intention to change this?

There's some ambiguity about what circumstances users can opt-in to stuff, if they have given informed(-ish) consent. For example, the map layers on wikivoyage, or [not default enabled] gadgets that embed google translate. I assume that that sort of thing is what is being referred to.

That has been my understanding as well. There would be a special page where you go and explicitly enter the domain you want to exempt for your account, and part of installing such gadgets would be to go there and set it. (Possibly we can make it a tightened-security mode = you need to enter your password again.)
Obviously there won't be any way to make an exception the default for everyone.

This is not that hard to implement tbh, I'm willing to spend some volunteer time to test, review and even make patches and be done with this really important security and privacy enhancement.
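For illustration only (the function name and storage model are assumptions, not existing MediaWiki code), the per-account exemption list would effectively feed into the connect-src directive served to that user:

```javascript
// Hypothetical sketch: fold a user's consented domains into the
// Content-Security-Policy connect-src directive emitted for that user.
function buildConnectSrc(consentedDomains) {
  // 'self' is always allowed; consented third-party origins are appended.
  return "connect-src " + ["'self'", ...consentedDomains].join(' ');
}

const directive = buildConnectSrc(['https://tool.toolforge.org']);
console.log(directive); // connect-src 'self' https://tool.toolforge.org
```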

There would be a special page that you go and explicitly enter the domain you want to exempt for your account and part of installing such gadgets would be to go there and set it.

For this to be viable in practice, this needs to be "GUI-able" for gadgets, i.e. they can prompt you for the opt-in and the user clicks a check box ("Whitelist toolforge.org?"). Which probably means this facility needs to be provided by MW; think "the OAuth confirmation dialog". Anything that requires manual editing of a MediaWiki:Whitelist-esque whitelist is a no-go for anything except a personal user script (possibly shared with a few other techies, but not for the general user base).

Incidentally, there is a similar need to address cross-project loading of executable code. There are large projects currently cross-loading (obfuscated) javascript from projects whose user base is mostly located in countries whose governments are authoritarian and known for cyber attacks and attacks on dissidents. That's all within the WMF family, but for actual users that's a much higher real-life risk than sending your User-Agent header to Google's web fonts server by way of a Toolforge-hosted proxy run by an already NDAed volunteer (cf. T166138).

There would be a special page that you go and explicitly enter the domain you want to exempt for your account and part of installing such gadgets would be to go there and set it.

For this to be viable in practice, this needs to be "GUI-able" for gadgets, i.e. they can prompt you for the opt-in and the user clicks a check box ("Whitelist toolforge.org?"). Which probably means this facility needs to be provided by MW; think "the OAuth confirmation dialog". Anything that requires manual editing of a MediaWiki:Whitelist-esque whitelist is a no-go for anything except a personal user script (possibly shared with a few other techies, but not for the general user base).

MediaWiki:Spam-Blacklist and such should not have existed like that in the first place, and I hope to fix that soon. That's not what I'm proposing. What I'm proposing is more of a Special:Tags kind of special page (this is what you see if you're an admin):

[Screenshot: grafik.png (298×852 px, 19 KB)]

Anything that is frontend-only, especially an "I approve" kind of dialog, can decrease security drastically. It's a trade-off, and I recommend skewing towards security rather than convenience.

It depends on how we want to treat e.g. meta. My understanding has been that meta and other Wikimedia projects (maybe at least the large ones) shouldn't be affected by CSP, since if meta gets compromised, we would have a bigger problem than some user's js getting compromised on a random Wikipedia. But I might be quite wrong.

Incidentally, there is a similar need to address cross-project loading of executable code. There are large projects currently cross-loading (obfuscated) javascript from projects whose user base is mostly located in countries whose governments are authoritarian and known for cyber attacks and attacks on dissidents. That's all within the WMF family, but for actual users that's a much higher real-life risk than sending your User-Agent header to Google's web fonts server by way of a Toolforge-hosted proxy run by an already NDAed volunteer (cf. T166138).

Can you elaborate more on this? Especially examples of risky js code. In private, not publicly.

The plan is always to reduce the attack surface; you can never make it zero, and that problem might require a different solution altogether (partially achieved by separating admin rights from interface-admin rights a couple of years ago).

Anything that is frontend-only, especially an "I approve" kind of dialog, can decrease security drastically.

Provided I understand your reasoning correctly, I disagree. A properly designed visual interaction is going to be vastly more secure for verifying informed consent than any approach that involves users cut-and-pasting text strings they do not understand into some magical special page they have never heard of. In a properly managed visual interaction you can require the display of a link to a relevant privacy policy, links to a user-understandable description of the functionality and risks, and a clear user action to opt in. Cut-and-pasting text strings just creates more attack surface for social-engineering attacks.

You also need some way to manage already given permissions and revoke them, for which something Special:Tags-esque would be good (but even better would be in #mw-prefsection-gadgets, because that's where normal users have any chance of ever seeing it).

Can you elaborate more on this? Especially examples of risky js code.

The code is risky only in that it has been minified (so I can't easily see what it does) and the WMF project it is hosted on is 1) at high risk of subversion attempts by APTs and nation-level threat actors, and 2) not the project on which the code is used, so normal community-based monitoring (watchlists etc.) would not catch (possibly malicious) modifications. It's like all the projects that have a local gadget cross-loading HotCat from Commons, except minified and more geopolitically fraught.

Hey there -- just a heads-up that I have started compiling some data on gadgets and user scripts loading third-party resources across Wikimedia projects in T335892. This may help get a sense of the impact of the policy. The initial data is probably off/incomplete, so any ideas for getting more accurate data are warmly welcome :)

Incidentally, there is a similar need to address cross-project loading of executable code.

As long as projects are CORS-allowing each other, I don't think cross-loading makes much difference in terms of attack surface.

A properly designed visual interaction is going to be vastly more secure for verifying informed consent than any approach that involves users cut-and-pasting text strings they do not understand into some magical special page they have never heard of. In a properly managed visual interaction you can require the display of a link to a relevant privacy policy, links to a user-understandable description of the functionality and risks, and a clear user action to opt in. Cut-and-pasting text strings just creates more attack surface for social-engineering attacks.

+1. Users will copy-paste strings just as easily as they click buttons, and with the dialog approach you can present contextual information that's actually helpful. Also, showing a privacy-policy link or some kind of boilerplate that the user accepts might well become a legal requirement: using external resources means the user willingly steps outside the boundaries of the default privacy policy.
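The consent-dialog flow argued for in this thread can be sketched as follows. All names here are hypothetical; the point of the sketch is that the gadget (not the user) supplies the exact domain string, the dialog is required to carry a privacy-policy link and a risk description, and every grant is recorded so it can later be reviewed and revoked (e.g. from a Special:Tags-like page or the gadgets preferences section).

```python
# Hypothetical model of an OAuth-dialog-style consent flow for
# third-party resource exemptions. Invented names, illustration only.

from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    domain: str
    privacy_policy_url: str  # shown in the dialog; required, not optional
    description: str         # user-understandable summary of what/why

@dataclass
class ConsentManager:
    grants: dict = field(default_factory=dict)  # user -> {domain: record}

    def grant(self, user, record, user_clicked_accept):
        # Consent requires an explicit user action in the dialog;
        # nothing is whitelisted silently.
        if not user_clicked_accept:
            return False
        self.grants.setdefault(user, {})[record.domain] = record
        return True

    def revoke(self, user, domain):
        # Grants must be reviewable and revocable after the fact.
        self.grants.get(user, {}).pop(domain, None)
```

Because the domain comes from a structured `ConsentRecord` rather than a user-typed string, there is nothing for a social-engineering attack to trick the user into pasting.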

sguebo_WMF changed the task status from Open to In Progress. May 17 2023, 5:37 PM

Hello — just a heads up that the policy draft will be released publicly for discussion next week, on June 5th, as part of the official consultation. When the policy discussion opens, there will be an announcement through the usual channels: wikimedia-l, IRC, etc. You can find more details about the upcoming steps and dates of the consultation in the subtask T337863.

The policy draft is now publicly available for feedback on meta-wiki. Hope to hear your thoughts there!