Security recommendations for new services
Closed, Resolved · Public

Description

Most of our MediaWiki documentation is specific to PHP and to core/extension development. We have few explicit recommendations for new services.

We should have some guidance for teams on setting up:

  • appropriate segmentation
  • CSP and other security headers
  • appropriate use of CORS (see the header sketch below)
  • when to use OAuth
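
For the CSP and CORS items above, here is a minimal sketch of the kind of default response headers a new service could send, assuming a standalone Node.js-style service; the header values and the allowed origin are illustrative assumptions, not a vetted policy.

```
import * as http from "http";

// Assumed allowlist of origins permitted to call this service cross-origin
// (illustrative hostname, not a real deployment value).
const ALLOWED_ORIGINS = new Set(["https://wiki.example.org"]);

const server = http.createServer((req, res) => {
  // Restrictive CSP: this service returns data, not a browsing UI, so deny
  // scripts, plugins, and framing by default.
  res.setHeader(
    "Content-Security-Policy",
    "default-src 'none'; frame-ancestors 'none'"
  );
  res.setHeader("X-Content-Type-Options", "nosniff");
  res.setHeader("X-Frame-Options", "DENY");

  // CORS: only echo the Origin header back if it is on the allowlist.
  const origin = req.headers.origin;
  if (typeof origin === "string" && ALLOWED_ORIGINS.has(origin)) {
    res.setHeader("Access-Control-Allow-Origin", origin);
    res.setHeader("Vary", "Origin");
  }

  res.setHeader("Content-Type", "application/json");
  res.end(JSON.stringify({ ok: true }));
});

server.listen(8080);
```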

Some of this will change based on the outcome of the SOA Auth RFC, but we should start with some basic guidance now.

Event Timeline

csteipp claimed this task.
csteipp raised the priority of this task to Needs Triage.
csteipp updated the task description. (Show Details)
csteipp subscribed.
Qgil triaged this task as Medium priority. Jan 12 2015, 7:58 PM

Please confirm whether you want to run this session at MediaWiki-Developer-Summit-2015 by placing this task in the most appropriate column at the workboard and scheduling it at https://www.mediawiki.org/wiki/MediaWiki_Developer_Summit_2015#Schedule

One of the biggest questions I would like input on from Ops (@faidon?) and the services team (@GWicke?) is what we can expect / recommend as far as segmentation goes.

When we talk about "services", I think most people think of Parsoid and similar services: running on a set of servers separate from the "normal" cluster, maybe behind its own Varnish/cache layer, and without access to the MediaWiki code base or the main databases. However, such a service does share a domain name with an existing MediaWiki instance, or at least a top-level domain.

Can we assume new services will have their web servers run on servers other than the main appservers? Can we by default disallow network access from these servers to the rest of the cluster? If not, can we require that they run with different, non-privileged users? Or can we contain them with some combination of containers and mandatory access controls?

Can we by default assume that new services will not have access to the private wikimedia repo (db passwords, keys, etc)?

If a new service is going to open up a public entrypoint, should we have new services run on a different domain (or subdomain) than the wiki?

Can we assume new services will have their web servers run on servers other than the main appservers?

For serious installs (WMF or other big users), yes (HW or containers). For hobby users, it'll probably all run as separate OS users in the same VM. For example, parsoid runs as user 'parsoid' by default.
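
As a hedged illustration of that separate-OS-user convention (a sketch only; in production a service manager such as systemd or a container runtime would normally enforce this), a Node-style service could refuse to keep running as root and drop to a dedicated account. The account name below is illustrative, analogous to the 'parsoid' user.

```
// Sketch: drop to a dedicated, unprivileged account at startup.
// Assumes a POSIX system where that account already exists.
function dropPrivileges(serviceUser: string): void {
  const { getuid, setgid, setuid } = process;
  // These calls only exist on POSIX platforms.
  if (!getuid || !setgid || !setuid) {
    return;
  }
  if (getuid() === 0) {
    // Drop the group first, then the user, so root cannot be regained.
    setgid(serviceUser);
    setuid(serviceUser);
  }
  if (getuid() === 0) {
    throw new Error("Refusing to run as root");
  }
}

// Call early, after binding any privileged ports.
dropPrivileges("myservice"); // illustrative account name
```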

Can we by default disallow network access from these servers to the rest of the cluster?

We can definitely segment networks to improve security, but should not rely on that as the only defense.

Can we by default assume that new services will not have access to the private wikimedia repo (db passwords, keys, etc)?

We should definitely make sure that each service only has access to its own secrets.
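
One way to make "each service only has access to its own secrets" concrete is to load them from a per-service file with tight permissions rather than a shared private repo. This is only a sketch; the path and permission policy below are assumptions, not the actual private-repo layout.

```
import * as fs from "fs";

// Hypothetical per-service secrets file, readable only by the service user.
const SECRETS_PATH = "/etc/myservice/secrets.json";

function loadSecrets(path: string): Record<string, string> {
  const stat = fs.statSync(path);
  // Fail closed if the file is group- or world-accessible (POSIX mode bits).
  if ((stat.mode & 0o077) !== 0) {
    throw new Error(`${path} must be readable by the service user only`);
  }
  return JSON.parse(fs.readFileSync(path, "utf8"));
}

const secrets = loadSecrets(SECRETS_PATH);
```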

If a new service is going to open up a public entrypoint, should we have new services run on a different domain (or subdomain) than the wiki?

This is mostly a client-side issue. With content that is going to be used in authenticated views (e.g. VisualEditor), we need to ensure proper sanitization in any case. RESTBase can help with this for HTML and potentially other content types. I doubt that we want to compromise on this, but if we did, then we should certainly do so on a different domain.

Yes, it's a client-side issue, but that's also by far the largest category of issues we're dealing with. Segmenting some or all services onto another domain means an XSS there will have a very low impact on the rest of our sites, and the interactions can be controlled by policies like CORS and CSP.

Would ops have a strong objection to supporting a separate top-level domain for services that serve user-supplied content directly to our end users? I'm thinking of services like the Language team's cxserver, which allowed reflected XSS and had no reason to be hosted on the same domain as any of our wikis. A separate domain would have allowed us to deploy it and fix the low-severity XSS later, instead of having to delay rolling it out to production.
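
Whatever domain such a service ends up on, the reflected-XSS class mentioned here is normally closed by escaping anything user-supplied before it goes into HTML output. A minimal sketch, assuming a service that builds HTML responses by hand; a real service would more likely rely on a vetted templating or sanitization library.

```
// Escape untrusted strings before interpolating them into HTML.
function escapeHtml(untrusted: string): string {
  return untrusted
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

// Example: reflecting a query parameter back to the user safely.
const resultsPage = (query: string): string =>
  `<!DOCTYPE html><title>${escapeHtml(query)}</title>` +
  `<p>Results for ${escapeHtml(query)}</p>`;
```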

Please update the description with the achievements of this session. Thank you in advance.

I did an initial condensing of my points, and of the feedback from the session, at https://www.mediawiki.org/wiki/User:CSteipp_(WMF)/ServiceSecurity