
Introduce a Front-end Build Step for MediaWiki Skins and Extensions
Open, Low, Public

Description

[[ This document is still a work in progress; expect changes ]]

What is the problem or opportunity?

Enable MediaWiki developers to automatically compile front-end assets (scripts, styles, templates, etc.) as part of the deployment or CI process for skins and extensions.

Requirements
  • Developers can run build scripts automatically in the CI process, without having to commit derivative assets to version control. Some extensions are already using a build step, but they must commit the output to the repository in order to deploy it. This clutters up code review and can produce conflicts in VCS. Running the build script automatically in CI would avoid the need to commit compiled assets and the problems that come with it.
  • Teams have some flexibility to define their own build scripts on a per-project basis. Not every team is going to use the same tools: some teams may wish to use tools like TypeScript; others won't. While there may need to be limits on what kinds of scripts are allowed (see the next requirement), the infrastructure should allow a good amount of flexibility. Letting teams define a build script in a location like package.json seems like a good idea (see the sketch after this list).
  • The process must not introduce security vulnerabilities: Flexibility is important, but so is security. The current trend in the wider front-end development community is to rely on an elaborate tool-chain, using software like Webpack along with numerous plugins to generate a large bundle of compiled, obfuscated code that is then shipped to browsers. If un-audited packages with large numbers of dependencies participate in the build process, it becomes hard to guarantee the security or stability of the final output. Teams should have some flexibility in terms of tooling choices, but there should also be limits on what packages can be used. Developers could maintain a list of approved packages or even a private NPM registry to help mitigate security risks.
  • The build process should be deterministic. The same input should produce the same output at all times.
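As a purely illustrative sketch of that idea, a skin or extension might declare its build like this in package.json (the script name and the choice of TypeScript are assumptions, not a proposed standard):

```
{
  "name": "example-extension",
  "private": true,
  "scripts": {
    "build": "tsc --project tsconfig.json"
  },
  "devDependencies": {
    "typescript": "4.2.4"
  }
}
```

CI could then invoke each team-defined script through a uniform entry point such as npm run build, regardless of which approved tools a given project uses; pinning exact dependency versions also supports the determinism requirement above.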

Use cases

What does the future look like if this is achieved?

Achieving this means that developers can rely on an automated build step in production.

The benefits include flexibility and improved developer experience (allowing developers to write code in TypeScript or more modern versions of JS) as well as performance optimizations (tree-shaking of external dependencies, Vue template pre-compilation) that send more efficient payloads to the user’s browser. A given extension or skin could specify a build script in a place like its package.json file, without the need for developers to commit compiled code into version control.

What happens if we do nothing?

Doing nothing means that we give up some opportunities to improve site performance for users and continue shipping unnecessary code in many places. We would also continue to limit developers in that the code they write must be the exact code that runs in all browsers. This means no ability to rely on tools like TypeScript (very useful for reducing bugs in software) or to automate the process of supporting legacy browsers. Developers in the wider web community are increasingly used to having these tools, so this would be one more way in which we separate ourselves from wider trends and best practices.

Any additional background or context to provide?

This work should be seen as the next stage in the larger Vue.js migration / front-end modernization project. The original Vue.js RFC mentioned this task as one potential follow-up. The original Build Step RFC from 2018 is also still valid and provides good background on why this is important to have.

Why are you bringing this decision to the technical forum?

This problem cuts across the concerns of many different teams (product teams, security, release engineering, etc); similarly, any solutions we adopt will impact many teams downstream – enabling or limiting some of their options in terms of development tooling. It is essential to have input from all impacted groups to come up with an adequate solution here.

One example that is worth noting in particular: the Release Engineering team is preparing to roll out some new container-based infrastructure to support the deployment process. It is possible that a front-end build step could "piggyback" on this new infrastructure, providing improved developer experience across all teams without requiring a huge amount of additional work.

Additional resources

Related Objects

Event Timeline

bd808 updated the task description.

I would suggest that:

> The build process ~~should~~ must be deterministic. The same input must produce the same output at all times.

Otherwise we get a bunch of dirty diffs.

Otherwise we get a bunch of dirty diffs.

Unless I'm misunderstanding the goal or your objection, we wouldn't be getting dirty diffs because we'd not be committing the output of the build process.

I agree that the builds should be as deterministic as possible, of course, just not for that specific reason. 😁

If un-audited packages with large numbers of dependencies participate in the build process, it becomes hard to guarantee the security or stability of the final output.

It seems like the proposed solution would eliminate our current ability to do this? As stated above:

A given extension or skin could specify a build script in a place like its package.json file, without the need for developers to commit compiled code into version control.

I will concede that our application security automation within the context of CI is... lacking, to the point of being almost non-existent. Improving that (which is a part of the Security-Team's current roadmap) could help reduce the risk (and pain) of the current, manual process, but not if pushing production artifacts through gerrit or similar code review is forgone. Would there be an alternative proposal for verifying the integrity and security of builds that would be as visible as the current process and wouldn't rely solely upon trusting a single engineer?

Teams should have some flexibility in terms of tooling choices, but there should also be limits on what packages can be used. Developers could maintain a list of approved packages or even a private NPM registry to help mitigate security risks.

This is a good suggestion, though I'd personally advocate for a narrower, paved-road approach, where a single option is viewed as a trusted standard (which seems to be happening somewhat organically in T272879 and T276366) and other options are rated as higher risk, to be accepted at various levels (director, VP, C-level).

It seems like the proposed solution would eliminate our current ability to do this?

I think that finding a way to balance flexibility and security here is one of the biggest problems we need to solve as part of this process.

Teams should have some flexibility in terms of tooling choices, but there should also be limits on what packages can be used. Developers could maintain a list of approved packages or even a private NPM registry to help mitigate security risks.

This is a good suggestion, though I'd personally advocate for a narrower, paved-road approach, where a single option is viewed as a trusted standard (which seems to be happening somewhat organically in T272879 and T276366) and other options are rated as higher risk, to be accepted at various levels (director, VP, C-level).

This is exactly where I see things heading in terms of a potential solution. Perhaps we'd provide a very limited list of pre-approved packages (Rollup, TypeScript, some plugins, etc.) that still provide a lot of capabilities to developers; if teams needed anything beyond that, they would need to go through the standard security review process and get approval for higher-risk options.

I'm not convinced by the "don't commit the result" part. The "compiled" code is still needed for tarballs and even by the developers themselves. And they must be using the same version as in prod, or they could be testing slightly different code, which would be hard to discover.

It seems that the goals could be met as well if:

  • gerrit (in the future gitlab) hides those generated files so they are less prominently shown than 'normal' files (cosmetic, but nice to have)
  • using a merge strategy which ignores such files (interestingly, this may need the merge process to regenerate them, which could be non-trivial)
  • Jenkins validates that the committed compiled code matches the source files (thus addressing build mistakes or environment deviations; see the sketch below)

It seems this would solve the main issue (the merge conflicts) while not changing the general way everything functions. I admit it's a bit ugly to commit generated code into the repository, but it also provides a consistent view of what was provided by the developer. Compare with e.g. an automatic bot commit which nobody will check: if someone compromised Jenkins, it would go unnoticed, whereas having the developer (through a pre-commit hook, probably) include the generated code would highlight that difference (you would need to compromise Jenkins and every developer install). This is a steeper path for casual developers, as they would likely need the full build environment even for trivial fixes, but requiring that seems the right approach.
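If generated code were committed, the Jenkins validation suggested above could be a simple rebuild-and-compare job. A minimal sketch, assuming an npm-based project with a "build" script and a committed dist/ output directory (both hypothetical names), and a deterministic build:

```
# Install the exact dependency versions recorded in package-lock.json,
# rebuild the assets, then fail the job if the committed output differs.
npm ci
npm run build
git diff --exit-code -- dist/
```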

Otherwise we get a bunch of dirty diffs.

Unless I'm misunderstanding the goal or your objection, we wouldn't be getting dirty diffs because we'd not be committing the output of the build process.

Sorry to be cryptic with my comment; I wanted to avoid getting into any potential implementation details, because I don't know of any "nice" way to store the binaries so that they change atomically along with the source code but are not a constant annoyance and a cause of merge conflicts. But I believe that whatever we do with the binaries, it's critical that the compilation is deterministic. This is the only way I'm aware of to guarantee integrity. However, I was surprised to see on the reproducible builds article that only 90% of Debian packages satisfy this criterion, so maybe there's an alternative way to guarantee that the compiled target matches the source?

I would suggest adding something along the lines of "independence" to the requirements. If we need to make an urgent change and deploy it, we should not be at the mercy of the uptime of npmjs.com or other external websites.

This is one of the benefits of keeping modifiable (non-minified/obfuscated) sources in Git - we can just patch them ourselves.

I'd also recommend reading through https://www.mediawiki.org/wiki/Requests_for_comment/Composer_managed_libraries_for_use_on_WMF_cluster and its related discussions; I think the problems we looked into and addressed at the time are still relevant.

+1 to that. We already have our own apt repository, our own Docker registry, and more; we can certainly have our own npm registry as well (which I think is possible, correct me if I'm wrong). Off-topic: I'd like to do the same for PyPI so we stop shipping wheel binaries to production as well. But it would open up the whole discussion around processes and how to vet and push packages to our registry, and I assume that should be determined beforehand.

I would suggest adding something along the lines of "independence" to the requirements. If we need to make an urgent change and deploy it, we should not be at the mercy of the uptime of npmjs.com or other external websites.

This is one of the benefits of keeping modifiable (non-minified/obfuscated) sources in Git - we can just patch them ourselves.

I'd also recommend reading through https://www.mediawiki.org/wiki/Requests_for_comment/Composer_managed_libraries_for_use_on_WMF_cluster and its related discussions; I think the problems we looked into and addressed at the time are still relevant.

These are all good points to keep in mind, agreed. The way Composer dependencies have been handled previously should definitely inform our approach to a front-end build step.

Maintaining a WMF npm registry is certainly one option to consider here. Tools like Verdaccio might make it easier to self-host such a registry.

Another approach to consider would be using NPM's bundledDependencies feature to ensure that copies of important dependencies are included in MediaWiki tar files.
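For illustration, that is a one-line declaration in package.json; a minimal sketch (the vue entry is only an example dependency):

```
{
  "dependencies": {
    "vue": "3.2.0"
  },
  "bundledDependencies": [ "vue" ]
}
```

Packages listed under bundledDependencies are copied into the tarball produced by npm pack, so installs from a release tarball would not need to reach npmjs.com for them.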

I would object to using any build step that depends on downloading assets from the internet at build time. We do that at the moment for many projects and we're aware it's completely wrong and needs fixing.

So I would really avoid expanding that practice.

If we want to have a merge-time build step, security and consistency are requirements, and the only way to ensure those is if we provide an artifact repository that we trust (possibly not just for npm, but also for php/composer, python/pypi, etc.). I think that if we go down this route, setting one up is a requirement. To be clear: I don't think we should just use a simple caching proxy (like Verdaccio), but rather a full artifact management system, where we explicitly upload the dependencies that we use when building.

Also: we should probably be smart about how/when we do such a build in CI, so that we can change the mediawiki configuration without having to go through the build step.

Leaving a drive-by comment for a resource that bd808 shared in an internal chat channel:

https://github.com/cncf/tag-security/tree/master/supply-chain-security

See also T199004 for basically the same request 3 years ago. Feel free to merge that task into this with any related discussion points.

Adding Design-System-Team because Design-Systems-team-20200324-20220422 got archived and this open task has no other active project tags associated, so it cannot be found on boards.

I took a look at what it would take to switch Popups from Webpack to the existing packageFiles + ResourceLoader + ES6 setup. I'm sharing it because it's an example of a build step that downloads nothing from the internet (all the code it uses is checked into the repo) and would be useful.

The build step here scans JS files for require statements and uses them to construct complex packageFiles definitions inside extension.json. It uses a local version of the acorn npm library (and is likely simple enough that it could be rewritten to not use that):
https://gerrit.wikimedia.org/r/c/mediawiki/extensions/Popups/+/812973
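For context, the core of such a scan can be quite small. A minimal sketch of the general idea, not the actual Popups implementation (it also assumes the acorn-walk helper package alongside acorn):

```
import { readFileSync } from 'fs';
import { parse } from 'acorn';
import { simple } from 'acorn-walk';

// Collect the module IDs passed to require() calls in one JS file,
// e.g. as input for generating a packageFiles definition in extension.json.
function findRequires( filePath: string ): string[] {
	const found: string[] = [];
	const ast = parse( readFileSync( filePath, 'utf8' ), { ecmaVersion: 2020 } );
	simple( ast, {
		CallExpression( node: any ) {
			const callee = node.callee;
			const args = node.arguments;
			if (
				callee.type === 'Identifier' && callee.name === 'require' &&
				args.length === 1 && args[ 0 ].type === 'Literal'
			) {
				found.push( String( args[ 0 ].value ) );
			}
		}
	} );
	return found;
}
```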

@egardner @LNguyen @kchapman could I ask for a status update on this proposal, please? What is the next step in the TDF process? I think there are a lot of people who would like to see some form of a front-end build step implemented, but we need to build consensus on how that should be done, and what should/shouldn't be possible in this build step.

kostajh updated the task description.

Some implementation suggestions that I think might help narrow the discussion and get this to a more actionable place:

  • A maintenance script in MediaWiki core is responsible for the compilation step. That means that any third-party JS code needed for a compilation step is committed to core (see also T328699: Consider including a JS runtime as part of MediaWiki).
  • For now, the compilation step only concerns itself with compiling TypeScript
  • Extensions/skins wanting to make use of this compilation step would follow certain filename/directory conventions to opt-in to this step

That would also mean:

  • no npm install for core/extensions/skins as part of deployment
  • no arbitrary compilation steps for core/extension/skin code

As part of deployment (beta cluster, CI, and production), Quibble (CI) and scap (beta and production?) would be responsible for running e.g. php maintenance/run.php compileAssets
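Under those assumptions, the deploy-time hook could be as small as the following sketch (the compileAssets script does not exist yet, and the directory convention in the comments is a made-up example):

```
# Hypothetical opt-in convention (illustrative only):
#   extensions/Example/resources/src/   TypeScript sources, committed to Git
#   extensions/Example/resources/dist/  compiled output, generated at deploy time
# Quibble (CI) and scap (beta/production) would then run:
php maintenance/run.php compileAssets
```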

@egardner @LNguyen @kchapman could I ask for a status update on this proposal, please? What is the next step in the TDF process? I think there are a lot of people who would like to see some form of a front-end build step implemented, but we need to build consensus on how that should be done, and what should/shouldn't be possible in this build step.

Hey @kostajh, the Design Systems Team is currently regrouping on this and discussing a new front-end modernization initiative that would include some kind of front-end build step and a solution for server-rendering Vue (and Codex) components, which we think have enough overlap to be considered two phases of a single initiative. We're in the very early stages, but we plan to get others involved as soon as we have a basic proposal, so that we can gather feedback and iterate on it together. I agree with everything you've said here: we need to build consensus and ensure that any proposed plan would actually solve the problems we all currently face.

We'll share more publicly ASAP. In the meantime, thanks for your comments and for nudging us on this!