Tests are a crucial part of the development flow: they verify that the code works as intended and guard against behavior regressions. With the current stack, tests take a long time to run (because of the strain on ResourceLoader and the overhead of running through MediaWiki). Because of this cost and the slow feedback cycle, we haven't kept up with properly testing the code, which hurts feature stability and resilience, and our confidence when changing the code.
Leveraging the Automatic bundling goal, we will follow the Extension:Popups approach and migrate as many unit tests as we easily can to run in the CLI (Node.js based). Tests that make extensive use of MediaWiki or are too tied to ResourceLoader will be considered integration tests and kept in mediawiki-qunit.
Untested files that are simple enough to test will gain new tests as part of the Automatic bundling migration.
Progress can be measured by the number of files migrated to the src/ folder, as in the previous sub-project (automatic bundling), and by the number of test files moved from tests/qunit to tests/node-qunit (or still remaining behind).
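A rough way to track that metric from the command line (paths are taken from the text above; the exact invocation is a sketch, not an established script):

```shell
# Count JS files already migrated to src/ and tests moved to tests/node-qunit.
# Re-running this over time gives a simple progress number.
find src tests/node-qunit -name '*.js' 2>/dev/null | wc -l
```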
Currently, we have low test coverage and no accurate way to measure it. Coverage will be measured with Istanbul as files are migrated to node-qunit, and we aim for roughly 75% coverage.
Measuring progress of increasing code coverage
The "speeding up unit test execution" sub-project gave us the ability to measure code coverage from the command line. We can now monitor progress much more closely by tracking overall code coverage. This is documented in:
At the end of the project we will demonstrate which components were poorly tested but no longer are.
We will measure which files are tested and how long the tests take to run in the current setup (page load + test execution), then compare that with the files tested at the end and the new test execution time (CLI run + test execution). We will compare both cold start and "dev" mode (watching files to re-run tests as code changes).
We will be successful if more files are tested, test coverage is being measured, and the tests run faster.
We will monitor the number of client-side errors. As we add unit tests, we expect that number to decrease as we close out edge cases.
- Before the project starts, we have a baseline of how many files are tested, how many tests are run, and how long they take on a specific machine (we'll have to use the same machine at the end of the project to measure the new setup)
- A Node.js qunit script is added to the npm scripts under test:node (using the qunit package either directly or via mw-node-qunit)
- The node.js tests run on CI per commit via the npm test job
- The existing browser qunit tests are migrated to tests/node-qunit (where it makes sense) and run under Node.js, using ES modules (see Extension:Popups)
- There is a coverage npm script that, when run, generates test coverage reports
- We have measured the new setup and compared the info with the old setup, and reported on success metrics
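A possible shape for the test:node and coverage npm scripts described above (the glob pattern and reporter flags are assumptions; nyc is Istanbul's command-line interface):

```json
{
	"scripts": {
		"test:node": "qunit 'tests/node-qunit/**/*.test.js'",
		"coverage": "nyc --reporter=text --reporter=html npm run test:node"
	}
}
```

With scripts like these, CI can run the Node.js tests per commit via the existing npm test job, and developers can generate coverage reports locally with npm run coverage.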