Right now, the Jenkins job for selenium tests runs roughly like this:
1. Clone MediaWiki core, vendor, and some skins + extensions. (One variation skips 'vendor' and runs `composer install` instead.)
2. Install MediaWiki (the installer automatically detects the cloned extensions on disk and includes them in LocalSettings).
3. Run `npm install` in mediawiki/core (this brings in webdriverio and any other packages used by core's tests).
4. Run `npm run selenium-test` in mediawiki/core. This runs webdriverio with the wdio.conf.js file from mediawiki/core@master.
The wdio.conf.js file in mediawiki/core hardcodes paths to certain wmf-deployed extensions from $IP/extensions.
#### Problems
* When we add an extension to the shared gate job, we automatically pick up its QUnit and PHPUnit tests, but its selenium tests need to be registered manually in mediawiki/core. This violates separation of concerns and is a maintainability hazard: the registration will likely get out of sync, not to mention that release branches and wmf branches all need to be kept in sync as well.
* When an extension that isn't part of the shared-gate job wants to use selenium tests, we presumably have to come up with another strategy (or maybe we have one already, but don't use it for the shared-gate job?).
* Tests from extensions cannot use packages specified in their own `package.json` files, because we only run `npm install` in the mediawiki/core directory.
I suspect this was done intentionally to avoid version conflicts or other run-time problems, but such conflicts can't actually happen in Node.js.
##### Local dependencies
In fact, the current situation (described below) is very unusual for Node CLI programs. It doesn't make sense for program A (from extensions/CirrusSearch.git) to run in the context of program B (from core.git) in a way that only allows program B to have dependencies.
Node.js uses relative discovery for its import paths, and we should make use of that. Not doing so creates a maintainability hazard, similar to the problems we had with our PHP and JavaScript linting pipeline before 2013. We solved that then, and we shouldn't make the same mistake again.
In our development tests for PHP and JS, extensions control their development dependencies through composer.json and package.json, typically for PHPCS, ESLint, Grunt, and other such tools. They specify the names and versions of the packages they need, along with any additional utilities and plugins. This means that if we want to use a newer version of something in core, we can do so. If such an upgrade requires code updates, we make them in core, and then the upgrade is finished and can be merged. Developers in other repositories can choose to do the same, when and if they want, based on their own priorities. Either way, nothing breaks.
The current selenium tests, however, all run from mediawiki/core with only core's dependencies fetched. This is already causing stagnation (see the addition of cucumber support, T179190). Right now everything still uses the same initial versions, but a few months from now we'll find something we want to change or upgrade, and then we'll be stuck having to atomically upgrade everything everywhere at the same time, which will be frustrating and unrewarding.
Now, one might wonder: why would that be? Don't we want consistency? Of course we do. Take a look at our PHP and JS linting pipelines: quite consistent, mostly on the latest phpcs and eslint with the same settings and presets. But we only got there, and keep getting stricter, better, and newer, because the changes can happen gradually across repositories, with everyone moving at their own pace. We can try something, find regressions, and take them one at a time.
Doing it globally doesn't work. It didn't work when we installed JSHint globally in Jenkins: it wouldn't have allowed us to migrate from JSHint to ESLint, and it wouldn't have allowed us to keep improving the codesniffer and eslint rules.
##### Currently
* mediawiki-core/
** node_modules/ (from core's package.json)
** tests/selenium/specs/
*** test.js (sees core's node_modules)
** extensions/
*** CirrusSearch/
**** tests/selenium/specs/
***** tests.js (sees core's node_modules)
This is presumably why we keep adding packages to core's package.json that core itself doesn't use: an extension needed them.
##### Proposed
Quite simple, actually: the Jenkins job needs to run `npm install` in each of the installed skin/extension directories as well. I believe the Jenkins job already has logic for this iteration, in order to append the contents of `tests/selenium/LocalSettings.php` from each repository.
End result:
* mediawiki-core/
** node_modules/ (from core's package.json)
** tests/selenium/specs/
*** test.js (sees core's node_modules)
** extensions/
*** CirrusSearch/
**** {icon plus-square color=green} node_modules/ (from Cirrus's package.json)
**** tests/selenium/specs/
***** {icon exclamation-circle color=blue} tests.js (sees Cirrus's node_modules, then core's node_modules)
Once this is done, extensions can specify the version of `wdio-mediawiki` they are written for, which allows us to iterate on that library over time. See T193088 for more about that.
This also means an extension can use MWBot without silently falling back to core's copy of that library, which could unexpectedly disappear or be upgraded in an incompatible way.
To clarify: this task does not propose running tests from the extension directory; we can revisit that another time. This proposal is fully compatible with our current way of running tests from mediawiki-core's `tests/selenium/wdio.conf.js` file. It is entirely normal, and supported by Node.js, to load additional files at run-time, each of which sees its own dependencies, even when other contexts have the same packages or different versions of them. That's fine, intended, expected, and harmless :)