As part of our overall effort to improve the visibility and shared ownership of browser tests, we'd like to provide teams a simplified view into the current and historical state of their browser tests. It's not yet clear whether this can be achieved with the existing capabilities in Jenkins or whether a new dashboard will need to be created elsewhere—such as a standalone dashboard similar to https://integration.wikimedia.org/zuul/ or a Phabricator application.
1. Current test results should be displayed in channels that are used in everyday team communication, so they are not missed or ignored.
2. Historical results should be available as a timeline to show feature coverage and pass/fail trends.
3. It should be easy to correlate test failures with commit history so that bugs may be quickly squashed or tests refactored.
4. Results should link to verbose build logs in Jenkins and/or job replays in SauceLabs.
5. Displaying the current status by feature and scenario, instead of by Jenkins build, may promote use in acceptance testing.
Jenkins Dashboards + Team IRC Notifications
We may not need to build anything completely new here. Jenkins already provides dashboard views that can be customized to show current and historical browser test trends, and we're already looking to make failures more visible by notifying teams directly through IRC (see T89375). This solution, paired with some best-practices communication to PMs, might satisfy all but requirement #5, albeit not in the cleanest or prettiest fashion, but it would entail far less work and carry less risk of duplicating existing tooling.
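As a rough illustration of the IRC side (the actual T89375 work would likely use a Jenkins IRC plugin for delivery), the per-build notice could be a single formatted line pushed to the team channel. The message shape and names below are assumptions, not an agreed format:

```python
def irc_notification(job, build, status, url):
    """Format a one-line build notice for a team IRC channel.
    The wording is illustrative; a Jenkins IRC plugin would normally
    handle both formatting and delivery."""
    marker = "PASSED" if status == "SUCCESS" else "FAILED"
    return "[browser-tests] {} #{} {} - {}".format(job, build, marker, url)

# Hypothetical job name and URL, for illustration only:
print(irc_notification("browsertests-Echo-linux-firefox", 128, "FAILURE",
                       "https://integration.wikimedia.org/ci/"))
```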
Standalone Dashboard using Logged Metrics and Results
A custom dashboard that consumes logged metrics (perhaps via elasticsearch) would give us a clean slate for presenting more usable and understandable graphs and results than what we currently have in Jenkins. Requirement #5 might still be difficult, however, since the Cucumber results would remain unstructured Jenkins text logs, unless we can log a structured version of them to elasticsearch and consume and present them via the dashboard.
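A minimal sketch of what that structured logging could look like, assuming the Cucumber runs emit JSON reports (Cucumber's `--format json` output) that we flatten into one document per scenario before indexing. The document shape and field names are illustrative, not an agreed schema:

```python
import json

def flatten_cucumber_report(report_json, build_number):
    """Flatten a Cucumber JSON report into one document per scenario,
    suitable for indexing into elasticsearch so a dashboard can group
    results by feature/scenario rather than by Jenkins build."""
    docs = []
    for feature in json.loads(report_json):
        for element in feature.get("elements", []):
            if element.get("type") != "scenario":
                continue
            steps = element.get("steps", [])
            statuses = [s.get("result", {}).get("status", "skipped")
                        for s in steps]
            docs.append({
                "build": build_number,  # lets us link back to the Jenkins run
                "feature": feature.get("name"),
                "scenario": element.get("name"),
                "status": "failed" if "failed" in statuses else "passed",
                "duration_ns": sum(s.get("result", {}).get("duration", 0)
                                   for s in steps),
            })
    return docs

# Toy report with one passing and one failing scenario:
report = json.dumps([{
    "name": "Search",
    "elements": [
        {"type": "scenario", "name": "Basic search",
         "steps": [{"result": {"status": "passed", "duration": 120}}]},
        {"type": "scenario", "name": "Suggestions",
         "steps": [{"result": {"status": "failed", "duration": 300}}]},
    ],
}])
for doc in flatten_cucumber_report(report, build_number=42):
    print(doc["scenario"], doc["status"])
```

Documents in this shape could be bulk-indexed and then queried per feature and per time range, which is what the timeline and per-scenario views need.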
Publish to Custom Phabricator Application
One (very ambitious) idea was to write a custom Phabricator application that provides an endpoint for test results to be submitted and presents them under the corresponding project pages. One advantage of this approach over the others is that the test result data could be easily correlated with the responsible commits (important as long as browser tests remain post-merge/non-voting) and possibly even presented alongside the Diffusion commit tree. Another advantage is visibility: results would sit in close proximity to the tools teams already use for their workflow, an important aspect for facilitating future acceptance testing efforts. At that point, browser tests could represent true acceptance tests and integrate with acceptance criteria in Phabricator.
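To illustrate the commit correlation such an application could offer: given the commit each test run was executed against, a failure can be narrowed to the commits that landed between the last passing run and the first failing one. The function and data shapes below are purely hypothetical:

```python
def suspect_commits(runs, commit_log):
    """Given scenario runs (oldest first) and the repository commit log
    (oldest first), return the commits that landed between the last
    passing run and the first subsequent failing run -- the candidates
    to examine when a post-merge, non-voting browser test breaks."""
    last_pass = first_fail = None
    for run in runs:
        if run["status"] == "passed":
            last_pass, first_fail = run, None
        elif first_fail is None:
            first_fail = run
    if last_pass is None or first_fail is None:
        return []
    shas = [c["sha"] for c in commit_log]
    lo = shas.index(last_pass["commit"])
    hi = shas.index(first_fail["commit"])
    return commit_log[lo + 1 : hi + 1]

# Hypothetical data: build 41 passed at commit "aaa", build 42 failed
# at "ccc", so "bbb" and "ccc" are the suspects.
runs = [
    {"build": 41, "status": "passed", "commit": "aaa"},
    {"build": 42, "status": "failed", "commit": "ccc"},
]
log = [{"sha": "aaa"}, {"sha": "bbb"}, {"sha": "ccc"}]
print([c["sha"] for c in suspect_commits(runs, log)])  # -> ['bbb', 'ccc']
```

This is exactly the kind of view that is awkward to build on top of unstructured Jenkins logs but cheap once results and commits live in the same system.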
Some big disadvantages of this option include the initial build investment and the ongoing maintenance burden that come along with building a Phab plugin. There's also the risk of building with a focus on acceptance testing when the practice of ATDD (acceptance test-driven development) hasn't really been pursued in earnest by any team as of yet (that ol' cart-before-the-horse thing).