The following checklist tracks completion of this Epic's definition. Check items off as you complete them.
- Success Metrics
- External Dependencies
- Product Plan
- User Stories
- User Story Phab Tickets
- Metrics Implementation
- Metrics Phab Tickets
- Delivery Date
As a Wikipedia iOS project contributor, my changes are automatically deployed to testers after being merged.
Internal readership goal (see Q1 themes)
There have been a number of incidents shaking developers' confidence in the stability of the codebase, especially the 4.1.0 & 4.1.6 releases, but also smaller surprises, such as incorrect assumptions about legacy code behavior. We hope setting up continuous integration will give developers immediate feedback on their changes and an incentive to write unit tests.
Feedback on changes that includes test results, test coverage, and code quality metrics should give a clear signal about the impact of those changes, as well as the overall quality trend of the codebase. This should help developers manage and gradually improve reliability while amortizing tech debt.
- T105351 Test coverage: 18.06%
- Number of unit tests: 189
- T105351: Gather metrics as part of testing/CI (done via Codecov)
- Developer confidence: take a baseline survey and/or use notes from previous retros (coordinate w/ Max Binder to integrate into the health check on July 29th)
We abandoned the code quality metric for now, since OCLint is not cooperating with our project at the moment. We might revisit it later.
- [ ] T106418: Code quality (cyclomatic complexity, function length, etc.; see OCLint for details)
Given I am working on the Wikipedia iOS repo
When I submit or update my changes for code review
Then CI should run a job configured by iOS engineers
And the job should lint & test the changes
And when the job finishes, it should post results back to the patch (±1 Code-Review and -1/+2 Verified)
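The acceptance criteria above could be sketched as a minimal Travis configuration. This is illustrative only: the workspace, scheme, image, and simulator names are assumptions, not the project's actual settings, and posting review labels back to the patch would be handled by the CI/Gerrit integration rather than the build script itself.

```yaml
# Hypothetical .travis.yml sketch; all names below are assumptions.
language: objective-c
osx_image: xcode6.4
script:
  # Lint & test the changes; reporting results back to the review
  # (Code-Review/Verified labels) is left to the CI integration.
  - xcodebuild -workspace Wikipedia.xcworkspace -scheme Wikipedia -destination 'platform=iOS Simulator,name=iPhone 6' test
```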
Metrics we'd like to know about, but aren't sure how to measure.
- Code review duration
- We don't know how to get baselines for this from Gerrit (via the ssh API?); we're more familiar w/ the GitHub API (where this might even already be available)
- Defect rate
- Not only is this hard for us to measure, but our ability to find & report bugs is a bit lacking at the moment, so an upward trend in reported defects might actually be a good sign
- Community engagement
- Mainly, we'd like to know if it goes down (as a failure metric), but as with CR duration, baselines could be hard to get
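For the code review duration question above, Gerrit's ssh query API emits line-delimited JSON with `createdOn` and `lastUpdated` epoch timestamps per change, which would let us approximate duration. A rough sketch, assuming that output format (the sample record below is fabricated for illustration):

```python
import json

# Sketch (not the team's actual tooling): approximate review duration from
# the line-delimited JSON that `ssh -p 29418 <host> gerrit query
# --format=JSON status:merged ...` emits.
sample_lines = [
    '{"number": "12345", "createdOn": 1436400000, "lastUpdated": 1436572800}',
]

def review_durations(lines):
    """Yield (change number, review duration in days) per change record."""
    for line in lines:
        record = json.loads(line)
        if "createdOn" not in record:  # skip Gerrit's trailing stats row
            continue
        seconds = record["lastUpdated"] - record["createdOn"]
        yield record["number"], seconds / 86400.0

print(dict(review_durations(sample_lines)))  # {'12345': 2.0}
```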
Release engineering (Gerrit, Zuul) or Travis CI
External dependency reliability (e.g. Travis and Coveralls/Codecov uptime).
Following our discussions at the Lyon Hackathon (T98974), and based on Readership's Q1 theme to "Improve developers' ability to develop features quickly and reliably to serve readers across desktop, mobile web and apps", we're going to get our CI infrastructure to an "MVP" state. GitHub and Travis were chosen as a pragmatic way to make these improvements while minimizing the load on the iOS and other teams.
Prototype Travis by setting it up on a fork of the GitHub repo. If that goes well, move the main dev workflow to GitHub and set up Travis there.
@BGerstle-WMF set up a fork of the Wikipedia iOS GitHub repo to work with Travis CI. Results were encouraging:
(See blocking tasks)
(See blocking tasks and their blocking tasks)
Once the 4.1.7 update is released (T106106), we will start chipping away at these tasks. We have some capabilities already in place, so it should only be a matter of implementing metrics, gathering baselines, then hooking up Travis to run it all as part of code review.
|Phase|Estimate|Status|
|---|---|---|
|Prototyping|1 week|DONE|
|Beta Testing|4 weeks| |
Two weeks from when 4.1.7 is submitted for App Store review, so approximately August 7th.