Definition Progress
The following is a checklist for completing the definition of this Epic. Check items off as they are completed.
- Summary
- Rationale
- Success Metrics
- External Dependencies
- Unknowns
- Product Plan
- Prototyping
- MVP
- User stories
- User Story Phab Tickets
- Metrics Implementation
- Metrics Phab Tickets
- Estimates
- Delivery Date
Summary
As a Wikipedia iOS project contributor, I want my changes to be automatically deployed to testers after they are merged.
Goal Visibility
Internal readership goal (see Q1 themes)
Rationale
There have been a number of incidents that have shaken developers' confidence in the stability of the codebase, most notably the 4.1.0 and 4.1.6 releases, but also smaller issues such as basic assumptions about legacy code behavior turning out to be wrong. We hope that setting up continuous integration will give developers immediate feedback on their changes and an incentive to write unit tests.
Feedback on changes that includes test results, test coverage, and code quality metrics should give a clear signal about the impact of those changes, as well as an overall quality trend for the codebase. This should help developers manage and gradually improve reliability while paying down tech debt.
Metrics
- T105351 Test coverage: 18.06%
- Number of unit tests: 189
- T105351: Gather metrics as part of testing/CI (done via Codecov; see the sketch after this list)
- Developer confidence: take a baseline survey and/or use notes from previous retros (coordinate with Max Binder to integrate into the health check on July 29th)
We've abandoned the code quality metric for now, since OCLint is not cooperating with our project at the moment; we might revisit it later.
- [ ] T106418: Code quality (cyclomatic complexity, function length, etc.; see OCLint for details)
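As a rough sketch of the Codecov piece mentioned above (assuming a Travis job that already runs the tests with coverage enabled), the upload could be as small as an `after_success` hook. This is an illustration only, not necessarily what T105351 actually implemented:

```yaml
# Hedged sketch: upload coverage to Codecov after a successful Travis build.
# Assumes the test step already produced Xcode coverage data; the Codecov
# bash uploader collects whatever coverage reports it finds and sends them up.
after_success:
  - bash <(curl -s https://codecov.io/bash)
```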
Acceptance Criteria
Given I am working on the Wikipedia iOS repo
When I submit or update my changes for code review
Then CI should run a job configured by iOS engineers
And the job should lint & test the changes
And when the job finishes, it should post results back to the patch (+/-1 CR and -1/+2 Verified)
Difficult Metrics
Metrics we'd like to know about, but aren't sure how to measure.
- Code review duration
- We don't know how to get baselines for this from Gerrit (via the SSH API?); we're more familiar with the GitHub API (where this data might even already be available)
- Defect rate
- Not only is this hard for us to measure, but our ability to find and report bugs is a bit lacking at the moment, so a rising defect count might actually be a good sign (better detection rather than worse quality)
- Community engagement
- Mainly, we'd like to know whether it goes down (as a failure signal), but as with code review duration, baselines could be hard to get
External Dependencies
Release engineering (Gerrit, Zuul) or Travis CI
Unknowns
External dependency reliability (i.e. Travis and Coveralls/Codecov uptime).
Product Plan
Following our discussions at the Lyon Hackathon (T98974), and based on Readership's Q1 theme to "Improve developers' ability to develop features quickly and reliably to serve readers across desktop, mobile web and apps", we're going to get our CI infrastructure to an "MVP" state. GitHub and Travis were chosen as a pragmatic way to make these improvements while minimizing the load on the iOS and other teams.
Prototyping
We're prototyping Travis by setting it up on a fork of the GitHub repo. If that goes well, we'll move the main development workflow to GitHub and set up Travis there.
@BGerstle-WMF set up a fork of the Wikipedia iOS GitHub repo to work with Travis CI. Results were encouraging:
https://travis-ci.org/bgerstle/apps-ios-wikipedia
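For reference, a minimal `.travis.yml` for a prototype like this might look roughly like the sketch below. The `osx_image`, workspace, and scheme names are assumptions and would need to match the actual project; this is not necessarily what the fork uses.

```yaml
# Minimal sketch of a .travis.yml for the prototype fork.
# The image, workspace, and scheme names below are assumptions.
language: objective-c
osx_image: xcode6.4
# before_install:
#   - pod install   # only needed if Pods/ is not checked into the repo
script:
  - xcodebuild -workspace Wikipedia.xcworkspace -scheme Wikipedia -sdk iphonesimulator -destination 'platform=iOS Simulator,name=iPhone 6' test
```

If the prototype holds up, a lint step could be added to the same `script` list later to cover the "lint & test" part of the acceptance criteria.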
MVP
(See blocking tasks)
User Stories
(See blocking tasks and their blocking tasks)
Timeline Estimate
Once the 4.1.7 update is released (T106106), we will start chipping away at these tasks. We have some capabilities already in place, so it should mostly be a matter of implementing metrics, gathering baselines, and then hooking up Travis to run it all as part of code review.
| Task | Estimate |
| --- | --- |
| Prototyping | 1 week (done) |
| User Testing | n/a |
| Mockups | n/a |
| Development | 2 weeks |
| Beta Testing | 4 weeks |
Delivery Estimate
Two weeks from when 4.1.7 is submitted for App Store review, so approximately August 7th.