Testing strategy:
- Collect 200K revision IDs from the random API (wikis, ns 0); optionally mix in some IDs from EventStreams.
  - Ref risk: avoid de, it, pl, bn, ja for now.
  - Ref need: use ['fa', 'it', 'zh', 'ru', 'pt', 'es', 'ja', 'de', 'fr', 'en'] for now.
  - Sample set: random API (no redirects); 5 to 10 latest revision IDs for each random article.
- Send all those requests within an hour (use the WME dev token); use Go channels and goroutines to pace the sends across the hour.
- Record number of revisions used from each project.
- Record latency for all responses with status code 200
- Record error for all responses with status code other than 200
- Record the percentile distribution of latency; also min and median.
- Record the % of requests returned within 500 ms; between 500 ms and 1 s; between 1 s and 2 s; between 2 s and 5 s; above 5 s.
- Do a small test locally; add a GitLab CI pipeline to the repo; deploy to dev (ref risk / ref need); test on dev.
- Dump the results to S3, or log them.