= Session =
* Track: Testing
* Topic: API Integration Testing
=Description=
[description of the session - TBD]
Slides: https://docs.google.com/presentation/d/19h6C9XjgU-f0_rnRYntBXp22YpBueCbUwFG8aPHPyQc/edit#slide=id.g6afda2d481_0_5
=Questions to answer and discuss=
**Question:**
**Significance:**
**Question:**
**Significance:**
= Related Issues =
* ...
* ...
=Pre-reading for all Participants=
* [add links here]
----
=Notes document(s)=
Wikimedia Technical Conference\
Atlanta, GA USA\
November 12 - 15, 2019\
\
**Session Name / Topic**\
End-to-End Integration Testing using APIs\
Session Leader: Daniel + Kosta; Facilitator: Greg; Scribe: Nick\
<https://phabricator.wikimedia.org/T234636>\
\
**Session Attendees**\
Antoine, TheDJ, Elena, Sam, Maté, joaquin, brooke, daniel, kosta\
\
**Notes: (pre-existing notes are copied from the slides)**\
- This is about writing end to end tests *using* APIs
- Why do we do this? Tested code can be changed without fear. We want
to change a lot of things around. When refactoring code, we want to
make sure we don't break anything for the users, so we test the
things the users do.
- tested code can be changed without fear
- Testing all the way down
- At the base you have Unit, then integration, at the top
end-to-end tests
- Functions, stories, and scenarios
- typically, end-to-end means user stories, tested with Selenium
- Testing via the API instead of via the UI directly
- Abstract from the UI/DOM for stability
- The Black Box
- The less we know about the implementation, the more stable the
test is, and the closer it is to the actual client's experience.
Language agnostic: you just want the workflow to work. Testing
over the API, you pay for that with performance, because
everything goes over the network; no shortcuts.
- Testing the UI is brittle
- While we want to know that certain buttons exist and do the
right thing, we also want to:
- a) Test the APIs
- b) Test complex flows independently of the UI
- The thing we actually want to test is that the application logic
works.
- This cuts out the user experience, but tests the application
logic intent.
- test all the things!
- we initially wrote things for the action API (api.php),
rest.php, RESTbase, and recently Kask, etc.
- for MW of course we created some helpers to make the tests more
idiomatic, dealing with things like the secret key needed to run
jobs and such.
- test code written in PHP, JS, Go, etc.
- test individual modules and parameters, but also full user
stories, and complex scenarios.
- Mocha & Chai
- We're using Node's Mocha & Chai frameworks. Others are
possible, but this runs on Node.js, so it is easy to install and run.
Large community and large number of plugins.
- Even for a PHP developer, it makes sense within a day or two.
- Async, flexible, straightforward, common
- Does not care about implementation language
- Slower than doing it in PHP, but
- stays stable when backend gets re-implemented (relevant
mainly for REST endpoints)
- easier to run in parallel
- avoids baggage of existing phpunit tests
- Can do cross-wiki scenarios, standalone services
- Fixtures
- To make it quicker and less painful, you can use fixtures.
- Tool accounts: 2 users (1 new user, 1 regular), 1 admin, 1
superadmin (bureaucrat)
- Fixtures are useful: familiar, available, quick.
- Fixtures are often brittle! Might leak data into another test.
Your tests should NOT change preferences or permissions of the
test users. If you want to do that, make a new user.
- SW: is this in core and being run?
- Not yet, but soon.
- also for extensions if the extension exposes API
modules
- The extension would put tests into a directory, gets
run by CI
- Await, don't Delay
- async / await
- deferred updates
- jobs
- replication
- You can create the 3 tests that you need and run them in parallel;
uses promises (async/await). JS forces you to do that
because HTTP is an async operation, so tests need to be written in
an async-aware way; hence everything starts with "await", which is
annoying but you get used to it.
- Needed to make 2 operations sequential, to observe effects of
actions.
- MW's own model is inherently async. The interaction model
over API and UI is eventually consistent. There's no
guarantee that you'll see an edit *immediately* after
someone makes it (although the chronology protector makes
it so that a user sees their own edits). It can take
several seconds for deferred updates and replication to
complete, which makes things difficult. If they go via the
job queue the delay can be even worse.
- Solutions: the test framework makes the job queue run all pending
jobs, which slows things down but makes sure all updates
are done. In practice this also handles deferred
updates/replication, though that is not guaranteed.
- in order to do that you trigger all jobs via
Special:RunJobs, using the secret key of your installation. It was
'fun' setting this up in CI, but for local setups it is
basically copying a string between 2 config files.
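The await-everywhere style described above can be sketched in plain Node, with no wiki involved; `apiCall` below is a stand-in for a real HTTP request to api.php, not a framework function:

```javascript
// Sketch of the async/await style: independent requests run in parallel via
// Promise.all, while dependent steps are awaited one after the other.
// `apiCall` simulates a network round trip with a timer.
function apiCall(name, ms) {
    return new Promise(resolve => setTimeout(() => resolve(`${name} done`), ms));
}

async function scenario() {
    // Three independent setup calls: run them in parallel.
    const setup = await Promise.all([
        apiCall('createUserA', 30),
        apiCall('createUserB', 30),
        apiCall('createPage', 30),
    ]);

    // Dependent steps: the edit must complete before we query its effect.
    await apiCall('edit', 10);
    const backlinks = await apiCall('queryBacklinks', 10);
    return { setup, backlinks };
}

scenario().then(r => console.log(r.backlinks)); // prints "queryBacklinks done"
```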
- [code slide]
- This code example lists backlinks, like Special:WhatLinksHere.
- We've got a "before" block
- bob is a fixture.
- a line that calls the API
- some shortcuts here, for plain actions, list queries, property
queries
- lots of lines starting with await.
- Let's do this!
- Groups of 3 or 4
- Think of a feature or bug in MW core or extension
- Think of a workflow ("user story") to test it
- Write down API calls and response assertions.
- No, really, let's do this!
- Clone mediawiki/tools/api-testing
- Follow setup instructions
- Write tests
- Run tests
- Git review
Groups work on tests\
- TheDJ:
- 1st example:
- we looked at ProofreadPage
- need to run an API query, where the property is proofread
- do the setup of that page,
- quality check to see it's not set initially
- need a user with rights to make an edit to the page, to make
a change that changes the quality of the page
- QUESTION: We wondered whether to use the existing admin
user, or create a new user and grant them the right (answer: use the
admin fixture if it has the right by default, otherwise
create a user and grant the right).
- 2nd example:
- TimedMediaHandler
- the video is divided into derivatives,
- need configuration for those derivatives, incl. the names of the keys
- need a new user with the right to reset transcodes
- make an API call to get the transcode keys for that video
- execute an API request to get all transcodes, and compare
- 3rd example:
- watchlist expiry
- a user to watch the page
- small expiry time,
- API GET to see if the page is on the watchlist
- API check a few seconds later to see if it has disappeared
properly
- would be nice if we could force the clock!
- QUESTION: either via API or cron job?
- if you do an edit, the test runner will run all jobs
until there's nothing left.
- which can take hours for video tests...
- Joaquin: having the test framework run all jobs after a test
edit is different from live behaviour, but necessary to
make the effects observable. Just don't use the
convenience function.
- kosta: A way to test this without jobs is to have a
different API do the watchlist expiry purge, so we are not testing
the job queue expiration
- DK: force a clock into the header or something like
that?
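Short of forcing the clock, one common way to observe eventually-consistent effects such as watchlist expiry is to poll until the condition holds or a timeout passes. This is a generic pattern sketch, not a helper the framework is known to provide:

```javascript
// Generic polling helper for eventually-consistent effects (deferred updates,
// replication, expiry): retry a predicate until it holds or a timeout passes.
// Not part of the api-testing framework; shown as a pattern only.
function sleep(ms) {
    return new Promise(resolve => setTimeout(resolve, ms));
}

async function waitFor(predicate, { timeoutMs = 2000, intervalMs = 50 } = {}) {
    const deadline = Date.now() + timeoutMs;
    while (Date.now() < deadline) {
        if (await predicate()) return true;
        await sleep(intervalMs);
    }
    return false;
}

// Usage sketch: wait until a simulated watchlist entry has expired.
const expiresAt = Date.now() + 200;
waitFor(() => Date.now() >= expiresAt)
    .then(ok => console.log(ok ? 'expired' : 'timed out')); // prints "expired"
```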
- Joaquin:
- protect page
- protect pages 1, 2, 3, 4
- timing causes MySQL-related problems (gap lock)
- related revision diff tests
- list close-by items missing images. Add image, check that
item is no longer listed.
- Wikibase: do an edit to a WB property used in an infobox,
make sure it's updated on WP.
- needs the WB change dispatcher to run.
- PageImages: test images that don't conform to criteria,
adding an image to the page and checking that the API returned the
proper page image.
- needs file upload support
- QUESTIONS
- is it possible to do file uploads just from the API? Yes,
but complicated. Needs a convenience function.
- maintenance scripts to upgrade the wiki
- DK: Acceptance tests would be run against the live site.
The framework doesn't care, but some things cannot then be done.
E.g. you don't want to mess with real content, and you might
not have access to the secret key, or to root users.
- Will be interesting to see what environments we end up with,
like the beta cluster or test.wikipedia.org.
- DK: Homework!
- we've come up with nice scenarios for testing
- please check out the repo, mediawiki/tools/api-testing
- write tests and submit them to Gerrit
- SW: for tests in extension directories, how will node
find the right modules?
- DK: Not sure what the best way is. Write a Mocha
config. Can be automated, see
Extension:GenerateMochaConfig.
- KH: Should we tag you as the reviewer?
- Yes, I'll change it so that it does that
automatically
\
----
\
General instructions\
This sheet is for scribes and participants to capture the general
discussion of sessions. \
\
1. Note-takers should go for a more 'pure transcription' mode of
documentation.
1. Don't try to distill a summary of the core details unless
you're confident in your speed.
2. All note-takers should try to work in pairs and pass the lead role
back and forth when each speaker changes. Help to fill in the gaps
the other note-taker might miss.
1. When you are the active note-taker in a pair, please write
"???" when you missed something in the notes document.
2. If a session only has one note-taker present, feel free to tag a
session participant to help take notes and fill in the gaps.
3. In your notes, please try to highlight the important points (that
are usually unspoken):
1. INFO
2. ACTION
3. QUESTION
4. It's also good to remind session leaders, facilitators and
participants to call out these important points, to aid in note
taking.
5. Sessions might have activities that will result in drawings,
diagrams, clustered post-it notes, etc.
1. Please tag a session participant to capture these items with a
photo and add them to the Phabricator ticket.
6. Some sessions might have breakout groups which means that there will
be simultaneous discussions.
1. Session leaders should direct each group to appoint a scribe to
take notes (in this document).
7. At the end of each day, notes and action items will need to be added
into the related Phabricator ticket (workboard:
<https://phabricator.wikimedia.org/project/board/4276/> ) for each
session
1. This can be done by any and all conference attendees.
8. Additional information about note taking and session facilitation:
<https://www.mediawiki.org/wiki/Wikimedia_Technical_Conference/2019/NotesandFacilitation>
=Notes and Facilitation guidance=
https://www.mediawiki.org/wiki/Wikimedia_Technical_Conference/2019/NotesandFacilitation
----
=Session Leader(s)=
* @daniel
* @kostajh
=Session Scribes=
* @Quiddity
* [name]
=Session Facilitator=
* @Bstorm
=Session Style / Format=
* [what type of format will this session be?]
----
**Session Leaders** please:
[] Add more details to this task description.
[] Coordinate any pre-event discussions (here on Phab, IRC, email, hangout, etc).
[] Outline the plan for discussing this topic at the event.
[] Optionally, include what this session will //not// try to solve.
[] Update this task with summaries of any pre-event discussions.
[] Include ways for people not attending to be involved in discussions before the event and afterwards.
----
Post-event summary:
* ...
Post-event action items:
* ...