Session
- Track: Testing
- Topic: System level testing: patterns and antipatterns with Selenium
- Scheduled for Day 1 - Tuesday November 12th 14:45-15:30 (45 minutes)
Description
Testing at the system level is fraught with unforeseen issues. This session is an overview of the current system level testing framework with a focus on anti-patterns and patterns to follow.
Questions to answer and discuss
Question: Should a repository have as few Selenium tests as possible, or as many as possible?
Significance: Too few tests means critical bugs might be missed; many tests require a lot of maintenance.
Question: Is the page object pattern useful?
Significance: It's another layer of complexity. Does it have a positive impact, or does it just complicate things?
Question: Should the API be used to speed up tests and make them more robust?
Significance: API-based setup can be both faster and more robust than driving the UI.
Related Issues
Pre-reading for all Participants
Session Leader(s)
Session Scribes
- @brennen
- [name]
Session Facilitator
Session Style / Format
- Discussion and code review.
Notes
- Full list of examples that @zeljkofilipin reviewed: T234635#5658213
Timo: Raise your hand if you have not written a Node/Selenium test. (3 no, 3 yes) Today we'll cover a couple of examples of how tests are written, how they work, and what makes a good or bad one. Best practices and gotchas. The session will be in two parts: some slides, and then looking at some real code to discuss as examples.
Zeljko: Story: I was on a plane and reviewed all the Selenium code ever written for MediaWiki. I was looking for good and bad examples, and found three such examples, plus a strange one!
(Slide quote): "All happy families are alike; each unhappy family is unhappy in its own way."
Zeljko: From Tolstoy; replace "family" with "tests".
1. A good test is as small as possible (while still providing value).
Kaldari: A polarity: as small as possible, while also providing as much value as possible. You could write a profusion of small tests, or a few high-value tests.
Z: If more complexity provides value, then do it.
T: A small test is often good, and a big test can sometimes be bad, with the exception that if small tests duplicate too much, that violates the rule.
Kosta: HelpPanel uses a JavaScript widget to [?] which shows up in RecentChanges. The thing that Selenium can provide (that others can't) is the full user interaction of seeing it appear on the page. IIUC, you're advocating for the opposite, where many small discrete parts are tested?
Z: Not necessarily.
T: (How small is small?)
Z: Just opening a page and checking for an element might be too small for a Selenium test; that's a trivial level.
T: If the test is really small, it might be something that shouldn't be an integration test; maybe it should just be a unit test. The same value could be gotten from a unit test.
Cormac: Hypothesis: if each test incorporates the previous test, then your test is too small.
2. A good test is as simple as possible.
Z: It is more important for tests to be simple than DRY (don't repeat yourself). An XP/Clean Code principle.
T: Don't have an abstract utility function that you [???]
It might be more beneficial to have a test be independent and easy to read, if the alternative just saves a few characters.
3. A good test tests as little as possible. When it breaks, it should be easy to find out why.
Z: This ties into being small and simple. For complex tests, when one breaks, any of the many things involved could be buggy.
James F: I think the narrowness of the scope is relative to the scale of the type of test. Unit tests are small; integration tests are potentially very big. Login.
Z: In this case, small means relative to the entire scale of everything.
4. A good test uses the API for setup, as much as possible. It's both faster and more robust.
Z: Use whatever helpers you can; setting up rows in databases via the APIs can be faster and more reliable.
Z: An abuse of this rule would be creating an account with the API when you're trying to test whether the signup form works.
T: What also ties into this: in an earlier version of our current Node.js stack, we exposed a page object as something an extension could call into. Which meant the page object could be used by an extension, which would then instantiate that page to create a user. If your UI is tightly coupled to assuming the user has just signed up, that's legit and could be hardcoded in your extension test. But more commonly, the test will likely not need that. E.g. RecentChanges tests should not create all the edits to be displayed there as the current viewer, but use the API instead.
Kosta: Related to this: could we use a maintenance script in core to do some of this setup stuff? Maybe useful also for local development. It might be faster than using a Node.js bot or the API...
Z: Whatever makes sense. You should speed up the setup as much as you can, but ???
5. A good test uses page objects. Exceptions are possible but not probable.
Z: But I've yet to see an exception that is justified.
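To make principle 5 concrete, here is a minimal, hypothetical page-object sketch. The class name, selectors, and the injected `findElement` function are all invented for illustration (real WebdriverIO page objects would use the global `$`/`browser` APIs); the point is only the structure: selectors in one place, methods named after user intent.

```javascript
// Hypothetical page object sketch. A findElement function is injected so
// the structure can be shown without a running browser; in real
// WebdriverIO code these lookups would use the global $() API.
class LoginPage {
  constructor(findElement) {
    this.findElement = findElement;
  }

  // Selectors live in one place, behind intent-revealing names.
  get username() { return this.findElement('#wpName1'); }
  get password() { return this.findElement('#wpPassword1'); }
  get loginButton() { return this.findElement('#wpLoginAttempt'); }

  // The test then reads like user actions: type, type, click.
  login(user, pass) {
    this.username.setValue(user);
    this.password.setValue(pass);
    this.loginButton.click();
  }
}
```

A test would then call `new LoginPage(...).login('Some user', 'password')` rather than repeating raw selectors inline, which is the readability benefit Florian describes below.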
Z: The disadvantage of a page object is that there's yet another layer of abstraction, but it makes the test code simpler. Test code can read like user actions; the complexity is hidden away in page objects. The thing that changes the most is the UI.
Z: Anybody not like page objects?
T: How do page objects tie in with the earlier point of avoiding DRY code in tests ("don't repeat yourself")?
Z: There's one repo here that has one simple test; it's fine. Do it as simply as possible.
T: Multiple feature files that cover the same page: abstract them into a separate file?
Z: If two tests use the same element...
Florian: It's beneficial to have a page object because it abstracts the technical details and explains the intent. I don't want to "click div 123", I want to "click the sign-up button". This becomes explicit by using a function with such a name. It makes tests more readable.
T: Is there an intermediary between a page object and inline selectors?
Z: Not really, except "just a function in a file". The advantage of page objects is that they are where things are located and easily findable.
6. A good test doesn't depend on others. The test suite should pass when tests are run in random order or in parallel.
Z: If anything works when you run serially but fails when you run in parallel, having independent tests makes it easier to debug failures, e.g. when the testing framework changes the order in which you run, from alphabetical to something else.
7. A good test sets up its environment: users, pages, files.
Z: Your test can't assume things are there. It should not depend on the environment being in a certain state.
Z: E.g. a branching condition in a test file is unexpected.
Z: Especially branching on conditions based on the state of the environment. Bad.
8. A good test is at a lower level. What could be tested at a lower level should not be tested at a higher level.
Z: There should be as few high-level tests as possible.
If you have more than a handful, you might be following an antipattern.
Z: If you can do it in a unit test, etc., then do so.
Z: This list is incomplete. Please help me complete it. If you've got ideas, please share them.
9. "Never trust a test you did not see fail." (quote from unknown source)
The https://phabricator.wikimedia.org/T234635 description contains post-event action items.
Z: Software testing antipatterns blog by Kostis Kapelonis: http://blog.codepipes.com/testing/software-testing-antipatterns.html
Z: Pyramid / ice cream cone: https://watirmelon.blog/testing-pyramids/
Z: Selenium tests don't require you to change your code, unlike unit and integration tests. Don't do that.
T: Selenium is the path of least resistance if your code isn't in a very good state.
Z: It might be a good transition state. E.g. "My code is spaghetti and I need a safety net": a good temporary solution is many Selenium tests.
T: The search suggestions system can give multiple items from different namespaces, alphabetizations, etc. All of that is unit testable. User tests should cover "did everything interact that should have".
Z: started with ???
Elena: Another reason for having so many Selenium tests could be that manual testing (repetitive tasks) can be automated that way without changing a lot in the unit test/integration test layers (as an intermediate step).
Z: I didn't even want to mention manual regression tests here! But yes, similar use cases.
https://www.mediawiki.org/wiki/Selenium has an examples section with an up-to-date list of all repos that have Selenium tests. You could use this to find examples.
Looking at real code (Zeljko projecting):
Z: A few repos moved from Ruby and Cucumber to Node and Mocha; they're just out of scope for today.
Z: Please reach out to me for pairing if you'd like me to look at other repos.
Good example (MediaWiki core, page.js)
Z: A beautiful new way to use the API. A before hook runs for each test suite. Use Util.getTestString.
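For context on the helper mentioned above: `Util.getTestString` (from the wdio-mediawiki helpers used in MediaWiki core) produces unique strings so that repeated or parallel runs don't collide on page names, user names, etc. The following is a hypothetical sketch in that spirit, not the actual implementation:

```javascript
// Hypothetical sketch of a unique-test-string helper, in the spirit of
// Util.getTestString from wdio-mediawiki (not the real implementation).
// A caller-supplied prefix keeps names recognizable; a timestamp plus a
// counter and some randomness keeps them unique across runs.
let counter = 0;
function getTestString(prefix = '') {
  counter += 1;
  return prefix + Date.now().toString(36) + '-' + counter + '-' +
    Math.random().toString(36).slice(2, 8);
}
```

A test might call `getTestString('Beta-')` in a `before()` hook to name a page it is about to create, so it never depends on pre-existing wiki content (principle 7).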
Z: Util.getTestString has sometimes had problems with encoding (UTF etc.).
Kosta: My main frustration with Selenium: it breaks not because something changed in core, but because the assertion runs before that first line is finished, basically. How can we promote better practices around that? My first reaction when something fails is to think it's a flaky test, not that something has changed.
Florian: Is that about reloading a page? If there's JS involved, you might need to add "waitFor" patterns. Similar in Angular.
Bad example (TemplateWizard)
Z: It scored all the antipattern points! L28-L98, "basic use of templatewizard": it uses node selectors, one huge file, and all the details are inside the test instead of in page objects. Positively, it waits for elements to be present.
Strange example (MobileFrontend)
Z: Tried to reuse the existing Cucumber test suite. It worked for some definition of "worked", but the resulting test code is strange. Slightly too complicated. Assertions should be exclusively at the end. Not horrible, but strange.
T: The function names used here are stringifications of function files meant to relate to the former Cucumber tests.
Z: Do whatever you need, but don't count on it being your endgame.
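Principle 4 above (use the API for setup) can be sketched roughly as follows. The `buildEditParams` helper is invented for illustration; it only assembles the request body for a MediaWiki action API `action=edit` call, leaving out the API URL, token fetching, and HTTP client, which a real suite would handle in its framework helpers.

```javascript
// Sketch of API-based test setup (principle 4): instead of driving the
// browser through the edit form, a before() hook would POST an
// action=edit request to the MediaWiki action API. This hypothetical
// helper only builds the request body.
function buildEditParams(title, text, token) {
  return new URLSearchParams({
    action: 'edit',
    format: 'json',
    title,
    text,
    token // CSRF token, normally fetched via action=query&meta=tokens
  });
}

// In a test suite this might be used roughly like:
// before(async () => {
//   await fetch(apiUrl, { method: 'POST', body: buildEditParams(...) });
// });
```

Creating the fixture page this way is typically faster and less flaky than clicking through the editor, while the browser is reserved for the interaction actually under test.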
Originally from https://etherpad.wikimedia.org/p/WMTC19-T234635
Post-event action items:
- recommended reading for all participants
- actions for @zeljkofilipin
  - review meeting notes
  - create a wiki page with recommended test architecture
  - submit cleanup patches for various repositories
  - reach out to find out why some teams did not find the Selenium documentation, or why it wasn't useful if they did
  - reach out to teams offering code review and pairing on refactoring