
Identify our browser test coverage
Closed, Duplicate · Public

Description

The goal of this task is to identify which parts of our codebase and features of the site are covered by browser tests. We'll use the information to make a decision on whether to create more tests or remove obsolete ones.

A/C

  • Create a user-facing feature matrix of MF or update the one we already have: [insert link here, I cannot find it]
  • Create an efficient method of mapping browser tests to features. This will be useful for identifying coverage now, and again in the future if we decide to repeat this exercise periodically (a rough sketch of such a method follows below).
  • Match features with browser tests. The outcome is some kind of table where on the X axis we have features and on the Y axis we have browser tests.
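
As a rough illustration of the mapping method mentioned above, a small script could pull scenario names and tags out of the Gherkin feature files, ready to be matched against a hand-maintained feature list. This is only a sketch: the directory path and tag names below are assumptions, not the actual layout of the repository.

#!/usr/bin/env python3
"""Sketch: enumerate browser test scenarios so they can be matched to features.

Assumes the Cucumber feature files live under tests/browser/features/
(hypothetical path) and that scenarios may carry tags such as @smoke,
@integration, or @skip.
"""
import re
from pathlib import Path

FEATURE_DIR = Path("tests/browser/features")  # assumption; adjust to the repo


def scenarios(feature_file):
    """Yield (tags, scenario name) pairs from one Gherkin feature file."""
    pending_tags = []
    for line in feature_file.read_text(encoding="utf-8").splitlines():
        stripped = line.strip()
        if stripped.startswith("@"):
            # Tag line immediately preceding a scenario, e.g. "@smoke @integration".
            pending_tags = stripped.split()
        elif stripped.startswith("Feature:"):
            # Don't carry feature-level tags over to the first scenario.
            pending_tags = []
        elif re.match(r"Scenario( Outline)?:", stripped):
            yield pending_tags, stripped.split(":", 1)[1].strip()
            pending_tags = []


if __name__ == "__main__":
    for path in sorted(FEATURE_DIR.glob("*.feature")):
        for tags, name in scenarios(path):
            print("\t".join([path.name, " ".join(tags) or "-", name]))

The output (file, tags, scenario name) is deliberately flat so it can be pasted into a wiki table and paired with feature rows by hand.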

Event Timeline

Restricted Application added a subscriber: Aklapper. · Oct 24 2016, 2:50 PM
bmansurov added a subscriber: ovasileva.

@ovasileva I nominate this for sprint +1.

ovasileva triaged this task as Medium priority. · Nov 2 2016, 7:37 PM
ovasileva moved this task from Triaged but Future to Upcoming on the Readers-Web-Backlog board.

Is this a spike? Needs timebox.

I'm not sure I understand what is expected here. How do we determine browser test coverage? A feature, e.g. language, can have browser tests, but they might not cover (sometimes for good reason) parts of that feature.

Is this expected to be a one-off activity or is the outcome something we expect to maintain over a longer period of time?
How would we keep such a thing up to date? Where does such a document live?

Certain features such as Nearby don't have browser tests as the browser test infrastructure doesn't support them. How are we hoping to document these?

Is this a spike? Needs timebox.

Sure this can be a spike. We can agree on the timebox as a group.

I'm not sure I understand what is expected here. How do we determine browser test coverage? A feature, e.g. language, can have browser tests, but they might not cover (sometimes for good reason) parts of that feature.

I think we can enumerate the things the language overlay feature is supposed to provide and check whether they are covered by browser tests. If not, we'd need to find out why not.

Is this expected to be a one-off activity or is the outcome something we expect to maintain over a longer period of time?

It's a start. I think the goal is to keep a living document of features and their coverage in browser tests.

How would we keep such a thing up to date? Where does such a document live?

The document would live on a wiki page and be linked to from the README inside the browser tests folder. Since adding a new feature already takes effort from multiple parties, including design, the PO, etc., updating this document will be trivial compared to the work that goes into creating the feature itself. So updating the document will be one of the requirements of adding or removing a feature.

Certain features such as Nearby don't have browser tests as the browser test infrastructure doesn't support them. How are we hoping to document these?

We'd give the reasoning for why a feature may not be covered by browser tests in the document on the wiki. Since we don't have this document yet, we cannot see the big picture of what the infrastructure makes possible and what it doesn't. The document would in turn push us to improve our infrastructure (although that's not the main goal here).

Another way we could handle this is to actually write the browser test scenario but add a skip tag so that it is not executed. Thoughts?

That's also a good idea.
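
If we go with the skip-tag idea above, the coverage document could distinguish scenarios that exist but are not executed. A minimal sketch, assuming a hypothetical @skip tag and the (tags, name) pairs produced by a scenario-listing script like the one sketched in the description:

def coverage_status(tags):
    """Classify a scenario for the coverage table.

    @skip is a hypothetical tag name; the point is only that a skipped
    scenario should show up as "documented but not run" rather than being
    silently counted as coverage.
    """
    return "documented (skipped)" if "@skip" in tags else "executed"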

phuedx added a subscriber: phuedx. · Apr 13 2017, 4:53 AM

I'm moving this to Triaged but Future as it's been in Sprint +1 for 5 months.

After looking at this again, I'm not sure I understand how browser test coverage can be evaluated.
Let's take the search overlay as an example. We have browser tests for:

  • checking you can open and close it
  • clicking search results
  • performing a search
  • using the full text search

How can we evaluate whether this is covering all scenarios? I could imagine all sorts of scenarios for this...

For instance, it doesn't cover:

  • Unicode searches
  • RTL searches
  • zero search results
  • searches from the Talk namespace
  • etc. (hopefully you get the idea; I can dream up all sorts of scenarios for the search feature)

Something like unit test coverage seems very tangible: you can count all the code paths exercised. But I'm not sure what this looks like for browser tests.

Open questions

  • How do we define a feature?
  • How do we evaluate coverage?
For reference, here are the search scenarios we currently have:

Background:
   Given I am using the mobile site
     And I am in beta mode
     And the page "Selenium search test" exists
     And I am on the "Main Page" page
     And I am viewing the site in mobile mode
     And I click the search icon
     And I see the search overlay

 Scenario: Closing search (overlay button)
   When I click the search overlay close button
   Then I should not see the search overlay

 Scenario: Closing search (browser button)
   When I click the browser back button
   Then I should not see the search overlay

 @smoke @integration
 Scenario: Search for partial text
   When I type into search box "Selenium search tes"
   Then search results should contain "Selenium search test"

 Scenario: Search with search in pages button
   When I type into search box "Test is used by Selenium web driver"
     And I see the search in pages button
     And I click the search in pages button
   Then I should see a list of search results

 Scenario: Search with enter key
   When I type into search box "Test is used by Selenium web driver"
     And I press the enter key
   Then I should see a list of search results

 Scenario: Going back to the previous page
   When I type into search box "Selenium search tes"
   When I click a search result
   When I click the browser back button
   Then I should not see '#/search' in URL

 @integration
 Scenario: Search doesn't break after one search result
   When I type into search box "Selenium search tes"
   And I click a search result
   And the text of the first heading should be "Selenium search test"
   And I click the search icon
   And I type into search box "Main Page"
   Then search results should contain "Main Page"

How do we define a feature?
How do we evaluate coverage?

In general I'd only care about the UX in browser tests. I wouldn't worry about searching specific namespaces or Unicode characters; those features can be unit-tested. However, it's important to verify that the UX works for both LTR and RTL. So, for me a feature can be "clicking on a search result takes the user to a page" or "clicking on full-text search takes the user to the search page". Once we identify these features, we look at the browser tests and see whether they cover them.

I understand that this is not something set in stone and we could argue either way, but I hope that we identify major user workflows and cover them in browser tests.
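
As a sketch of what one block of such a matrix could look like for the search overlay, using the scenario names from the feature file quoted above (the workflow labels, the grouping, and the uncovered "RTL" row are illustrative assumptions, not an agreed feature list):

# Illustrative only: user-facing workflows on one axis, covering scenarios
# on the other. An empty list makes a coverage gap visible at a glance.
SEARCH_OVERLAY_COVERAGE = {
    "opening and closing the overlay": [
        "Closing search (overlay button)",
        "Closing search (browser button)",
    ],
    "typing updates the list of search results": [
        "Search for partial text",
    ],
    "clicking a search result takes the user to a page": [
        "Going back to the previous page",
        "Search doesn't break after one search result",
    ],
    "full-text search takes the user to the search page": [
        "Search with search in pages button",
        "Search with enter key",
    ],
    "searching in RTL languages": [],  # no covering scenario yet (assumption)
}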

With regard to RTL/LTR: this could be solved by running two browser test jobs on different domains, so let's think about that separately (I also don't think browser tests would help us much there; they wouldn't have helped with T163059).
There is a trade-off to writing more browser tests: the suite gets slower.
So I want to make sure we are all on the same page before taking this task on.

To get started on this task, however, it would be useful to have a common definition of "should this be covered in a browser test".
A litmus test for this might be: "Does this new browser test reveal something new about how I can interact with the interface?"
Take the search overlay as an example.
If I click a search result and get taken to a page, that's new; but if I click the second result, that's the same workflow as the first.
If I type characters into the search box, the search results change, but I don't care what happens when I type in more characters (unless typing 50 characters makes Nyan Cat appear, for example).

"Does this new browser test reveal something new about how I can interact with the interface?" might be a little broad... I'm not sure. What do you think?

Just wanted to +1 the general idea that (browser-based) acceptance tests are very high level – by construction as they test via the outermost boundary of the system, the user interface – and, generally speaking, only test the happy path. Since they take so long to run, I'm always wary of adding more, and will always ask whether they can be replaced by integration or even unit tests.

As regards LTR/RTL support, we might consider randomising the interface language at the page object level, so that it's hidden away from the tests themselves but is still enough to catch an error every now and again. This would mean less duplication, at the risk of a seemingly random failure, but I'd argue that the risk is small. There should also be a mechanism to set the interface language, should a test target a particular language direction.
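
A minimal sketch of that idea (in Python rather than the browser tests' own stack, purely for illustration): the language choice lives in the page object, individual tests stay direction-agnostic, and an environment variable with a hypothetical name pins a specific language when a test needs one. uselang is a real MediaWiki URL parameter; the base URL and language list are assumptions.

import os
import random

# Mix of LTR and RTL interface languages to sample from (assumption).
LANGUAGES = ["en", "he", "ar", "fa"]


def interface_language():
    """Pick the interface language for a test run.

    BROWSER_LANGUAGE is a hypothetical override so a test that targets a
    particular direction can set the language explicitly; otherwise we
    randomise, hiding the choice from the tests themselves.
    """
    return os.environ.get("BROWSER_LANGUAGE") or random.choice(LANGUAGES)


class ArticlePage:
    BASE_URL = "https://en.m.wikipedia.org/wiki/"  # assumption

    def url_for(self, title):
        # uselang is a standard MediaWiki URL parameter that overrides the
        # interface language for a single request.
        return f"{self.BASE_URL}{title}?uselang={interface_language()}"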

pmiazga added a subscriber: pmiazga. · May 9 2017, 3:34 PM

@Jdlrobson: should this still be in needs analysis? The conversation here seems to have stalled a bit; perhaps it requires a separate sync.

Jdlrobson lowered the priority of this task from Medium to Low. · Jun 9 2017, 8:02 PM

Yep, we need to talk about this, but right now there are bigger fish to fry! We'll get there...! :)

bmansurov removed bmansurov as the assignee of this task.Jul 5 2017, 7:55 PM