Exempla Docent - testing UI for Suggested edits module
Part 2

In Part 1 of Exempla Docent for QA practices, some approaches to testing the ORES articletopic model were explored. This post, Part 2, presents an overview of testing the Suggested edits (SE) module - the UI that surfaces the ORES articletopic logic to users (more info on Newcomer tasks on Special:Homepage).

Note: Special:Homepage with the SE module is enabled by default for new accounts on participating wikis - for example, testwiki, cswiki, svwiki, or arwiki. On the beta cluster, Special:Homepage is enabled not only on the mentioned wikis but also on enwiki. For existing user accounts, Special:Homepage can be enabled via the Newcomer homepage option in the User profile tab of Special:Preferences.

From a practical point of view, there are two broad avenues of testing:

  • checking that all UX design specs are implemented (including testing user workflows)
  • checking the essentials: cross-browser testing, mobile testing, translation, accessibility

... and making sure that everything makes sense!

The scrutiny of exploratory QA testing starts with understanding the problems we need to solve, and with analyzing whether the implementation delivers the solution. What is the Suggested edits module? It provides guidance and support to newcomers, from selecting a task (an article to edit) to publishing the edits. A newcomer workflow includes the following steps:

  • Intro/selection of topics and difficulty levels
  • Suggested edits
  • Edit Suggested edits
  • Post-edit dialog

Well, it's a lot to test. To illustrate the QA analysis process, let's take the simplest example - the first screen of the intro tour for the SE module. This is what newcomers see when they come to Special:Homepage - the intro tour gives them some general information along with options to select topics of interest.

(Figures: Overall view; QA analysis view)

The QA analysis view includes both testing avenues mentioned above - checking UX design specs and checking the essentials - thus providing a sort of mental map that helps to prioritize and structure the testing.

Let's proceed to the next level. There are quite a few ways for the user to interact, and all of them should be tested, of course. For example, testing the selection of SE topics yields a whole cluster of possible tests, illustrated in the table below:

| Selection | Cases that should be handled |
| --- | --- |
| one topic | no articles exist for this topic; check topic label translations - too long/too short |
| two topics from different categories | the very first one and the very last one; one topic that does not have articles |
| all topics in one category | check for possible performance issues |
| all topics | check for possible performance issues |
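The selection rows above can be turned into a small parametrized fixture for test runs. A minimal sketch, with topic and category names invented purely for illustration:

```python
# Hypothetical topic catalogue, grouped by category (all names invented).
topics_by_category = {
    "culture": ["art", "music", "literature"],
    "science": ["biology", "physics"],
}

# Flatten into an ordered list so "first" and "last" are well defined.
all_topics = [t for ts in topics_by_category.values() for t in ts]

# Selection cases mirroring the table: one topic, a pair spanning
# categories (the very first and very last), every topic in one
# category, and all topics at once.
selection_cases = {
    "one topic": [all_topics[0]],
    "first and last topics": [all_topics[0], all_topics[-1]],
    "all topics in one category": topics_by_category["culture"],
    "all topics": all_topics,
}

for name, selection in selection_cases.items():
    print(f"{name}: {selection}")
```

Each entry can then drive the same UI check (select the topics, verify the article feed and the counter), which keeps the cases from the table in sync with what actually gets exercised.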

Plus, there is a counter! The counter shows how many articles match the user's selection. What if an article fits into two topics? Would it be counted twice? If it is not counted twice, would it be confusing for a user to see that the number of articles for several topics is lower than the number of articles for one topic?
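Whether the counter treats multiple selected topics as OR or as AND is exactly the kind of assumption worth challenging. A small sketch (with article IDs and topic names invented for illustration) shows how each interpretation changes the expected number:

```python
# Hypothetical per-topic article sets; IDs and topic names are made up.
articles_by_topic = {
    "art": {101, 102, 103},
    "music": {103, 104},  # article 103 fits both "art" and "music"
}

def naive_count(topics):
    """Counts an article once per matching topic (double-counts 103)."""
    return sum(len(articles_by_topic[t]) for t in topics)

def union_count(topics):
    """Counts each article once if any selected topic matches (OR)."""
    return len(set().union(*(articles_by_topic[t] for t in topics)))

def intersection_count(topics):
    """Counts only articles that match every selected topic (AND)."""
    return len(set.intersection(*(articles_by_topic[t] for t in topics)))

print(naive_count(["art", "music"]))         # 5 - double-counted
print(union_count(["art", "music"]))         # 4 - OR semantics
print(intersection_count(["art", "music"]))  # 1 - AND semantics
```

Comparing the number the UI displays against these three interpretations is a quick way to pin down which semantics the counter actually implements - and whether that matches what users would expect.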

Such questions are, in fact, the essence of exploratory testing - "Unchallenged assumptions are dangerous." The best QA practice is to remember that. Happy testing, everyone.

Written by Etonkovidova on Nov 30 2020, 4:31 AM.

Event Timeline

Interesting article. Where does the quote "Unchallenged assumptions are dangerous" come from? Is it widely used? Search engines mostly find it in slides for schools of software testing talk.

I've added the source for the quote - it's from a Context-Driven School presentation.