Section Translation uses a new Vue.js-based architecture that may bring new opportunities (and possibly also limitations) to our testing approach.
This ticket is about investigating how to best apply QA practices in this context, now that initial versions of the development are starting to become visible on the testing server.
Event Timeline
I don't foresee any difficulties when it comes to testing manually; the process should be the same as for CX:
- devs write the code
- code is merged and deployed to cx2-testing
- QA tests
- code is deployed to prod
- QA tests in prod
One thing I'd like to understand, though: how will the user find SX?
- is it based on user agent? CX for Desktop, SX for Mobile
- will it be a user choice? CX for full article translation, SX for section translation only
- will it totally replace CX?
When it comes to devices for testing, I have access to browserstack.com and crossbrowsertesting.com, but I would like at least one physical device each for Android and iOS.
I have an Android phone but don't have an iOS one.
That part of the process looks good and I think we should keep doing it. What I'm concerned about is:
- Regressions. Changes by our team, other teams, browsers, or operating systems may break aspects that were tested when first developed. We should not rely on user reports to discover that publishing suddenly fails or that a menu has become unreachable on the screen.
- Issues that are hard to reproduce. When users report a given issue, it is often hard to recreate the state that led to it.
Since we are using a more modern architecture, I'd be interested in the team checking what we can do to make the QA process better. Which techniques for test automation and for facilitating manual tests can we apply? What are the blockers for applying some of the more interesting ones?
Note that since we are in active development on this front, this may be a good time to implement anything that can facilitate testing. So it may also be a good time for you to reflect on the tickets that went through "Needs input for QA" to identify which kind of input you needed often and how it can be provided more easily (preferably without depending on another human's availability). For example, a tool to inspect existing translations that shows X, W, Z information.
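As a starting point for that discussion, here is a minimal sketch of the kind of component-level test the Vue.js architecture makes possible, using @vue/test-utils with Jest. The SectionSelector component, its props, and the CSS class are hypothetical placeholders, not actual Section Translation code:

```
// Hypothetical component test: mount a section-picker component in
// isolation and assert that it renders one entry per section.
import { mount } from '@vue/test-utils';
import SectionSelector from './SectionSelector.vue';

describe( 'SectionSelector', () => {
	it( 'renders one entry per translatable section', () => {
		const wrapper = mount( SectionSelector, {
			propsData: {
				sections: [ 'History', 'Geography', 'Culture' ]
			}
		} );
		expect( wrapper.findAll( '.cx-section-item' ).length ).toBe( 3 );
	} );
} );
```

Tests like this run in Node without a browser, so they could be a way around some of the CI difficulties that affect full browser tests.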
> One thing I'd like to understand, though: how will the user find SX?
> - is it based on user agent? CX for Desktop, SX for Mobile
Section translation has different parts:
- Core workflow. The steps to select which section you want to translate. These are implemented in a responsive way: the same code goes to all devices, and the layout may differ depending on the screen size.
- Editor. For creating the translation of the selected section, an editor is needed. On desktop the idea is to reuse the Content Translation editor (but loading only one section). On mobile, we are building a new mobile editor. Access to the mobile editor will be based on device type (I guess that means "user agent") in the same way that people get to the mobile version of Wikipedia (see the simplified sketch below).
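To make the device-type routing concrete, here is a simplified illustration of the idea. This is not the actual MediaWiki/MobileFrontend detection logic, and the editor names are hypothetical:

```
// Simplified sketch: pick an editor based on a user-agent check.
// Real mobile detection in MediaWiki is handled by MobileFrontend
// and is considerably more involved.
function pickEditor( userAgent ) {
	const isMobile = /Mobi|Android|iPhone|iPad/i.test( userAgent );
	// Desktop reuses the Content Translation editor for a single
	// section; mobile gets the new mobile editor.
	return isMobile ? 'sx-mobile-editor' : 'cx-section-editor';
}

console.log( pickEditor( navigator.userAgent ) );
```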
> - will it be a user choice? CX for full article translation, SX for section translation only
In general, yes. From the translation dashboard users can create a new article or expand an existing one. Expanding an article will be done using Section Translation.
Note that Content Translation is not available on mobile. There, section translation will also be used for creating new articles (one section at a time).
> - will it totally replace CX?
No. Section translation is a set of new features of Content Translation. It will extend the tool with a better workflow for expanding articles and an editor for mobile. The whole tool will be perceived by the user as Content Translation. The Translation Dashboard will also be refactored to include specific entry points for section translation and to update the technology stack.
Hope this is useful. Feel free to ask more questions if anything else needs clarification.
> Regressions. Changes by our team, other teams, browsers, or operating systems may break aspects that were tested when first developed. We should not rely on user reports to discover that publishing suddenly fails or that a menu has become unreachable on the screen.
The only way to cover this is with automated tests, which we already know we have difficulties with due to our current CI system.
I have access to browserstack.com and crossbrowsertesting.com to test on a multitude of browsers, but I won't be doing manual regression tests every day.
We will always have issues in production due to the nature of our architecture.
Also, we can't/shouldn't run automated tests in production and our cx2/local environment is very different from production.
We will see what we can do on T259604.
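For reference, a minimal sketch of the kind of automated regression check discussed above, written for WebdriverIO (the framework used for MediaWiki browser tests). The CSS selector is a hypothetical placeholder; Special:ContentTranslation is the real dashboard entry point:

```
// Hypothetical browser-level regression test: the translation
// dashboard loads and its entry point for a new translation is
// visible, so we notice breakage before users report it.
describe( 'Content Translation dashboard', () => {
	it( 'shows the new-translation entry point', async () => {
		await browser.url( '/wiki/Special:ContentTranslation' );
		const newTranslation = await $( '.cx-create-new-translation' );
		await newTranslation.waitForDisplayed();
		expect( await newTranslation.isDisplayed() ).toBe( true );
	} );
} );
```

Whether this can run in CI, and against which environment, is exactly the open question tracked in T259604.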
> Note that since we are in active development on this front, this may be a good time to implement anything that can facilitate testing. So it may also be a good time for you to reflect on the tickets that went through "Needs input for QA" to identify which kind of input you needed often and how it can be provided more easily (preferably without depending on another human's availability). For example, a tool to inspect existing translations that shows X, W, Z information.
Can't think of anything apart from what we already have.
> Hope this is useful. Feel free to ask more questions if anything else needs clarification.
It is! Thank you!
Perfect. Then it seems that the main focus of work on this front will be covered in T259604: Test automation for Section Translation: investigate integration in CI infrastructure.
Thanks for all the details!