Paste P9624

Standardizing QA best practices notes

Authored by Etonkovidova on Nov 13 2019, 10:24 PM.
Wikimedia Technical Conference
Atlanta, GA USA
November 12 - 15, 2019
Session Name / Topic
Standardizing QA best practices
Session Leader: Elena; Facilitator: Aubrey; Scribe: Brennen
https://phabricator.wikimedia.org/T234653
Session Attendees:
Niharika Kohli, Željko Filipin, Volker Eckl
Notes:
* Elena: QA engineer at the Foundation, working at WMF for 5 years. Every project and every team has different deadlines and different processes
* Exploratory testing: Opinions differ widely on exploratory testing - some think it's manual repetition of checklists - boring, and nobody wants to do it. Others think that exploratory testing is somewhat randomly going from page to page and clicking around. It's neither of those things.
* [2] The mystery of exploratory testing
* What needs to be tested?
* Functionality
* Languages
* Accessibility
* Performance
* Browsers/devices... (one way these dimensions combine is sketched below)
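Not from the slides, but a minimal sketch (in Python, with pytest and requests) of how the dimensions above multiply into a test matrix. The non-English beta-cluster subdomains here are assumptions for illustration; only the en wiki comes up later in the session.

```python
import pytest
import requests

# Two of the dimensions above; browsers/devices would multiply the matrix further.
LANGUAGES = ["en", "de", "he"]  # assumed beta-cluster wikis; "he" adds RTL coverage
PAGES = ["Special:RecentChanges", "Special:Version"]

@pytest.mark.parametrize("lang", LANGUAGES)
@pytest.mark.parametrize("page", PAGES)
def test_page_renders(lang, page):
    # Functionality x language: every page should at least render in every language.
    url = f"https://{lang}.wikipedia.beta.wmflabs.org/wiki/{page}"
    resp = requests.get(url, timeout=30)
    assert resp.status_code == 200
```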
* [3] When QA support is needed
* Planning
* Design
* Implementation
* Deployment
* Maintenance
* Why is QA needed in planning?
* Volker: For the MVP, to define what the most important things are
* E: Not enough to have a UX designer involved?
* V: There's still uncharted territory in the product that a QA person has an easier time identifying. A QA person comes more from a user's perspective and less from a planner's or designer's view.
* E: And that user perspective would be different from UX designer because QA from previous experience has dealt with many bugs from user feedback. It's also beneficial for QA to evaluate the scope of future testing and risks involved in the product.
* Design
* Everything from above
* Mockups
* Ž: In a previous life I did a lot of exploratory testing - the value of a tester in the design phase is that devs and PMs are focused on making things work, while testers have experience in the ways that things tend to break.
* E: How things fit into the broader picture of features, etc. Software (especially at WMF) tends to grow organically, historically, and naturally. Things should fit. You can lose track of where you were (example: Kaiser Permanente).
* Implementation & Deployment
* E: Involvement of QA is less controversial here.
* Maintenance
* [4] E: When we have a total absence of QA support [tree swing cartoon, "How the customer explained it...", etc.]
* [5] What makes software good?
* [6] 3 principles
* Understand the underlying problem before attempting to solve it
* Things should be simple and intuitive
* Acknowledge that a user is not like you
* E: Important to have a broader perspective
* There will be an exercise in exploratory testing.
* [7/8] "Happy families are all alike; every unhappy family is unhappy in its own way"
* "Anna Karenina principle"
* Above, but s/families/software/g
* [9] How we communicate failures to users
* [10] Cheerfully informing users about new features
* [11] No matter what...
* Internal error
* [12] "Something went wrong"
* ...more slides...
* Discussion of maybe-googleable error messages and their relative utility
* These slides cover a spectrum of quality in communicating errors to users
* Ž: Do you have a collection of these?
* E: Yes I do.
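A minimal sketch of the pattern this discussion points toward (hypothetical code, not from the slides): pair the generic user-facing message with a short, searchable error ID and keep the technical detail in the logs.

```python
import logging
import uuid

log = logging.getLogger("app")

def user_facing_error(action: str, exc: Exception) -> str:
    """Log the technical detail; give the user something actionable and searchable."""
    error_id = uuid.uuid4().hex[:8]  # short, "maybe-googleable" reference
    log.error("error_id=%s action=%s", error_id, action, exc_info=exc)
    # Worst case: "Something went wrong" - nothing for the user or support to go on.
    # Better: name the failed action, say what was (not) lost, and give a reference.
    return (f"{action} failed (error ID: {error_id}). "
            f"Nothing was saved; please try again or report the error ID.")
```

The error ID is what turns "Something went wrong" into something a user can quote in a bug report and an engineer can search for in the logs.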
* [Exploratory testing exercise]
* Special:RecentChanges ( https://en.wikipedia.beta.wmflabs.org/wiki/Special:RecentChanges )
* Any language wiki you're familiar with
* No need to log in
* Look at page from POV of exploratory testing. Especially look at functionality of features.
* Write 3 or 4 points of what should be tested.
* Try to be in the mindset of exploratory testing.
* Niharika: I found a bug.
* Scrollbars under Safari.
* E: What if you found something that would cancel the whole implementation of the feature?
* Ž: Can you give us more guidance?
* E: I have 3 points I will present in 5 minutes.
* E: This is a typical situation - you're presented with redesigned filters here - what do you think we should test?
* Ž: I'm lacking context to be useful.
* E: So that's the first question - would you need documentation, specs? Very often we're lacking both. This is why it's important to participate in planning and design so you have context.
* [discussion of filter features]
* [discussion of RecentChanges use cases in finding vandals, etc.]
* Ž: [running brain dump on testing process]
* N: Different clients...
* E: Natural testing.
* V: I just found an error in highlighting.
* E: So we see here the enormous set of possibilities for testing.
* [slide] What needs to be tested...
* ORES filters - thresholds
* Filter combos - do we communicate sufficiently that some filter combinations don't make sense? Some combinations shouldn't produce any results (see the sketch after this list).
* Combination of ORES filters and logged actions
* [demo of this and resultant error message]
* The old filter functionality - is all previous functionality preserved?
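As a rough illustration of the filter-combination point (an assumed approach, not shown in the session), the MediaWiki action API behind the page can be spot-checked directly; a contradictory combination should fail loudly rather than silently return nothing. This assumes the standard api.php endpoint on the beta cluster.

```python
import requests

API = "https://en.wikipedia.beta.wmflabs.org/w/api.php"

def recent_changes(show):
    """Fetch Special:RecentChanges data with the given rcshow filters."""
    params = {
        "action": "query",
        "list": "recentchanges",
        "rcshow": show,
        "rclimit": 10,
        "format": "json",
    }
    return requests.get(API, params=params, timeout=30).json()

# A sane filter returns a result set ...
assert "query" in recent_changes("!bot")
# ... while a self-contradictory combination ("bot" and "not bot") should
# surface an explicit API error instead of an empty, confusing result.
assert "error" in recent_changes("bot|!bot")
```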
* [slide] The mystery of exploratory testing
* [slide] Phabricator screenshot - workboard including Code Review, QA, Design Review, ... columns
* [slide] Task with specs - checkboxes for various components of design
* [slide] Daily tracking spreadsheet with tasks and needs / actions / followups
* [slide] Deployments calendar
* E: How is exploratory testing done with regard to deployments?
* [slide] Trains
* [slide] Changelog - auto-generated for each release
* [slide] Logstash / Kibana fatal monitor
* [slide] Phabricator bug triage
* Especially important to have regular bug triage meetings - highly structured feedback
* [slide] QA engineer needs
* to provide a helpful perspective
* to be satisfied as an information provider, not a gatekeeper
* to adapt to an iterative methodology
* to function with a minimum of formal specs
* [slide] Context driven testing
* start from project's needs
* design to fit project
* some practices might be irrelevant or counterproductive
* test practices can't be independent of context
* takes needs of the specific project / situation into account
* context-driven-testing.com - "there are no best practices" - flexibility is key
* E: Last, what didn't I talk about?
* Ž: 2 comments
* Context-driven testing is a school of software testing - there are 4 or 5 of these; you would have gotten different material from different schools of thought
* Factory school, quality assurance school, analytical school - all have different ideas about software testing
* Keep in mind that this is just part of what software testers think
* Exploratory testing is just one of the tools - a big practice of the context-driven school
* N: Would also depend on the project
* Ž: Yeah, or the industry - NASA, for example, has very rigorous needs, because otherwise something blows up
* V: Agile?
* V: Would be interested in perspective on tester needs - we have very little insight into how QA engineers go about testing - would love to see accessibility testing become part of QA
* [discussion of accessibility, utility of human perspective in workflow]
* Ž: Usability testing is a specialization.
* E: We should think about it.
* E: 2 things:
* Should automation be part of exploratory testing?
* Instrumentation / tools (one possible example sketched below)
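One possible answer to the instrumentation question, sketched under assumptions (Chrome-specific console logging via Selenium; a local chromedriver): record browser console errors in the background while the tester explores manually, then review them at the end of the session.

```python
from selenium import webdriver

options = webdriver.ChromeOptions()
options.set_capability("goog:loggingPrefs", {"browser": "ALL"})
driver = webdriver.Chrome(options=options)

driver.get("https://en.wikipedia.beta.wmflabs.org/wiki/Special:RecentChanges")
input("Explore the page manually, then press Enter to dump console errors... ")

# Surface JavaScript errors that occurred during the exploratory session.
for entry in driver.get_log("browser"):
    if entry["level"] == "SEVERE":
        print(entry["timestamp"], entry["message"])
driver.quit()
```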
QUESTIONS
- Should UI automation testing be part of exploratory testing?
- What instrumentation / tools could be used in exploratory testing?
ACTION
Continue the discussion in the broader context of the Engineering Productivity team
Start a discussion of specialized testing - e.g. accessibility: how to make it part of QA testing?
Get feedback on what is missing in team processes regarding QA support