Design a Test-Driven Development (TDD) survey
Open, Low, Public


In a recent thread, @Jdlrobson suggested we do a survey to further explore the waning maintenance of browser-test suites among teams.

would it make sense to do a survey as you did with Vagrant to understand how our developers think of these? Such as who owns them... who is responsible for a test failing... who writes them... who doesn't understand them.. why they don't understand them etc...?

Some other questions I can think of:

  • How valuable is the current unit-test infrastructure to the health/quality of a software project?
    • Please explain your answer
    • What would make them more useful?
  • How valuable is the current browser-test infrastructure to the health/quality of a software project?
    • Please explain your answer
    • What would make them more useful?
  • How much experience do you have with TDD?
  • Would you like more time to learn or practice TDD?
  • How often do you write tests when developing a new feature?
    • What kinds of tests? (% of unit tests vs. browser tests)
  • How often do you write tests to verify a bugfix?
    • What kinds of tests? (% of unit tests vs. browser tests)
  • When would you typically write a unit test?
    • Before implementation
    • After implementation
    • When stuff breaks
  • When would you typically write a browser test?
    • During conception
    • Before implementation
    • After implementation
    • When stuff breaks
  • What are the largest barriers to writing/running unit tests?
    • Test framework
    • Documentation/examples
    • Execution time
    • CI
    • Structure of my code
    • Structure of code I depend on
  • What are the largest barriers to writing/running browser tests?
    • Test framework
    • Written in Ruby, a language I do not know
    • Documentation/examples
    • Execution time
    • CI
  • What jobs does Jenkins currently do for you on +2? (e.g. running QUnit tests, PHPUnit tests, phpcs, jshint)
  • What jobs would you like Jenkins to do for you on +2?
  • What are the largest barriers to debugging test failure?
    • Test framework
    • Confusing errors/stack traces
    • Documentation/examples
    • Debugging tools
  • Who is responsible for debugging test failures?
    • Engineers responsible for extension / codebase
    • Product team owning extension / codebase
    • QA team
    • Everyone
  • Does your extension have browser tests?
  • Do you know where to find browser test jobs for the extension you own?
  • How do you know when a browser test fails?
    • E-mail
    • Someone opens a bug
    • Visit integration website
    • Don't know
    • Other
  • Rate the importance of these to your development practices
    • Having a manual tester
    • Voting Jenkins builds
    • Non-voting Jenkins builds
    • Gruntfile / Makefile
    • pre-commit and pre-review hooks (Please explain answers to this question)
  • How much do you trust a browser test failure to be an indication of a failure in your software?

Event Timeline

dduvall raised the priority of this task from to Needs Triage.
dduvall updated the task description. (Show Details)
dduvall added subscribers: dduvall, Jdlrobson.
Jdlrobson set Security to None.

I added more questions.

TDD in general is not in need of surveying. It is a non-optional practice in our engineering department. Whether one literally writes tests before code within a single commit is a personal preference, but the following is mostly enforced:

  • New code needs tests.
  • Tests are written by the same developers.
  • Tests in need of updating are updated as part of the same commit that would otherwise break them.
  • Tests are passing at all times.

It goes without saying that a violation of this is considered an honest mistake, and pointing it out will be interpreted as a friendly reminder. Researching this would still be interesting, but I don't think it would help the current situation. Code coverage isn't 100%, but the overall practice is established.

The attitude towards browser tests, however, is distanced from this. I don't have any data at the moment, but I would expect that having tests written externally results in tests not covering the feature in an appropriate or desired manner (e.g. hardcoded details, incorrectly asserted behaviour, a partial reflection of some state).

A few years ago I was originally approached to implement this. I was going to start work on it after finalising JavaScript unit testing with TestSwarm (which works great for the jQuery Foundation but didn't for us, though we found another way). The project got too big, however, and I was re-assigned to VisualEditor and various MediaWiki features.

I'm glad it didn't end there and we now have a QA team!

Nonetheless, I do have a few ideas for how I would have approached this. Its difference from the current stack may serve as a source of inspiration and/or as a possible cause for alienation.

The following seem like natural attractions to me. I believe most of this is done already (in no small part thanks to Chris McMahon and Dan), but here goes:

  • Have the necessary software be easy to install (no more than one programming language plus a package-install command). For example: have Node.js v0.10+ and run npm install; or have PHP 5.4+ and run composer install; or have Python 2.7 and run tox.
  • Tests must be runnable against local dev MediaWiki installs with local/real browsers and no configuration other than MW_DB/MW_INSTALL_PATH or MW_SERVER/MW_SCRIPT_PATH in the environment.
  • Tests must not rely on pre-existing state in the wiki. Any sample data they need, they set up and tear down as part of the run (API query? DB query? MediaWiki maintenance script?).
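The runnability requirement above can be sketched as a small wrapper script. This is a sketch under assumptions: the default port, the script path, and the commented-out cucumber invocation are illustrative, not the actual suite layout of any extension.

```shell
#!/bin/sh
# Sketch: run a browser-test suite against a local MediaWiki install.
# MW_SERVER / MW_SCRIPT_PATH are the only configuration the tests
# should need; everything else is defaulted here.

: "${MW_SERVER:=http://localhost:8080}"  # local dev wiki (assumed port)
: "${MW_SCRIPT_PATH:=/w}"                # conventional MediaWiki script path
export MW_SERVER MW_SCRIPT_PATH

echo "Target wiki: ${MW_SERVER}${MW_SCRIPT_PATH}/index.php"

# Illustrative invocation only -- the actual suite and tags vary
# per extension:
# bundle exec cucumber --tags @chrome
```

The point is that a developer with a working local wiki should need nothing beyond these two variables to run the suite.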

Not requirements, but a few practices and ideas that come to mind:

  • Primary code coverage in unit tests, not browser tests. E.g. you wouldn't have a browser test assert large numbers of different inputs for the search suggestions dropdown. The OpenSearch interface would be unit tested in PHP for different queries and responses. The front-end logic would be unit tested with mock data in JavaScript. The browser test verifies with one or two inputs that the integrated code paths and UI work as expected. This also has the added bonus of speed.
  • Fast tests (ideally all) run before merging, using local browsers on Linux (e.g. Chrome/Firefox) against a localhost install of MediaWiki plus selected extensions (maybe the same extension group as the phpunit mediawiki-extensions suite). External services (Beta, SauceLabs) should not influence the pre-merge build, and local browsers are also faster.
  • All tests run immediately post-merge on every commit, against a similar localhost install, using a wider set of browsers (via SauceLabs; an external service is fine post-merge). One would presumably debounce builds so that there is no queue build-up during rush hour, yet keep runs immediate (not on a timer) so that Jenkins can associate regressions with one or more commits.
  • Non-destructive tests could be run periodically against a dozen beta or staging wikis, as well as production wikis, for smoke-testing purposes: to catch influence from gadgets, wmf configuration changes, and miscellaneous extensions that don't have tests in the CI suite. Every 6-12 hours should be realistic.
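A minimal sketch of the periodic smoke run described above, assuming a hypothetical wiki list and hypothetical @smoke/@destructive tags (none of this is real job configuration):

```shell
#!/bin/sh
# Sketch: periodic, non-destructive smoke run against beta and
# production wikis. The wiki list and tag names are illustrative.

SMOKE_INTERVAL_HOURS=12   # "every 6-12 hours should be realistic"

for wiki in \
    https://en.wikipedia.beta.wmflabs.org \
    https://en.wikipedia.org
do
    export MW_SERVER="$wiki" MW_SCRIPT_PATH="/w"
    echo "Smoke run against ${MW_SERVER} (every ${SMOKE_INTERVAL_HOURS}h)"
    # Only non-destructive scenarios would run here, e.g.:
    # bundle exec cucumber --tags @smoke --tags 'not @destructive'
done
```

In practice a scheduler (Jenkins timer, cron) would drive the loop; the read-only tag filter is what makes it safe to point at production wikis.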

Aklapper renamed this task from Design a TDD survey to Design a Test-Driven Development (TDD) survey. Mar 31 2015, 10:38 AM

@zeljkofilipin could be a good question for the survey? "Does Wikimedia have a QA team?" :)
The fact that so few engineers have joined this conversation is problematic in itself.

> @zeljkofilipin could be a good question for the survey? "Does Wikimedia have a QA team?" :)

It is easy to answer, if you just take a look at the staff page. :)

greg triaged this task as Low priority. Sep 24 2015, 1:26 AM
greg moved this task from INBOX to Backlog (ARCHIVED) on the Release-Engineering-Team board.