
Create a test runner for end-to-end API tests (Phester)
Closed, Invalid · Public · Story Points: 3

Description

Requirements:

  • The runner can be invoked from the command line (an example invocation follows this list)
  • Required input: the base URL of a MediaWiki instance
  • Required input: one or more directories to scan for test specs
  • The test runner executes each test case against the given API and reports any responses that fail to match the expected results
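
For illustration only, an invocation might look like the following; the command name and option syntax are hypothetical, not a settled interface:

  phester --base-url https://devwiki.example.test/w/ tests/api/ more-tests/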

Functional outline:

  • Find all test definitions (including fixture definitions). Build a map of names to files.
  • Resolve the dependency graph and determine execution order (sketched below).
  • Run the test suites in sequence, according to the order determined above.
    • Run the test cases within a suite in any order, or in parallel.
      • Execute the requests within each test case in sequence.
  • While executing tests, maintain a map of global variable values.
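
The dependency resolution and ordering step above could be a plain depth-first topological sort over fixture dependencies. A minimal sketch in PHP follows; the function name and data structures are assumptions for illustration, not a settled design.

  <?php
  /**
   * Order suites so that each one runs after the fixtures it depends on.
   *
   * @param array<string,string[]> $deps map of suite/fixture name => names it depends on
   * @return string[] names in a valid execution order
   */
  function resolveExecutionOrder( array $deps ): array {
      $order = [];
      $state = []; // per name: 'visiting' or 'done'
      $visit = function ( string $name ) use ( &$visit, &$state, &$order, $deps ) {
          if ( ( $state[$name] ?? null ) === 'done' ) {
              return; // already placed in the order
          }
          if ( ( $state[$name] ?? null ) === 'visiting' ) {
              throw new RuntimeException( "Circular fixture dependency involving '$name'" );
          }
          $state[$name] = 'visiting';
          foreach ( $deps[$name] ?? [] as $dep ) {
              $visit( $dep ); // dependencies first
          }
          $state[$name] = 'done';
          $order[] = $name;
      };
      foreach ( array_keys( $deps ) as $name ) {
          $visit( $name );
      }
      return $order;
  }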

Rationale for a declarative approach to defining tests (in YAML):

  • not bound to a specific programming language (PHP, JS, Python)
  • keeps tests simple and "honest", with well-defined input and output, no hidden state, and no loops or conditionals
  • Easy to parse and process, and thus to port away from, or to use as a front-end for something else.
  • YAML is JSON compatible. JSON payloads can just be copied in.
  • Creating a good DSL is hard, and evolving one is harder. YAML may be a bit clunky, but it's generic and flexible (an illustrative example of a test spec follows this list).
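
As an illustration only (the spec format is still to be designed, so every key shown here is a hypothetical assumption), a YAML test definition might look roughly like this:

  suite: edit-page
  fixtures:
    - logged-in-user   # hypothetical reference to a shared fixture
  tests:
    - description: Creating a page via action=edit reports success
      request:
        method: POST
        path: api.php
        parameters:
          action: edit
          title: Phester test page
          text: Hello, world!
          format: json
      response:
        status: 200
        body:
          edit:
            result: Success

Since YAML is JSON compatible, the expected body could equally well be pasted in as a raw JSON payload.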

Additional notes and considerations:

  • The test runner should be implemented in PHP. Rationale: it is intended to run in development environments used to write PHP code. Also, we may want to pull it into the MediaWiki core project via Composer at some point.
  • Use the Guzzle library for making HTTP requests (see the sketch after this list)
  • The test runner should not depend on MediaWiki core code.
  • The test runner should not hard code any knowledge about the MediaWiki action API, and should be designed to be usable for testing other APIs, such as RESTbase.
  • The test runner should ask for confirmation that it is acceptable for information on the given target wiki to be damaged or lost (unless --force is specified)
  • Fixtures (known system state) are created by running a number of API requests in sequence. These requests are specified in exactly the same way as tests.
    • We may want to make some boilerplate setup re-usable, to avoid copying the same tests over and over.
  • A test can consist of several requests. All requests of one test are run in sequence, but several tests may run in parallel. Output generation should be designed in a way that accommodates this and avoids garbling.
  • No cleanup (tear-down) is performed between tests. The entire target wiki is expected to be discarded after the test run is complete.
  • We will need a way to declare fixtures, that is, to specify sequences of requests that need to be run, and a way for tests to declare which fixtures they depend on. The fixtures must be run before the test, but only once.
    • Tests must not modify resources generated by such global fixtures. They must themselves create any resources they intend to modify.
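
As a rough sketch of the Guzzle-based request execution mentioned above: the spec keys ('method', 'path', 'parameters', 'response') are hypothetical, and checking is reduced to the status code for brevity.

  <?php
  use GuzzleHttp\Client;

  require 'vendor/autoload.php';

  /**
   * Execute one request from a test spec and check the expected status code.
   */
  function runRequest( Client $client, array $spec ): bool {
      $response = $client->request(
          $spec['method'] ?? 'GET',
          $spec['path'] ?? '',
          [
              'query' => $spec['parameters'] ?? [],
              'http_errors' => false, // treat error responses as results to check, not exceptions
          ]
      );
      $expected = $spec['response']['status'] ?? 200;
      return $response->getStatusCode() === $expected;
  }

  // The base URL comes from the command line (see Requirements above).
  $client = new Client( [ 'base_uri' => 'https://devwiki.example.test/w/' ] );
  $ok = runRequest( $client, [
      'method' => 'GET',
      'path' => 'api.php',
      'parameters' => [ 'action' => 'query', 'meta' => 'siteinfo', 'format' => 'json' ],
      'response' => [ 'status' => 200 ],
  ] );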

Rough implementation roadmap:

  1. Baseline: requirements as above, except:
    1. no scanning; test files are listed as command-line arguments
    2. no support for fixtures
    3. no support for variables
    4. no regex matches
    5. only plain text console output (or ANSI, if we want to be fancy)
  2. Add support for regex matches
  3. Add support for variables (a combined sketch of regex matching and variable substitution follows this list)
  4. Add support for fixtures and variable export.
    1. Execution order becomes relevant, needs dependency resolution.
  5. Add recursive directory scan
  6. Add ability to filter tests by tags
  7. Add test discovery based on extension.json files
  8. Add JSON output
  9. Add HTML output
  10. Add support for parallel test execution
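
Roadmap items 2 and 3 could be as small as the sketch below; the {{name}} placeholder syntax and the convention that a leading '/' marks an expected value as a regular expression are assumptions invented for illustration.

  <?php
  // Substitute {{name}} placeholders with values from the global variable map.
  function interpolate( string $value, array $vars ): string {
      return preg_replace_callback(
          '/\{\{(\w+)\}\}/',
          fn ( array $m ) => $vars[$m[1]] ?? $m[0],
          $value
      );
  }

  // Compare an actual value against an expected one; expected values
  // starting with '/' are treated as regular expressions.
  function matches( string $expected, string $actual ): bool {
      if ( $expected !== '' && $expected[0] === '/' ) {
          return (bool)preg_match( $expected, $actual );
      }
      return $expected === $actual;
  }

  $vars = [ 'pageTitle' => 'Phester test page' ];
  assert( interpolate( 'title={{pageTitle}}', $vars ) === 'title=Phester test page' );
  assert( matches( '/^Success/', 'Success: page created' ) );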

Related Objects

[29 related tasks: one Open, three Resolved (assigned to Clarakosi), 25 Invalid; task titles not preserved.]

Event Timeline

daniel created this task. Apr 2 2019, 2:51 PM
daniel updated the task description. Apr 2 2019, 3:32 PM
hashar awarded a token. Apr 3 2019, 5:51 PM
hashar added a subscriber: hashar.
greg added a subscriber: greg. Apr 3 2019, 8:38 PM
WDoranWMF set the point value for this task to 3. Apr 15 2019, 3:12 PM
hashar removed a subscriber: hashar. Apr 15 2019, 4:04 PM
daniel renamed this task from "Create a test runner for end-to-end API tests" to "Create a test runner for end-to-end API tests (Phester)". Apr 16 2019, 1:45 PM
daniel updated the task description.
daniel updated the task description.
daniel updated the task description. Apr 16 2019, 2:30 PM
Eevans added a subscriber: Eevans. May 14 2019, 6:18 PM

> Additional notes and considerations:
> [ ... ]
>
>   • The test runner should not hard code any knowledge about the MediaWiki action API, and should be designed to be usable for testing other APIs, such as RESTbase.

I'd like to strongly +1 this, and add that it should likewise be possible to use it for availability monitoring (like we currently do with operations/software/service-checker). A framework as abstract as the one defined here would be reusable across the organization wherever functional/integration tests are needed.

greg removed a subscriber: greg. Jun 11 2019, 6:10 AM
daniel updated the task description. Jul 8 2019, 3:55 PM
CCicalese_WMF removed Clarakosi as the assignee of this task. Jul 15 2019, 9:04 PM
CCicalese_WMF triaged this task as Medium priority. Jul 16 2019, 3:56 AM