
GSoC Epic: End-to-end test coverage for Abstract Wikipedia's Wikifunctions
Open, In Progress, Medium, Public

Authored Mar 29 2023, 9:27 PM


Phabricator Task T328587

Mentors @SDunlap @Jdforrester-WMF

Proposal T333498 Google Doc

Wikifunctions-logo-en.svg.png (220×220 px, 11 KB)

An Overview of the Wikimedia Foundation

The mission of the Wikimedia Foundation is to "empower and engage people around the world to collect and develop educational content under a free license or in the public domain, and to disseminate it effectively and globally". It is dedicated to developing and maintaining the technological infrastructure that supports its mission of providing free knowledge to the world. It has been participating in GSoC since 2006, and the program has been a valuable source of new contributors to its projects.

Table of contents:

Screenshot 2023-04-04 084158.png (1×732 px, 106 KB)
Screenshot 2023-04-04 084245.png (472×1 px, 51 KB)

Personal Details

Nikhil Mahajan
UTC+5:30 | IST
Working Hours (as per IST)
12 P.M. - 4 A.M.
GitHub | Resume | LinkedIn
Project Size
Medium | 175 hrs
Project Length
Standard Coding Period (12 Weeks)
About Me

I am Nikhil Mahajan, a second-year student enrolled in the Bachelor of Technology program at the Indian Institute of Technology Roorkee. I also serve as a developer at MDG Space, my college's coding club. I am an enthusiastic developer with a passion for learning and exploring various technologies. My strong time management skills enable me to consistently deliver work within established schedules. As a fast learner with an open-minded approach, I am committed to collaborating effectively on team projects. I prioritize honesty and integrity, and I remain focused on achieving goals and maintaining consistency in my work.

React | Javascript
Activity Leaderboard | Stock Simulator
CSS | Bootstrap
Activity Leaderboard
Django | Express
MongoDB | Firebase
Selenium | Webdriverio
Insta Scrapper
Python | CPP
Machine Learning
ML Related Work
Solidity Contracts
Development Environment


  • Windows 11 + WSL2
  • Access to an iMac (macOS 12.1, Intel Core i5)


  • Visual Studio Code

I have set up MediaWiki core and the WikiLambda extension locally on my machine. The following snapshot shows that I am able to execute the e2e tests for WikiLambda:

image1.png (328×1 px, 51 KB)


Past Projects

  1. Stock Simulator -
    • React | Chart.js | Bootstrap | CSS
    • A web app that simulates the stock market. It allows users to buy and sell (fake) stocks using virtual coins.
    • It also shows the stock history of various companies, using an API to collect stock data.
    • Managed the auth state in the frontend using the useReducer and useContext hooks.
    • I was the solo developer for the frontend of this application. It is a small freelance project.
  2. Activity Leaderboard -
  3. Election Portal.
  4. I have a few mini projects as well, which can be found on my GitHub.

Selenium Experience

  1. Bot to scrape social profiles
    • Selenium WebDriver | Python | Flask | Peewee
    • It is part of the Club Noticeboard:
      1. My college has many tech and cultural clubs.
      2. The only media for conveying information from these clubs to students are social media and email.
      3. But students often miss these announcements among the bulk of other messages.
      4. The bot scrapes the public Instagram profiles requested by the user and shows only the posts from those profiles.
      5. It is a group project and still in the development phase.
    • I built the bot using Selenium WebDriver:
      1. The bot first logs into Instagram.
      2. It navigates to a public account.
      3. It scrapes all the posts.
      4. Relevant contributions: feat: Setup bot config, feat: Add insta login, feat: Add profile page, feat: Setup peewee for testing bot.

Contributions to Wikimedia

I have been making contributions to Wikimedia since January 31st, 2023.

Show Result for Non Ascii characters
This helped me get familiar with Gerrit and Phabricator.
Disable button on location permission error
I learned how to find files in such a big codebase without knowing much about it.
Fix misaligned checkboxes
This helped me get started playing around with Wikifunctions.
Implementations on Tester pages should be links
I got to know about ZIDs.
Add PublishComponent into the DefaultView screen
I got to know about the default and editor views for a ZObject.
selenium-test: change assert to expect
I learned how the extension imports MediaWiki core configurations.
tests for mixins/typeUtils new methods
I got familiar with the current approach to writing Jest tests.
Add documentation to new ZObject module methods
This made me familiar with writing comments to help further development.
Add assertion in selenium/function.js
I got to know the current status of the e2e tests better.
extensions/WikiLambda (CUJ 4) tests: Add e2e tests for tester creation (Microtask, WIP)
extensions/WikiLambda (CUJ 5) tests: Add e2e test for create implementation (Microtask, WIP)

About Wikifunctions

I have been exploring Wikifunctions for a while and have come to understand the following about it:

  • In Wikifunctions, all entities are represented as ZObjects. A ZObject may represent a function, an implementation, a tester, or a type. Creating a ZObject results in the assignment of a unique ZID to it.
  • A ZObject of type Z8 represents a function. A function can have multiple implementations and testers. Implementations need to be approved; when the function is called, it uses one of the approved implementations.
  • A ZObject of type Z4 represents a type itself. Custom types can be created using existing types as a foundation.
  • A ZObject of type Z14 represents an implementation. Implementations can be created by composing existing functions or by writing code in one of three languages: JavaScript, Python, or Lua.
  • A ZObject of type Z20 represents a tester. A tester runs against all the available implementations.
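As a rough illustration of the points above, a function ZObject can be sketched as follows. This is a hand-written, simplified sketch: real ZObjects are wrapped in a persistent object (Z2) and use typed lists, which are omitted here.

```javascript
// Simplified sketch of the shape of a function (Z8) ZObject.
// Keys follow the function model: Z8K1 = argument declarations,
// Z8K2 = return type, Z8K3 = testers, Z8K4 = implementations.
// Illustrative only -- not a complete, valid ZObject.
const zFunctionSketch = {
    Z1K1: "Z8",   // the object's type: function
    Z8K1: [],     // argument declarations (Z17 objects)
    Z8K2: "Z6",   // return type (here: string)
    Z8K3: [],     // connected testers (Z20 references)
    Z8K4: []      // connected implementations (Z14 references)
};

console.log(zFunctionSketch.Z1K1); // prints "Z8"
```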
General Questions

What is your Motivation for this project?

Google Summer of Code is an excellent opportunity to get familiar with open-source culture and get comfortable with collaboration and the trading of ideas and solutions. Beyond that, I wish to work on this project specifically because I find the concept of ZObjects, which represent a unique approach to data representation in Wikifunctions, really exciting. By contributing to the development of end-to-end tests, I will get an opportunity to examine the codebase of this vast ecosystem while crafting tests for distinct user journeys. Ultimately, I am eager to apply my skills in software testing and development to help ensure the continued success of Wikifunctions. I am motivated not only by the technical challenges this project presents but also by the prospect of creating meaningful connections with like-minded people.

Have you submitted any other proposals?

No, I have not applied to any other organizations or projects.


Overview of Wikifunctions

Wikifunctions is an innovative project that serves as a comprehensive repository of functions that individuals can develop, maintain, utilize, and call upon. The platform provides an extensive catalog of code snippets for different programming requirements, which can be seamlessly incorporated into larger programs or called directly via the WikiLambda MediaWiki extension. By enabling people to share, develop, and maintain code, Wikifunctions facilitates collaboration among developers, researchers, and other stakeholders, and promotes the efficient use of programming resources.

Current Situation

Wikifunctions is currently undergoing an extensive phase of development. While unit and integration tests have been implemented to ensure the integrity of the code, there is not yet a mechanism to verify whether the changes in a patch disrupt any user journey.

What needs to be done?

This GSoC project is a viable solution: writing end-to-end tests for the critical user journeys in Wikifunctions. These tests will run against every patch under review and fail if the patch introduces changes that break any of the user journeys. This will help avoid merging patches that mistakenly introduce bugs into the project.

Interaction with Mentors

Stef Dunlap: We had a one-on-one conversation over Google Meet where Stef clarified many of my doubts related to user journeys and the current status of end-to-end tests in Wikifunctions. We have also talked many times on the IRC channel. She has reviewed a few of my patches.

James D. Forrester: He has reviewed a few of my patches.


Deliverables

  1. End-to-end tests for all the CUJs.
  2. Improve the existing GitLab Kubernetes-based CI/CD pipelines.
  3. Smooth integration of the GitLab-based CI with Gerrit.
  4. Integrate the tests into the pipeline so that they run against any patch review on Gerrit.
  5. Complete documentation (comments and website) for the e2e tests to help further development.
  6. Biweekly blogs on project progress.



WikiLambda is an extension of MediaWiki core. The core already has e2e tests written using WebdriverIO, a JavaScript-based test automation framework, as the following snapshot from the core's package.json shows:

"devDependencies": {
    "@wdio/cli": "7.16.13",
    "@wdio/junit-reporter": "7.16.13",
    "@wdio/local-runner": "7.16.13",
    "@wdio/mocha-framework": "7.16.13",
    "@wdio/spec-reporter": "7.16.13",


  • It provides a higher-level API and additional features on top of the Selenium WebDriver API, making it easier to write and maintain automated tests for web applications.
  • It supports Cross-Browser Testing i.e. automated tests can be run on multiple browsers, including Chrome, Firefox, and Safari making it easier to ensure the application works across different browsers. Selenium Standalone Service can be installed using the command npm install @wdio/selenium-standalone-service --save-dev. The following is a snapshot from wdio.config.js.
export const config = {
    services: [["selenium-standalone"]],
    capabilities: [
        { browserName: "chrome" },
        { browserName: "firefox" },
        { browserName: "MicrosoftEdge" }
    ]
};
  • It comes with a built-in assertion library that allows making powerful assertions on various aspects of the browser or elements within the application.
  • It provides built-in support for parallel testing. Setting the maxInstances option in wdio.config.js to 4 means that WebdriverIO will run up to four browser instances in parallel.
  • @wdio/mocha-framework provides describe/it blocks, which help organize the test suite. Each describe block can have a beforeEach and afterEach hook, which will be run before/after each it instruction. It also has before and after functions, which run a one-time set of instructions before/after the entire test suite.
  • Reporters generate test reports and provide feedback on the outcome of the tests. They generate output in various formats such as console output, HTML reports, and XML reports. WebdriverIO comes with several reporters:

spec: Prints test results to the console in a user-friendly format.
allure: Generates HTML reports that are interactive and easy to read.
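Putting the parallelism and reporter points together, the relevant configuration fragment might look like the following. This is a sketch with illustrative values, not the project's actual configuration.

```javascript
// wdio.conf.js (fragment) -- illustrative values only
export const config = {
    // run up to four browser instances in parallel
    maxInstances: 4,
    // print results to the console and emit JUnit XML reports for CI
    reporters: [
        "spec",
        ["junit", { outputDir: "./test-reports" }]
    ]
};
```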

All the points mentioned above make WebdriverIO a perfect choice for WikiLambda as well. Some e2e tests in WikiLambda are already written using it.


What if tests are designed to be interdependent:

Designing the tests to be interdependent means building the e2e tests on top of one another:

  1. CUJ-2: Test A will create a function called funcA.
  2. CUJ-5: Test B will create the implementation called implA for funcA.
  3. CUJ-4: Test C will create the tester called testerA for funcA.
  4. CUJ-6: Test D will connect implA and testerA to funcA.
  5. CUJ-1: Test E will evaluate the function funcA ( created by Test A ).
  6. CUJ-3: Test F will edit the function funcA.


  • The presence of interdependencies among the tests can be clearly seen.
  • Series testing: Test cases can be executed in a specific order, and any dependencies between them can be easily managed. The above test cases need to be executed in series in the order Test A -> B -> C -> D -> E -> F.
  • Failure in one test will cause failure in the subsequent tests.
  • This also makes it challenging to pinpoint the specific cause of a test failure.
  • Increased complexity: Having tests depend on each other makes the testing code more complex and harder to maintain. Changes to one test can affect other tests, making it harder to isolate and fix issues.
  • Tests cannot be executed in parallel, which increases the time for the overall testing process.


The alternative is to design tests that are independent of one another:

  • Debugging: When a test fails, it's much easier to identify the problem when the test is independent. You can quickly identify which test failed and why, rather than having to dig through multiple tests to find the root cause.
  • Parallel execution: Independent tests can be run in parallel, which can significantly reduce the overall test execution time.
  • Flexibility: Independent tests are more flexible and easier to modify. You can change the order in which they are run, add new tests, or remove existing ones without worrying about the impact on other tests.
  • Accuracy: Independent tests are more accurate as they are not impacted by any side effects caused by previous tests. This means that you can rely on the results of each test to accurately reflect the state of the system under test.

The CUJ code samples that follow are written to keep the tests independent.

General Discussions

Locator strategies

Locator strategies refer to the techniques used to locate elements on a web page for automation testing purposes. The code is undergoing frequent changes. The e2e tests may break down easily if weak locator strategies are used. Therefore, it is important to locate the elements in the browser based on the unique characteristic of the element so that e2e tests do not break down. It will ensure reliable and sustainable testing. WebdriverIO supports a variety of locator strategies:

  1. CSS Query Selector.
  2. Element with certain text or containing certain text.
  3. XPath.
get pageFirstHeading() { return $('#firstHeading'); }
get labelsTable() { return $('table[aria-label="Labels"]').$('tbody'); }
get functionCallBlock() { return $('div[aria-labelledby="ext-wikilambda-ztester_call-label"]'); }
get validateBlock() { return $('div[aria-labelledby="ext-wikilambda-ztester_validator-label"]'); }
get publishButton() { return $('button=Publish'); }
get publishButtonDialog() { return $('#publish-dialog').$('button=Publish'); }
get ImplementationStatus() { return $('span.ext-wikilambda-tester-result__footer-status'); }

/**
 * @param {number} rowNumber should be a positive integer
 * @return row of the Labels Table
 */
getRowFromLabelsTable(rowNumber) {
	return this.labelsTable.$$('tr')[rowNumber];
}

getLabelFromLabelsTable(rowNumber) {
	return this.getRowFromLabelsTable(rowNumber).$$('td')[1].$('input.ext-wikilambda-zstring');
}

Assertions

Assertions are statements that evaluate whether the expected outcome or behavior of an application under test matches the actual outcome or behavior during test execution.

When to write assertions:

  1. Write assertions to ensure that the expected page has loaded.
  2. When testing user interactions such as clicking on a button or entering text into a field, write assertions to ensure that the expected behavior occurs.
  3. When using waits, write assertions to ensure that the expected element is visible or has a specific value after the wait.
  4. When testing form submission, such as publishing a new or edited ZObject, write assertions to ensure that the expected form submission behavior occurs (e.g., a success message appearing).

Flaky tests

Flaky tests are tests that produce inconsistent results (passing in one run and failing in another) when executed repeatedly, without any changes to the code or application. They can lead to false positives or false negatives. Some ways to avoid writing flaky tests:

  1. Ensure test environment consistency.
  2. Using explicit waits helps to ensure that the test waits for a specific element to be available before executing the next step. This helps to avoid situations where a test tries to interact with an element that has not yet loaded, resulting in a flaky test.
  3. Use stable selectors that do not change frequently. Avoid using selectors that are likely to change due to UI updates.
  4. Keep tests focused and independent.
  5. Regularly review and maintain your test suite. This includes reviewing logs and test results to identify patterns and root causes of flakiness.
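The explicit-wait idea in point 2 can be sketched as a generic polling helper. This is a sketch (the name `waitFor` is hypothetical); in real WebdriverIO tests, the built-in `browser.waitUntil()` and `element.waitForDisplayed()` serve this purpose.

```javascript
// Generic polling helper: repeatedly evaluates a predicate until it
// returns true or the timeout elapses. Hypothetical utility, shown only
// to illustrate how explicit waits avoid interacting with elements
// that have not yet loaded.
async function waitFor(predicate, timeoutMs = 5000, intervalMs = 100) {
    const deadline = Date.now() + timeoutMs;
    while (Date.now() < deadline) {
        if (await predicate()) {
            return true;
        }
        // pause before polling again
        await new Promise((resolve) => setTimeout(resolve, intervalMs));
    }
    throw new Error(`waitFor timed out after ${timeoutMs} ms`);
}
```

A test would call this with a condition like "the result block is visible" instead of sleeping for a fixed duration, which is a common source of flakiness.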

Example ZObjects to be used for e2e tests

  1. Create, evaluate and edit function: if (Z802), because this function has a more complex input argument structure, so it helps to test many things at a time.
  2. Create, edit type: Argument declaration (Z17), because this type has a more complex key structure.
  3. Create, edit implementation: an implementation for Boolean equality (Z844), because this function has both code and composition implementations.
  4. Create, edit tester: a tester for if (Z802).

Folder Structure

Current directory structure:

  • selenium/ contains all the e2e test files.
  • pageObjects/ contains files with a separate JavaScript class for each page or section of the web application. Each class contains methods for interacting with that page's elements. This pattern separates the test code from the page-specific code, making it more maintainable, readable and reusable.
  • specs/ contains the test files that WebdriverIO will execute. The framework (Mocha) looks for test files in this folder by default.
  • wdio.config.js is the configuration file used by WebdriverIO to set up the testing environment and define various settings for running tests.
  • componentObjects/ contains page components or UI elements that are shared across multiple pages of the application. For example:
  • Publish Dialog Box

image2.png (213×1 px, 16 KB)

  • Labels Block

image3.png (477×775 px, 41 KB)

Proposed Changes:

  • Currently, pageObjects/ contains two files related to function pages: one representing the function page in view mode (which should be renamed accordingly) and one representing the function page in create or edit mode. These two files should be grouped inside a functions/ folder, making it easier for developers to locate the files related to function pages.
  • utils/ will contain utility functions or helper modules that can be reused across the test suite. These utility functions are not specific to a particular page or test case.

Screenshot 2023-04-04 072348.png (883×1 px, 152 KB)

- tests/
    |- selenium/
        |- componentObjects/
            |- PublishDialogBox.component.js
        |- pageobjects/
            |- functions/
            |- implementations/
            |- testers/
            |- types/
        |- specs/
            |- function.spec.js
            |- implementation.spec.js
            |- test.spec.js
            |- type.spec.js
        |- utils/
            |- basic.util.js
        |- wdio.conf.js


<Page>View: Represents the page in view mode.
<Page>Form: Represents the page in create or edit mode.
The files <Page>View and <Page>Form are grouped inside <Page>/.
function.spec.js: create, edit and evaluate a function.
test.spec.js: create a tester, connect the tester to a function.
implementation.spec.js: create an implementation, connect the implementation to a function.
type.spec.js: create and edit a type.

Critical User Journeys

There are several user journeys in Wikifunctions; the following CUJs are the critical ones.

Pre Specifications
  • The base URL is http://localhost:8080/wiki.
  • The following steps are repetitive and will be referred to as Publish and Confirm in this proposal:
    1. Click on the Publish button.
    2. Click on the Publish button in the dialog box.
    3. Explicitly wait (up to a timeout).
    4. Confirm that the publish is successful.
  • Login to Wikifunctions: The test will go through the following steps:
    1. Open `/wiki/Special:UserLogin` in the browser.
    2. Enter the username and password.
    3. Hit the Login button.
    4. Explicitly wait and confirm the login is successful.

Sample Code:

const LoginPage = require("wdio-mediawiki/LoginPage.js");

it("should be able to login", async function () {
    await LoginPage.loginAdmin();
    // If successful, redirected to Main Page
    await expect(browser).toHaveUrlContaining("Main_Page");
});
  • Randomly generated labels should be used when writing tests: if we use a fixed label (for example, 'e2e-zobject'), the test fails when run more than once on the same machine, because Wikifunctions does not allow creating two ZObjects of the same type with the same label. Random labels also avoid label clashes during multiple retries of tests.

Sample Code:

function getRandomLabel(prefix) {
    // The suffix is illustrative: a timestamp keeps labels unique across runs.
    return `${prefix}-${Date.now()}`;
}
The function should be written inside utils/basic.util.js.

  • <ZObjectTest> is the general ZObject used in any particular e2e test.
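The repetitive Publish and Confirm steps described above could similarly be factored into a shared helper in utils/. The sketch below illustrates the idea; the page-object method names (clickPublish, clickDialogPublish, waitForSuccess) are assumptions for illustration, not the extension's actual API.

```javascript
// Sketch of a shared "Publish and Confirm" helper (e.g. in utils/basic.util.js).
// Takes a page object and performs the four repetitive steps in order.
// Method names on `page` are hypothetical.
async function publishAndConfirm(page) {
    await page.clickPublish();        // 1. click the Publish button
    await page.clickDialogPublish();  // 2. confirm in the dialog box
    await page.waitForSuccess();      // 3-4. wait and confirm publish succeeded
}
```

Each spec file could then call `publishAndConfirm(CreateFunctionPage)` instead of repeating the three calls, keeping the tests shorter and the steps consistent.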
CUJ 1: Evaluate a function

The test will go through the following steps:

  1. Open the /wiki/Special:ListZObjectsByType.
  2. Find function type ( Z8 ) link and click on it.
  3. Find the label of <ZObjectTest> and click on it.
  4. Set the input values according to the function.
  5. Click on the call function button.
  6. Confirm that the results are shown.

Sample Code:

const ListZObjectsByType = require("../pageobjects/"),
    FunctionPage = require("../pageobjects/functions/");

it("should evaluate a function", async function () {
    const ListFunctions = await ListZObjectsByType.openFunctionsList();
    await ListFunctions.openFunction("<ZObjectTest>");
    await expect(await ListFunctions.FunctionTitle).toHaveText(
        "function label"
    );
    await FunctionPage.callFunction();
    // call the function with valid type and input
    await expect(FunctionPage.result).toHaveText("input", {
        message: 'The response should be "input"'
    });
});

Current Status:

All the work specified in CUJ 1 has been done.


We can extend the test to evaluate the function using different implementations:

  1. Disapprove all the approved implementations.
  2. Check if the call function button is disabled or not.
  3. Now approve any one of the implementations.
  4. Again evaluate the function.
CUJ 2: Create a function definition

The test will go through the following steps:

  1. Open the /wiki/Special:CreateZObject?zid=Z8.
  2. Set the label field to a unique value.
  3. Set the input type and label.
  4. Click on Add another input button to add more inputs and repeat the 3rd step.
  5. Set the output type and label.
  6. Publish and Confirm.

Sample Code:

const CreateFunctionPage = require("../pageobjects/functions/"),
    FunctionPage = require("../pageobjects/functions/");

describe("Function creation", function () {
    it("create a function", async function () {
        await CreateFunctionPage.setLabel("Add two numbers");
        await CreateFunctionPage.setInput("Number", "num1"); // Type, Label
        await CreateFunctionPage.addAnotherInput();
        await CreateFunctionPage.setInput("Number", "num2");
        await CreateFunctionPage.setOutput("Number"); // Type
        await CreateFunctionPage.publish();
        // If successful, redirected to /wiki/<ZObjectTest>?success=true
        await expect(browser).toHaveUrlContaining("success=true");
    });

    it("should display the function label", async function () {
        await expect(await FunctionPage.label).toHaveText("Label");
        // assertions to check for label in another language
    });

    it("should display input and output type", async function () {
        // assertions to check that input and output types are set correctly
    });
});

Current Status:

All the work specified in CUJ 2 has been done.


  1. After the function is created, there are assertions to check labels and arguments. Another assertion can be added to check the input and output types. I have already submitted a patch for this.
CUJ 3: Edit a function definition

The test will go through the following steps:

  1. Open the /wiki/<ZObjectTest>.
  2. Click on the edit button.
  3. Edit the function label and input labels.
  4. Publish and Confirm.
  5. Repeat 1st and 2nd step.
  6. Edit the function definition.
  7. Publish and Confirm.

Sample Code:

const FunctionPage = require("../pageobjects/functions/"),
    EditFunctionPage = require("../pageobjects/functions/");

describe("edit function", function () {
    it("should be able to edit label", async function () {
        await FunctionPage.editFunction();

        await EditFunctionPage.editFunctionLabel();
        await EditFunctionPage.editInputLabel();
        await EditFunctionPage.publish();
        await expect(browser).toHaveUrlContaining("success=true");
    });

    it("should display the edited label", async function () {
        await expect(await FunctionPage.labels).toHaveText("edited label");
        // assertions to check other labels
    });

    it("should be able to edit function definition", async function () {
        await FunctionPage.editFunction();
        await EditFunctionPage.editFunctionDefinition();
        await EditFunctionPage.publish();
        await expect(browser).toHaveUrlContaining("success=true");
    });

    it("should display the function with edited definition", async function () {
        // assertions to check updated function definition
    });
});

Current Status:

Currently, the test only edits the function by removing labels. It can be extended to:

  1. Change the label.
  2. Add more labels.
  3. Edit the function definition.
  4. Add and remove names in other languages.
CUJ 4: Create testers

The test will go through the following steps:

  1. Open the /wiki/<ZObjectTest>.
  2. Go to the details tab.
  3. Click on Create a new test.
  4. Set the label field to a unique value.
  5. Set the function with the label of <ZObjectTest>.
  6. Set the function inputs.
  7. Set the validation function and its input accordingly.
  8. Publish and Confirm.

Sample Code:

const FunctionPage = require("../pageobjects/functions/"),
    CreateTesterPage = require("../pageobjects/testers/"),
    TesterPage = require("../pageobjects/testers/");

describe("create test", function () {
    it("should be able to create test", async function () {
        await FunctionPage.switchToDetailsTab();
        await FunctionPage.openCreateTest();
        await CreateTesterPage.setLabel("creating-test");
        await CreateTesterPage.setFunction("<ZObjectTest>");
        await CreateTesterPage.setFunctionInput();
        await CreateTesterPage.setResultValidation();
        await CreateTesterPage.publish();
        await expect(browser).toHaveUrlContaining("success=true");
    });

    it("should display the label", async function () {
        await expect(await TesterPage.label).toHaveText("Label");
        // assertions to check for label in another language
    });

    it("should display function block", async function () {
        // assertions to check for the function block
    });

    it("should display result validation block", async function () {
        // assertions to check for the result validation block
    });
});


  1. The CUJ does not mention editing the tester, but the test can be extended to cover editing the tester as well.

CUJ 5: Add implementations via code or composition

The test will go through the following steps:

  1. Open the /wiki/<ZObjectTest>.
  2. Go to the details tab.
  3. Click on Create a new implementation.
  4. Set the label field to a unique value.
  5. Add via Composition:
    1. Confirm that composition is selected by default.
    2. Set the function input to the already existing function.
    3. Set the arguments accordingly.
    4. Publish and Confirm.
  6. Repeat the 1st, 2nd, 3rd, and 4th steps.
  7. Add via Code:
    1. Select Code.
    2. Select programming language.
    3. Input the code to the code block.
    4. Publish and Confirm.

Sample Code:

const FunctionPage = require("../pageobjects/functions/"),
    CreateImplementationPage = require("../pageobjects/implementations/"),
    ImplementationPage = require("../pageobjects/implementations/");

describe("create implementation", function () {
    describe("create implementation via composition", function () {
        it("should be able to create", async function () {
            await FunctionPage.switchToDetailsTab();
            await FunctionPage.openCreateImplementation();

            await CreateImplementationPage.setLabel("implementation-via-composition");
            await CreateImplementationPage.selectComposition();
            await CreateImplementationPage.setFunction();
            await CreateImplementationPage.setArguments();
            await CreateImplementationPage.publish();
            await expect(browser).toHaveUrlContaining("success=true");
        });

        it("should display labels", async function () {
            await expect(await ImplementationPage.getHeading).toHaveText("label");
            // assertions to check for other labels
        });

        // assertions to check for implementation definition
    });

    describe("create implementation via code", function () {
        it("should be able to create", async function () {
            await FunctionPage.switchToDetailsTab();
            await FunctionPage.openCreateImplementation();

            await CreateImplementationPage.setLabel("implementation-via-code");
            await CreateImplementationPage.selectCode();
            await CreateImplementationPage.selectLanguage("python");
            await CreateImplementationPage.writeCode(code);
            await CreateImplementationPage.publish();
            await expect(browser).toHaveUrlContaining("success=true");
        });

        it("should display labels", async function () {
            await expect(await ImplementationPage.getHeading).toHaveText("label");
            // assertions to check for other labels
        });

        // assertions to check for implementation definition
    });
});


  1. The CUJ does not mention editing the implementation, but the test can be extended to cover editing implementations via code and composition as well.
CUJ 6: Connect an implementation or tester to a function

The test will go through the following steps:

  1. Open the /wiki/<ZObjectTest>.
  2. Go to the details tab.
  3. Connect implementation
    1. Search for the implementation under the implementations table.
    2. Check the box in front of it.
    3. Click on the approve button.
    4. Confirm that the state changed to Approved.
  4. Connect test
    1. Search for the test under the test cases table.
    2. Check the box in front of it.
    3. Click on the approve button.
    4. Confirm that the state changed to Approved.

Sample Code:

const FunctionPage = require("../pageobjects/functions/");

describe("connect implementation", function () {
    it("should be able to connect implementation", async function () {
        await FunctionPage.switchToDetailsTab();

        await FunctionPage.searchImplementation();
        // search for implementation created for ZObjectTest
        await FunctionPage.implementationCheckBox();
        await FunctionPage.implementationApprove();
        // click on Approve button under Implementations
        await expect(FunctionPage.implementationState).toHaveText("Approved");
    });
});

describe("connect tester", function () {
    it("should be able to connect test", async function () {
        await FunctionPage.searchTest(); // search for test created for ZObjectTest
        await FunctionPage.testCheckBox();
        await FunctionPage.testApprove(); // click on Approve button under Test cases
        await expect(FunctionPage.testState).toHaveText("Approved");
    });
});
CUJ 7: Create and edit types

The test will go through the following steps:

  1. Open the /wiki/Special:CreateZObject?zid=Z4.
  2. Set the label field to a unique value.
  3. Click the plus button under the keys section.
  4. Set the type, key id and label.
  5. Set the validator accordingly.
  6. Publish and Confirm.
  7. Let the newly created type be referred to as <ZObjectTestType>.
  8. Edit type:
    1. Open the /wiki/<ZObjectTestType>.
    2. Click on the edit button.
    3. Change the label and the type definition.
    4. Publish and Confirm.

Sample Code:

const CreateTypePage = require("../pageobjects/types/"),
    TypePage = require("../pageobjects/types/"),
    EditTypePage = require("../pageobjects/types/");

describe("create type", function () {
    it("should be able to create type", async function () {
        // will redirect to /Special:CreateZObject?zid=Z4
        await CreateTypePage.setLabel();
        await CreateTypePage.addKeys();
        await CreateTypePage.setValidator();
        await CreateTypePage.publish();
        await expect(browser).toHaveUrlContaining("success=true");
    });

    it("should display labels", async function () {
        await expect(await TypePage.getHeading).toHaveText("label");
        // assertions to check for other labels
        // assertions to check for type definition
    });
});

describe("edit type", function () {
    it("should be able to edit type", async function () {
        await TypePage.edit();

        await EditTypePage.changeLabel();
        await EditTypePage.publish();
        await expect(browser).toHaveUrlContaining("success=true");
    });

    it("should display labels", async function () {
        await expect(await TypePage.getHeading).toHaveText("label");
        // assertions to check for other labels
        // assertions to check for edited type definition
    });
});

CI/CD Pipeline

Current situation

A Kubernetes-based GitLab CI pipeline has been set up to automatically build and test code changes that are submitted to Gerrit for review. The application, along with the introduced changes, is also deployed to the Kubernetes cluster, so one can interact with it and test it in a production-like environment ( example: ). The Kubernetes environment is cleared after a set time so that its resources can be reused. The .gitlab-ci.yml is available here.

  1. The stages section defines the stages of the pipeline. There are 5 stages with 7 jobs in total, as follows:
    1. configure
      • configure: Installing dependencies, configuring database connections, and setting up environment variables. This job uses the yq tool to replace the placeholder values in the values.yaml file with the actual values of the MediaWiki URL and Wikilambda reference.
      • setup-web-proxy: This job uses the mkwebproxy script to create the web proxy, and requires the K3S_OPENRC and files to be present.
    2. deploy
      • deploy: Deploying the application to a production-like environment ( kubernetes cluster ). This job uses the Helm package manager to install the environment using the configuration file created by the configure job.
    3. test
      • test: Running automated tests ( e2e tests ) on the application. This job uses the Helm package manager to run the tests and retrieve the test logs.
    4. report
      • [ report failure | report success ]: Reporting the [ failure | success ] of the pipeline. Both jobs use the curl command to make an API call to Gerrit with the appropriate message and labels.
    5. delete
      • delete: Deleting resources ( pods, services, ingress ) that were created as part of the deployment process.
  2. The variables section defines variables that are used throughout the pipeline. Some of them are user-definable, such as the Gerrit change number for the patch review or the reference of the patch under review, while others are system-defined, such as the Git submodule strategy, the namespace for the project, the release name, and the URL of the MediaWiki instance.
  3. The workflow section includes a name for the pipeline based on the Gerrit change number and the action being performed (either "run" or "stop"). The rules for the workflow specify that the pipeline should be triggered either by a trigger or by a web event.
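The layout described above can be sketched roughly as follows. This is a hypothetical outline for illustration only; the variable names and values are assumptions, not copied from the real .gitlab-ci.yml linked earlier.

```yaml
# Illustrative sketch only - names and values are assumptions, not the real file.
stages:
  - configure
  - deploy
  - test
  - report
  - delete

variables:
  # user-definable
  GERRIT_CHANGE_NUMBER: ""      # change number of the patch under review
  GERRIT_PATCHSET_REF: ""       # reference of the patch under review
  # system-defined
  GIT_SUBMODULE_STRATEGY: recursive

workflow:
  name: "change-$GERRIT_CHANGE_NUMBER-$PIPELINE_ACTION"  # action is "run" or "stop"
  rules:
    - if: $CI_PIPELINE_SOURCE == "trigger"
    - if: $CI_PIPELINE_SOURCE == "web"
```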


Proposed Improvements

  1. The file does not contain any error-handling logic, which could lead to issues if a job fails or an unexpected error occurs. Adding error handling would help catch issues and prevent the pipeline from continuing to run if something goes wrong.
  2. There is some repetition between the deploy and test jobs, such as the KUBE_CONTEXT variable and the image used. It would be better to use GitLab's extends feature to define a common template for these jobs, and then reference it in each job. This would make the pipeline definition easier to read and maintain.
  3. Currently, the test job runs the same test in the same environment every time. It would be more useful to test the patch on a matrix of different environments, such as different versions of the MediaWiki software or different Kubernetes versions. This could be achieved by defining a matrix of variables at the top level of the file, and then using them in the test job.
  4. The test job currently outputs its results to the console, which is not very useful for debugging. It would be better to output the results to a file, and then use GitLab's artifact functionality to save the file and make it available for download.
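Improvements 2-4 could be sketched in the pipeline file like this; the template name, image, and variables below are assumptions for illustration, not the project's actual configuration:

```yaml
# Illustrative sketch of improvements 2-4; names are assumptions.
.kube-job:                       # shared template referenced via `extends` (2)
  image: registry.example.org/ci-runner:latest
  variables:
    KUBE_CONTEXT: wikifunctions-ci

test:
  extends: .kube-job
  stage: test
  parallel:
    matrix:                      # run the e2e suite across environments (3)
      - MEDIAWIKI_BRANCH: ["REL1_39", "master"]
  script:
    - helm test "$RELEASE_NAME" --logs | tee test-results.log
  artifacts:                     # keep logs downloadable from GitLab (4)
    when: always
    paths:
      - test-results.log
```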


Documentation

Writing good technical documentation and comments in the code is essential to help others understand the code and contribute to it. The following are some ways to write effective documentation and comments:

  1. Use clear and concise language to explain the code. Avoid using technical jargon or complex sentences that can be difficult to understand.
  2. Provide examples to illustrate how the code works in practice.
  3. Explain the purpose of the code. This can help others understand why the code exists and how it is relevant.
  4. Do not wait until the end to write documentation and comments. Add them when writing the code to ensure that everything is documented properly.
  5. Use consistent formatting for comments and documentation to make it easier to read and understand.
  6. Use descriptive names for variables, functions, and other components of the code. This can help others understand what each component does and how it fits into the code.
  7. If you made a specific decision while writing the code, explain why that decision was made. This can help others understand the approach and make better contributions to the project.
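As a small illustration of these guidelines, here is a hypothetical helper (not part of the WikiLambda codebase): note the descriptive name, the doc comment stating the purpose, and the inline comment explaining why a decision was made.

```javascript
/**
 * Build a unique label for a test ZObject so that repeated e2e runs
 * do not collide with objects created by earlier runs.
 *
 * @param {string} base - Human-readable prefix, e.g. "function-one-string".
 * @param {number} runId - Identifier of the current test run.
 * @return {string} Label such as "function-one-string-42".
 */
function uniqueLabel(base, runId) {
    // A timestamp would also work here, but an explicit run id keeps
    // the label reproducible when debugging a specific run.
    return `${base}-${runId}`;
}

console.log(uniqueLabel("function-one-string", 42)); // prints "function-one-string-42"
```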


Timeline

Community Bonding Period
4 May - 28 May
Discuss with the mentor about the project plan.
Get more familiar with the current GitLab-based CI pipeline in Wikifunctions and discuss improvements.
Consult with mentors about the guidelines to write documentation in Wikifunctions.
Learn any new tech if required after discussions.
Write a blog post about the experience during the community bonding period and the conclusions of the various discussions, and share the final project plan.
Coding Phase Started
29 May - 4 June
Evaluate function.
Unavailable 8th June - 11th June
5 June - 18 June
Create, edit function definition, write blog about the progress.
19 June - 25 June
Implement CUJ 4 ( Create, edit testers )
26 June - 2 July
Implement CUJ 5 via composition ( Create, edit Implementation via composition ).
3 July - 9 July
Write Documentation, Buffer period to finish up any pending tasks, write blog.
Mid Term Evaluation
10 July - 14 July
Submit Mentor evaluation, Mentor will submit my evaluation.
Final Phase
15 July - 21 July
Implement CUJ 5 via code ( Create, edit Implementation via code ).
22 July - 28 July
Implement CUJ 6 ( Connect implementation, tester to function ).
29 July - 4 Aug
Implement CUJ 7 ( create, edit types ), write blog.
5 Aug - 11 Aug
Improve current CI pipeline.
12 Aug - 18 Aug
Complete Improvements on CI pipeline, integrate e2e tests in pipeline.
18 Aug - 21 Aug
Complete Documentation, write blog about the project conclusion and experience in the GSoC.
Final Evaluation
21 Aug - 28 Aug
Buffer period to finish up any pending tasks, Polish up the project, Submit Mentor Evaluation.
28 Aug - 4 Sep
Mentor will submit my final evaluation.

I will be unavailable from 8th June to 11th June due to personal reasons, and I have no other major commitments during the rest of the GSoC period. My college's summer vacation runs from 4th May to 14th July. Even once classes resume (from 14th July), they take no more than 8 hours a week, so I will be able to devote more than sufficient time to GSoC. I have designed my timeline with all these engagements in mind.

Post GSoC

I intend to keep doing my part in improving the vast ecosystem that is the Wikimedia Foundation after GSoC. The skills I acquire during this period will help me contribute to other open-source communities too. And, of course, I will continue working on this project beyond the GSoC timeframe: polishing it, optimizing it, and maintaining it. I would also be happy to help mentor the next generation of open-source enthusiasts interested in this project and in Wikifunctions in general.

Event Timeline

renamed this task from WIP to [Proposal]: End-to-end test coverage for Abstract Wikipedia's Wikifunctions. Mar 29 2023, 10:22 PM · updated the task description.

@SDunlap, could you have a look over my proposal?

@Jdforrester-WMF, could you provide me with your valuable feedback? I have submitted my proposal on the GSoC website as well.

Please do not ping people without including content/question/request - it's not a good use of anybody's time to guess what's wanted.

Thanks a lot.

Hi, as the deadline for GSoC is quickly approaching in less than 48 hours (April 4th, 2023, 18:00 UTC), it's crucial that you submit your proposal on Phabricator and Google's program website in the recommended format as soon as possible. To avoid any potential last-minute rushes or server failures, we highly recommend that you submit your proposal early and keep updating it as needed before the deadline. Once you have submitted your proposal, please move it from the "Proposals in Progress" column to the "Proposals Submitted" column on the Phabricator workboard by simply dragging it. If you have any inquiries, please do not hesitate to ask. Good luck with your application!

Jdforrester-WMF renamed this task from [Proposal]: End-to-end test coverage for Abstract Wikipedia's Wikifunctions to GSoC Epic: End-to-end test coverage for Abstract Wikipedia's Wikifunctions.May 9 2023, 11:35 AM

Change 920343 had a related patch set uploaded (by; author:

[mediawiki/extensions/WikiLambda@master] e2e: Add tests for "create a new test"

Change 923582 had a related patch set uploaded (by; author:

[mediawiki/extensions/WikiLambda@master] e2e: Add tests for "Create implementation"

Jdforrester-WMF changed the task status from Open to In Progress.Aug 10 2023, 4:55 PM
Jdforrester-WMF triaged this task as Medium priority.