
Wikimedia Technical Conference 2019 Session: Continuous Delivery/Deployment in Wikimedia: The Future of the Deployment Pipeline
Closed, Resolved, Public

Authored By: debt
Oct 4 2019, 3:30 PM
Referenced Files
F31063450: T234641-nice to have.JPG
Nov 12 2019, 10:05 PM
F31063446: T234641-action items.JPG
Nov 12 2019, 10:05 PM
F31063444: T234641-must have.JPG
Nov 12 2019, 10:05 PM

Description

Session

  • Track: Deploying and Hosting
  • Topic: Continuous Delivery/Deployment in Wikimedia: The Future of the Deployment Pipeline

Description

A quick check-in on the state of affairs of the Deployment Pipeline work and, more interestingly, the plans, upcoming challenges, and expected timelines.


Post-event summary:

Important set of requirements:

  • Should be fast
  • Make the various tests configurable/gateable so that only a subset can be run if required.
  • MediaWiki support is missing and is needed.

Post-event action items:

  • Investigate how to approach MediaWiki support.
  • Identify which parts of the Add a wiki process are related to the deployment pipeline
  • Integration tests support should be added.

Session Attendees
Piotr, James, Amir A., Lars, Jeena, Brennen, Nick, Florian, Giuseppe, [name], [name], ...

Notes:

  • T: Check-in on state of affairs on the deployment pipeline work; walk-through on what exists, and we want your feedback on what we haven't considered.
  • [1] What it is.
    • Repeatable way to build, test, promote, release software - currently implemented using Jenkins, Groovy, and a lot of duct-tape®
    • Insufficient for self-serve, but component for self-serve CI and continuous delivery/deployment
  • Goals: Get people familiar with current state. Get feedback
  • Stats: 15 projects (services) in production from the pipeline (and testing), 4 more (19 total) using it for testing
  • Encouraging, but since almost all are in a single language / environment, a lot remains to be done.
    • Kask (session management service) is written in Golang.
    • Blubber itself is written in Golang and tested in its own pipeline.
    • So, overall, we're using it for projects in Node and Golang.
  • CI Abstractions:
    • .pipeline/blubber.yaml - requirements, tests, artifacts
    • .pipeline/config.yaml - how tests run
      • e.g. run linting stage in parallel with testing
      • define the tests you want to run
    • helm
    • deployment-charts
  • A: Based on k8s, but raw manifests created a lot of "YAML Engineering"... Instead we've adopted Helm: template things only once, using if/else/for to make charts somewhat re-usable
  • deployment-charts is in gerrit, releases are at https://releases.wikimedia.org/charts
  • helmfile.d contains things like non-secret keys, API keys to external services, e.g. Google Translations
  • The glue between what is being built, and what is being deployed, is not yet created, and that's what we want your input on.
  • T: Wanted to give an overview of moving a service to the pipeline
  • [SEE SLIDES OF CODE EXAMPLES]
  • 1. Define a test entrypoint - .pipeline/blubber.yml
    • take our base node.js image, run npm install on package.json in place, and finally npm test
    • similar to how travis works, etc.
  • 2. tell the pipeline to test - .pipeline/config.yml
  • 3. Let's add a linting stage as well
  • 4. Execution graph - run in parallel, for example. Could run a directed graph of dependencies to build, test, and publish your artifacts.
  • Today:
  • Everything in this example can be done right now
  • Getting it into CI involves poking Service Ops
  • Future: Future of Continuous Integration WG; has picked ArgoCI
  • Shortcomings / Known unknowns:
    • Integration tests
    • Language support
    • Security embargoes/patches - known issue
    • MediaWiki support
  • What's needed? What are the unknown unknowns? "You have a project, what's missing in the pipeline to make it happen?"
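The slide code examples mentioned above aren't captured in these notes. As a hedged sketch of what steps 1 and 2 could look like, assuming Blubber's v4 config syntax and PipelineLib's config format (field names are from memory and may not be exact; image names are illustrative):

```yaml
# .pipeline/blubber.yaml -- sketch: define a test entrypoint for a Node service
version: v4
base: docker-registry.wikimedia.org/nodejs-slim   # illustrative base image name
variants:
  test:
    node:
      requirements: [package.json]   # npm install runs against these
    entrypoint: [npm, test]
```

```yaml
# .pipeline/config.yaml -- sketch: tell the pipeline to run the test variant
pipelines:
  test:
    blubberfile: blubber.yaml
    stages:
      - name: test   # builds the "test" variant and runs its entrypoint
```

Adding a lint variant and listing both stages on one row of an execution graph is roughly how steps 3-4 (parallel lint + test) would be expressed.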

Group Stage Left

  • Postits:
  • LZ: SSH into build
  • AS: Can we build images from base images other than those in the WMF registry? -- *No*. It's a security issue
  • AS: Can I publish to other docker registries (than wmf?) -- Probably
  • Does this affect new wiki creation? -- because someone said it's related, and I don't know how... Considerations about sharding perhaps?
  • Does this affect configuration changes such as CommonSettings.php? -- yes, it'll affect how they're done and deployed. Ask James.
  • Can this help restore daily location updates? -- Yes!
  • Great documentation
  • Comprehensibility
  • Speed
  • Can the pipelines be triggered from places/events other than gerrit merges?
  • Is the pattern 1 pipeline per Git repo? (multiple artifacts)??? -- No
  • How does a non-English speaker get an idea deployed? -- That's more about the social/political decision of whether or not to deploy something. But kinda relevant, because it's about making this process better known.
    • How do we define a simple way to understand what is needed for a new feature/tool to be deployed? -- a human-readable page, translated, will make it more possible.
  • New languages and wm-projects: we need a much simpler path to going live.
  • I want it to deploy when I merge to master
  • I want it to deploy my change to a test environment after running tests?
  • Is the pipeline multi-branch, master-only, or configurable?
  • How are we going to deploy faster and more efficiently?
  • How do we roll back automatically?
  • Are there one or multiple pipelines per git repo? -- multiple
    • Can we build our own pipeline? -- ... in theory?
  • Pipeline for stuff to Toolforge would be nice. -- But doesn't use the same images. :-/
  • AS: At the moment the production docker registry contains only images that are used in production (local charts etc.), whereas labs has an image for multiple PHP versions, etc.
    • Liw: [clarifications?]
    • AS: either the pipeline has to open up to using those other images, or something else has to open up to using the same process
    • JH: blubber in the mediawiki docker registry ...
    • How many docker registries are there? -- just 2.
    • ISSUE: There's no CI build system for Toolforge
  • We tried to do that for the query service UI, but had to use images in the registry; we tried to make an nginx image but got a bit stuck making it for our pipeline. But maybe we could do the nginx build within Blubber, etc.?
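On the one-vs-multiple pipelines question above: a hedged sketch of how two pipelines with different purposes might live in one repo's `.pipeline/config.yaml` (again, field names approximate PipelineLib's syntax, and the trigger semantics noted in the comments are assumptions, not documented behavior):

```yaml
# .pipeline/config.yaml -- sketch: two pipelines, one repo
pipelines:
  test:                   # assumed to run on proposed changes
    blubberfile: blubber.yaml
    stages:
      - name: test
  publish:                # assumed to run post-merge
    blubberfile: blubber.yaml
    stages:
      - name: production
        build: production   # build the "production" variant
        publish:
          image: true       # push the built image to the registry
```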

Group Stage Right

  • "MediaWiki is the big one"
    • config
    • define expectations or requirements
  • Documentation for how the pipeline works
  • Integration tests vs. deployment?
    • Ability to choose which tests run
    • Selective test running based on patch
    • Selective extensions
  • Speed in general
  • Config injection without rebuilding containers / deploying config
  • Build steps in MediaWiki-land code. Built assets that are fed to ResourceLoader (via WebPack), removing them from the repo (so easier cherry-picks, easier development, easier rebasing).
  • Proper canary deployments - not just blue/green testing.
  • To be able to easily build with different containers (i.e. for different language versions).
  • A group here:
    • Easy possibility to run pipeline locally
    • ...and/or ability to SSH into the container and inspect the situation
    • Ability to run the tests in the same way that CI is running them
    • "If something works on my machine but fails in CI / the pipeline, I need to be able to figure out why"
  • Some form of end-to-end testing environment
  • Temporary environments that can be shared to QA / testers / etc. A link you can send to, for example, a designer to show them you've implemented their design.
    • Ability to create a test environment before merge
  • nginx can be done in blubber -- Q: if it's in the prod image registry, does Ops maintain it?
  • (Discussion of use of pipeline just for testing.)
  • Being able to test several parts of one git repo using the pipeline.
    • Pipeline defs inside a single repo where some aren't published
    • Several pipelines per repo for different purposes
  • Exercise: Divide post-its between "nice to have" and "must have".
  • [Group discussion]
    • GG: Speed came up. Localization. Speed of builds.
    • Lars: All the tests need to be run before it goes into production. But when just *trying* something as a dev, you might want to only test a single aspect, quickly.
    • GL: Average test run time?
      • JF: ::heavy sigh:: about 13 minutes when testing just MW and 30 selected repos. But complicated. Some integration tests actively break each other. We need to test all ~200 Wikimedia production repos together, but that is slow and the tests break each other.
    • Piotr: size of build artefact file?
      • … what is the concern? Pulling locally? Too much network bandwidth?
    • R: auditing of package-lock.json
    • GL: would it be ok if the pipeline would build something and *submit* a patch?
    • P: log the node version and commit patch
  • What can we prioritize on the Must Haves board?
    • GG: integration tests?
    • GL: the ability to test more than one repo together?
      • JF: the gate needs to cover all of production
    • Leszek: the whole MediaWiki thing is kinda missing. From the WMDE perspective, it'd be nice to have a planned strategy for how to get there. If we just stop at this point we're just ditching the whole pipeline idea.
      • GL: Our plan for the year was to go on with that, but it was removed from the annual plan.
      • JF: we kind of know roughly the steps, but it's a lot of experimental trial and error. Needs resourcing to actually do it, otherwise we're blocked.
        • 1. Build containers (configured with the 'right' extensions)
        • 2. Inject config into them
        • 3. Deploy them [somehow] to a k8s cluster
        • 4. Point prod traffic at the new cluster
    • AA: Identify which parts of wiki adding procedure are related to pipeline [None of them. That's just a special case of config and deployment dependency management.]
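Step 2 above (inject config) and the "config injection without rebuilding containers" post-it both point at the Helm values layer. A hedged sketch, assuming the deployment-charts/helmfile.d layout described earlier (the service name, keys, and structure are illustrative, not a real chart):

```yaml
# helmfile.d/services/example-service/values.yaml -- illustrative only
main_app:
  image: docker-registry.wikimedia.org/example-service  # hypothetical image
  version: 2019-11-12-001                               # pin a built artifact
config:
  public:
    EXTERNAL_API_KEY: not-a-secret    # non-secret keys live here in the open
  private: {}                         # real secrets are injected from elsewhere
```

Changing a value here and redeploying via helmfile would roll out new config without rebuilding the image, which is the property the post-it asks for.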

Event Timeline

Although I am really not an expert in the internals of how these things work, I am very interested in the following closely-related tasks from the point of view of end users who want to write in new languages:

The reason I'm bringing this up here is that @mark told me a few weeks ago that the development of the Deployment Pipeline is a good opportunity to improve the wiki creation process.

I don't think that I should lead the whole thing, but there should be a part properly dedicated to wiki creation, and I can take responsibility for it.

I'm very interested in this topic as I've been involved with the Deployment Pipeline work along with @dduvall (who is not attending tech conf) and @akosiaris (who is attending afaik)


I haven't seen any replies about this. The topic of wiki creation is related to Deployment Pipeline, but it deserves its own time slot for discussion because it has special considerations for the end users that may get lost in a big technical discussion about deployment. After having a brief conversation about this with @debt, I decided to be bold and created a separate task: T235520: Wikimedia Technical Conference 2019 Session: Continuous Delivery/Deployment in Wikimedia: The future of the wiki creation process.

Thanks for consideration :)

debt triaged this task as Medium priority. Oct 22 2019, 6:57 PM

(Programming note)

This session was accepted and will be scheduled.

Notes to the session leader

  • Please continue to scope this session and post the session's goals and main questions into the task description.
    • If your topic is too big for one session, work with your Program Committee contact to break it down even further.
    • Session descriptions need to be completely finalized by November 1, 2019.
  • Please build your session collaboratively!
    • You should consider breakout groups with report-backs, using posters / post-its to visualize thoughts and themes, or any other collaborative meeting method you like.
    • If you need to have any large group discussions they must be planned out, specific, and focused.
    • A brief summary of your session format will need to go in the associated Phabricator task.
    • Some ideas from the old WMF Team Practices Group.
  • If you have any pre-session suggested reading or any specific ideas that you would like your attendees to think about in advance of your session, please state that explicitly in your session’s task.
    • Please put this at the top of your Phabricator task under the label “Pre-reading for all Participants.”

Notes to those interested in attending this session

(or those wanting to engage before the event because they are not attending)

  • If the session leader is asking for feedback, please engage!
  • Please do any pre-session reading that the leader would like you to do.

Images from session on Nov 12, 2019:

must have:

T234641-must have.JPG (3×4 px, 779 KB)

action items:

T234641-action items.JPG (3×4 px, 742 KB)

nice to have:

T234641-nice to have.JPG (3×4 px, 968 KB)

Archived Etherpad for the session:

Wikimedia Technical Conference
Atlanta, GA USA
November 12 - 15, 2019

Session Name / Topic
Continuous Delivery/Deployment in Wikimedia: The Future of the Deployment Pipeline
Session Leader: Tyler + Alexandros; Facilitator: Aubrey; Scribe: Nick, Brennen, James F.
https://phabricator.wikimedia.org/T234641




Thanks for making this a good session at TechConf this year. Follow-up actions are recorded in a central planning spreadsheet (owned by me) and I'll begin farming them out to responsible parties in January 2020.